
Digital Image Processing

Week 1


Evaluation

1 (base) + 3 (course test) + 3 (lab activity) + 3 (article presentation)

> 4.5

Weeks 1–7 – courses
Week 8 – (course) test
Weeks 9–12 – lab
Weeks 13–14 – article, lab evaluation
Week 15 or 16 – article presentation


Bibliography

• R.C. Gonzalez, R.E. Woods, Digital Image Processing, Prentice Hall, 2008, 3rd ed.

• R.C. Gonzalez, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, Prentice Hall, 2003

• http://www.imageprocessingplace.com

• M. Petrou, C. Petrou, Image Processing: The Fundamentals, John Wiley, 2010, 2nd ed.

• W. Burger, M.J. Burge, Digital Image Processing: An Algorithmic Introduction Using Java, Springer, 2008


• Image Processing Toolbox (http://www.mathworks.com/products/image)

• C. Solomon, T. Breckon, Fundamentals of Digital Image Processing: A Practical Approach with Examples in Matlab, Wiley-Blackwell, 2011

• W.K. Pratt, Digital Image Processing, Wiley-Interscience, 2007


Meet Lena, the First Lady of the Internet


Lenna Soderberg (Sjööblom) and Jeff Seideman, taken in May 1997 at the Imaging Science & Technology Conference


What is Digital Image Processing?

f(x, y) = intensity (gray level) of the image at spatial point (x, y)

x, y, f(x, y) – finite, discrete quantities → digital image

Digital Image Processing = processing digital images by means of a digital computer

A digital image is composed of a finite number of elements (location, value of intensity):

(x_i, y_j) → f_ij,  (x_i, y_j, f_ij) ∈ ℝ³

These elements are called picture elements, image elements, pels, pixels.


Image processing is not limited to the visual band of the electromagnetic (EM) spectrum

Image processing: gamma rays to radio waves, ultrasound, electron microscopy, computer-generated images

image processing → image analysis → computer vision

Image processing = discipline in which both the input and the output of a process are images

Computer Vision = using computers to emulate human vision (AI) – learning, making inferences and taking actions based on visual inputs

Image analysis (image understanding) = segmentation, partitioning images into regions or objects

(link between image processing and image analysis)


Distinction between image processing, image analysis, computer vision:

low-level, mid-level, high-level processes

Low-level processes: image preprocessing to reduce noise, contrast enhancement, image sharpening; both input and output are images

Mid-level processes: segmentation (partitioning images into regions or objects), description of the objects for computer processing, classification/recognition of individual objects; inputs are generally images, outputs are attributes extracted from the input image (e.g., edges, contours, identity of individual objects)

High-level processes: "making sense" of a set of recognized objects, performing the cognitive functions associated with vision


Digital Image Processing (Gonzalez + Woods) =

processes whose inputs and outputs are images +

processes that extract attributes from images, up to and including recognition of individual objects

(low- and mid-level processes)

Example

automated analysis of text = acquiring an image containing text, preprocessing the image (enhancement, sharpening), extracting (segmenting) the individual characters, describing the characters in a form suitable for computer processing, recognizing the individual characters


The Origins of DIP

Newspaper industry: pictures were sent by submarine cable between London and New York.

Before the Bartlane cable picture transmission system (early 1920s) – 1 week

With the Bartlane system – less than 3 hours

Specialized printing equipment coded pictures for cable transmission and reconstructed them at the receiving end (1920s – 5 distinct levels of gray; 1929 – 15 levels).

This example is not DIP: the computer is not involved.

DIP is linked to, and develops at the same rhythm as, digital computers (data storage, display and transmission).


A digital picture produced in 1921 from a coded tape by a telegraph printer with special type faces (McFarlane)

A digital picture made in 1922 from a tape punched after the signals had crossed the Atlantic twice (McFarlane)


1964: Jet Propulsion Laboratory (Pasadena, California) processed pictures of the moon transmitted by Ranger 7 (corrected image distortions).

The first picture of the moon taken by a US spacecraft. Ranger 7 took this image on July 31, 1964, about 17 minutes before impacting the lunar surface. (Courtesy of NASA)


1960–1970 – image processing techniques were used in medical imaging, remote Earth resources observations, astronomy.

1970s – invention of CAT (computerized axial tomography)

CAT is a process in which a ring of detectors encircles an object (patient) and an X-ray source, concentric with the detector ring, rotates about the object. The X-rays pass through the patient and are collected at the opposite end by the detectors. As the source rotates, the procedure is repeated. Tomography consists of algorithms that use the sensed data to construct an image that represents a "slice" through the object. Motion of the object in a direction perpendicular to the ring of detectors produces a set of "slices", which can be assembled into a 3D representation of the inside of the object.


◊ geographers use DIP to study pollution patterns from aerial and satellite imagery

◊ archeology – DIP allowed the restoration of blurred pictures that were the only records of rare artifacts lost or damaged after being photographed

◊ physics – enhancing images of experiments (high-energy plasmas, electron microscopy)

◊ astronomy, biology, nuclear medicine, law enforcement, industry

DIP – used in solving problems dealing with machine perception – extracting from an image information suitable for computer processing (statistical moments, Fourier transform coefficients, …)

◊ automatic character recognition, industrial machine vision for product assembly and inspection, military reconnaissance, automatic processing of fingerprints, machine processing of aerial and satellite imagery for weather prediction, the Internet


Examples of Fields that Use DIP

Images can be classified according to their sources (visual, X-ray, …).

Energy sources for images: electromagnetic energy spectrum, acoustic, ultrasonic, electronic, computer-generated.


Electromagnetic waves can be thought of as propagating sinusoidal waves of different wavelengths, or as a stream of massless particles, each moving in a wavelike pattern with the speed of light. Each massless particle contains a certain amount (bundle) of energy, and each bundle of energy is called a photon. If spectral bands are grouped according to energy per photon, we obtain the EM spectrum, ranging from gamma rays (highest energy) to radio waves (lowest energy).


Gamma-Ray Imaging

Nuclear medicine, astronomical observations

Nuclear medicine:

the approach is to inject a patient with a radioactive isotope that emits gamma rays as it decays

Images are produced from the emissions collected by gamma-ray detectors.

Images of this sort are used to locate sites of bone pathology (infections, tumors).

PET (positron emission tomography) – the patient is given a radioactive isotope that emits positrons as it decays


Examples of gamma-ray imaging

Bone scan; PET image


X-ray imaging

Medical diagnostics, industry, astronomy

An X-ray tube is a vacuum tube with a cathode and an anode. The cathode is heated, causing free electrons to be released. The electrons flow at high speed to the positively charged anode. When the electrons strike a nucleus, energy is released in the form of X-ray radiation. The energy (penetrating power) of the X-rays is controlled by a voltage applied across the anode and by a current applied to the filament in the cathode.

The intensity of the X-rays is modified by absorption as they pass through the patient, and the resulting energy falling on the film develops it, much in the same way that light develops photographic film.


Angiography = contrast-enhancement radiography

Angiograms = images of blood vessels

A catheter is inserted into an artery or vein in the groin. The catheter is threaded into the blood vessel and guided to the area to be studied. When it reaches that area, an X-ray contrast medium is injected through the catheter. This enhances the contrast of the blood vessels and enables the radiologist to see any irregularities or blockages.

X-rays are used in CAT (computerized axial tomography).

X-rays are used in industrial processes (e.g., examining circuit boards for flaws in manufacturing).

Industrial CAT scans are useful when the parts can be penetrated by X-rays.


Examples of X-ray imaging

Chest X-ray; aortic angiogram

Head CT; Cygnus Loop; circuit boards


Imaging in the Ultraviolet Band

Lithography, industrial inspection, microscopy, biological imaging, astronomical observations

Ultraviolet light is used in fluorescence microscopy.

Ultraviolet light is not visible to the human eye, but when a photon of ultraviolet radiation collides with an electron in an atom of a fluorescent material, it elevates the electron to a higher energy level. Afterwards, the electron relaxes to a lower level and emits light in the form of a lower-energy photon, in the visible (red) light region.

Fluorescence = emission of light by a substance that has absorbed light or other electromagnetic radiation of a different wavelength.

Fluorescence microscope = uses an excitation light to irradiate a prepared specimen, and then separates the much weaker radiating fluorescent light from the brighter excitation light.


Imaging in the Visible and Infrared Bands

Light microscopy, astronomy, remote sensing, industry, law enforcement

LANDSAT satellite – obtained and transmitted images of the Earth from space, for the purpose of monitoring environmental conditions on the planet

Weather observation and prediction are major applications of multispectral imaging from satellites.


Table 1 – Thematic bands in NASA's LANDSAT satellite


Satellite images of the Washington, D.C. area in the spectral bands of Table 1


Examples of light microscopy

Taxol (anticancer agent), magnified 250X

Cholesterol (40X)

Microprocessor (60X)

Nickel oxide thin film (600X)

Surface of audio CD (1750X)

Organic superconductor (450X)


Automated visual inspection of manufactured goods

a – circuit board controller; b – packaged pills; c – bottles; d – air bubbles in a clear-plastic product; e – cereal; f – image of an intraocular implant


Imaging in the Microwave Band

The dominant application of imaging in the microwave band – radar

• radar has the ability to collect data over virtually any region at any time, regardless of weather or ambient light conditions

• some radar waves can penetrate clouds; under certain conditions they can also penetrate vegetation, ice, dry sand

• sometimes radar is the only way to explore inaccessible regions of the Earth's surface

An imaging radar works like a flash camera: it provides its own illumination (microwave pulses) to light an area on the ground and takes a snapshot image. Instead of a camera lens, a radar uses an antenna and a digital device to record the images. In a radar image, one can see only the microwave energy that was reflected back toward the radar antenna.


Spaceborne radar image of mountains in southeast Tibet


Imaging in the Radio Band

Medicine, astronomy

MRI = Magnetic Resonance Imaging

This technique places the patient in a powerful magnet and passes short pulses of radio waves through his or her body.

Each pulse causes a responding pulse of radio waves to be emitted by the patient's tissues.

The location from which these signals originate and their strength are determined by a computer, which produces a 2D picture of a section of the patient.


MRI images of a human knee (left) and spine (right)


Images of the Crab Pulsar covering the electromagnetic spectrum

Gamma, X-ray, Optical, Infrared, Radio


Other Imaging Modalities

Acoustic imaging, electron microscopy, synthetic (computer-generated) imaging

Imaging using sound: geological explorations, industry, medicine

Mineral and oil exploration:

For image acquisition over land, one of the main approaches is to use a large truck and a large flat steel plate. The plate is pressed on the ground by the truck, and the truck is vibrated through a frequency spectrum up to 100 Hz. The strength and the speed of the returning sound waves are determined by the composition of the Earth below the surface. These are analyzed by a computer, and images are generated from the resulting analysis.


Biometry - iris


Biometry - fingerprint


Face detection and recognition


Gender identification


Image morphing


Fundamental Steps in DIP

methods whose input and output are images

methods whose inputs are images but whose outputs are attributes extracted from those images


Outputs are images

• image acquisition

• image filtering and enhancement

• image restoration

• color image processing

• wavelets and multiresolution processing

• compression

• morphological processing


Outputs are attributes

• morphological processing

• segmentation

• representation and description

• object recognition


Image acquisition – may involve preprocessing, such as scaling

Image enhancement

• manipulating an image so that the result is more suitable than the original for a specific operation

• enhancement is problem oriented

• there is no general 'theory' of image enhancement

• enhancement uses subjective methods for image improvement

• enhancement is based on human subjective preferences regarding what is a "good" enhancement result


Image restoration

• improving the appearance of an image

• restoration is objective – the techniques for restoration are based on mathematical or probabilistic models of image degradation

Color image processing

• fundamental concepts in color models

• basic color processing in a digital domain

Wavelets and multiresolution processing

representing images in various degrees of resolution


Compression

reducing the storage required to save an image, or the bandwidth required to transmit it

Morphological processing

• tools for extracting image components that are useful in the representation and description of shape

• a transition from processes that output images to processes that output image attributes


Segmentation

• partitioning an image into its constituent parts or objects

• autonomous segmentation is one of the most difficult tasks of DIP

• the more accurate the segmentation, the more likely recognition is to succeed

Representation and description (almost always follows segmentation)

• segmentation produces either the boundary of a region or all the points in the region itself

• converting the data produced by segmentation to a form suitable for computer processing


• boundary representation: the focus is on external shape characteristics, such as corners or inflections

• complete region representation: the focus is on internal properties, such as texture or skeletal shape

• description is also called feature extraction – extracting attributes that result in some quantitative information of interest, or that are basic for differentiating one class of objects from another

Object recognition

the process of assigning a label (e.g., "vehicle") to an object based on its descriptors

Knowledge database


Simplified diagram of a cross section of the human eye


Three membranes enclose the eye: the cornea and sclera (outer cover), the choroid, and the retina.

The cornea is a tough, transparent tissue that covers the anterior surface of the eye.

Continuous with the cornea, the sclera is an opaque membrane that encloses the remainder of the optic globe.

The choroid lies directly below the sclera. This membrane contains a network of blood vessels (the major source of nutrition of the eye). The choroid is pigmented and helps reduce the amount of light entering the eye. The choroid is divided (at its anterior extreme) into the ciliary body and the iris. The iris contracts and expands to control the amount of light.


The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to the ciliary body (60-70% water, 6% fat, protein). The lens is colored slightly yellow. The lens absorbs approximately 8% of the visible light (infrared and ultraviolet light are absorbed by proteins in the lens).

The innermost membrane is the retina. When the eye is properly focused, light from an object outside the eye is imaged on the retina. Vision is possible because of the distribution of discrete light receptors on the surface of the retina: cones and rods (6-7 million cones, 75-150 million rods).

Cones: located in the central part of the retina (the fovea); sensitive to colors; vision of detail; each cone is linked to its own nerve; cone vision = photopic or bright-light vision.

Fovea = the place where the image of the object of interest falls.


Rods: distributed over all the retina surface; several rods are connected to a single nerve; not specialized in detail vision; serve to give a general, overall picture of the field of view; not involved in color vision; sensitive to low levels of illumination.

Blind spot: region without receptors.


Image formation in the eye

Ordinary photographic camera: the lens has a fixed focal length; focusing at various distances is done by modifying the distance between the lens and the image plane (where the film or imaging chip is located).

Human eye: the distance between the lens and the retina (the imaging region) is fixed; the focal length needed to achieve proper focus is obtained by varying the shape of the lens (the fibers in the ciliary body accomplish this, flattening or thickening the lens for distant or near objects, respectively).

distance between lens and retina along the visual axis = 17 mm

range of focal lengths = 14 mm to 17 mm


Illustration of the Mach band effect. Perceived intensity is not a simple function of actual intensity.


All the inner squares have the same intensity, but they appear progressively darker as the background becomes lighter.


Optical illusions


ALBASTRU (BLUE), VERDE (GREEN), GALBEN (YELLOW), ROSU (RED), PORTOCALIU (ORANGE), GALBEN (YELLOW), VISINIU (BURGUNDY), ALB (WHITE)


Light

• achromatic or monochromatic light – light that is void of color; the attribute of such light is its intensity, or amount. The term gray level is used to describe monochromatic intensity, because it ranges from black, to grays, and finally to white.

• chromatic light spans the electromagnetic energy spectrum from approximately 0.43 to 0.79 μm.

Quantities that describe the quality of a chromatic light source:

radiance – the total amount of energy that flows from the light source; it is usually measured in watts (W)

luminance – measured in lumens (lm); gives a measure of the amount of energy an observer perceives from a light source


For example, light emitted from a source operating in the far infrared region of the spectrum could have significant energy (radiance), but an observer would hardly perceive it; its luminance would be almost zero.

brightness – a subjective descriptor of light perception that is practically impossible to measure. It embodies the achromatic notion of intensity and is one of the key factors in describing color sensation.


f : D → ℝ, f(x, y) – the physical meaning of f is determined by the source of the image

Image generated from a physical process: f(x, y) is proportional to the energy radiated by the physical source

f(x, y) is characterized by two components:

i(x, y) = the illumination component: the amount of source illumination incident on the scene being viewed

r(x, y) = the reflectance component: the amount of illumination reflected by the objects in the scene

f(x, y) = i(x, y) · r(x, y),  0 < i(x, y) < ∞,  0 < r(x, y) ≤ 1


r(x, y) = 0 – total absorption; r(x, y) = 1 – total reflectance

i(x, y) – determined by the illumination source

r(x, y) – determined by the characteristics of the imaged objects

In practice:

L_min ≤ l = f(x, y) ≤ L_max,  with L_min = i_min · r_min and L_max = i_max · r_max

(typical indoor values, without additional illumination: L_min ≈ 10, L_max ≈ 1000)

The interval [L_min, L_max] is called the gray (or intensity) scale. Common practice is to shift this interval to [0, L-1]: l = 0 is black, l = L-1 is white, and all intermediate values are shades of gray.


Image Sampling and Quantization

- the output of most sensors is a continuous voltage waveform related to the sensed scene

Converting a continuous image f to digital form:

- digitizing the coordinates (x, y) is called sampling

- digitizing the amplitudes f(x, y) is called quantization


Continuous image projected onto a sensor array; result of image sampling and quantization


Representing Digital Images: f(x, y), x = 0, 1, …, M-1, y = 0, 1, …, N-1 – spatial variables or spatial coordinates

As an M × N array:

f(x, y) = [ f(0,0)    f(0,1)    …  f(0,N-1)
            f(1,0)    f(1,1)    …  f(1,N-1)
            …
            f(M-1,0)  f(M-1,1)  …  f(M-1,N-1) ]

Each element of the array is an image element, or pixel. Equivalently, in matrix notation:

A = [a_ij],  a_ij = f(x_i, y_j) = f(i, j),  0 ≤ i ≤ M-1, 0 ≤ j ≤ N-1

f(0,0) – the upper left corner of the image


M, N – positive integers; the number of intensity levels is typically a power of 2, L = 2^k, with a_ij ∈ [0, L-1].

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system. Upper limit – determined by saturation; lower limit – noise.


Number of bits required to store a digitized image:

b = M × N × k  (for M = N: b = N² · k)

When an image can have 2^k intensity levels, the image is referred to as a k-bit image; an image with 256 discrete intensity values is an 8-bit image.
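As a quick sanity check of this formula, a short Python sketch (the image dimensions below are made up for illustration):

```python
# Storage for a digitized image: b = M * N * k bits.
M, N, k = 1024, 1024, 8      # assumed example: 1024 x 1024 image, 8 bits/pixel
b = M * N * k                # total number of bits
print(b, "bits =", b // 8, "bytes")   # 8388608 bits = 1048576 bytes
```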


Spatial and Intensity Resolution

Spatial resolution – the smallest discernible detail in an image

Measures: line pairs per unit distance; dots (pixels) per unit distance

Image resolution = the largest number of discernible line pairs per unit distance

(e.g., 100 line pairs per mm)

Dots per unit distance is the measure commonly used in printing and publishing.

In the US, this measure is expressed in dots per inch (dpi)

(newspapers are printed with 75 dpi, glossy brochures at 175 dpi)

Intensity resolution – the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations:

L = 2^k – most common: k = 8

Intensity resolution in practice is given by k (the number of bits used to quantize intensity).


Fig. 1 Reducing spatial resolution: 1250 dpi (upper left), 300 dpi (upper right), 150 dpi (lower left), 72 dpi (lower right)


Reducing the number of gray levels: 256, 128, 64, 32


Reducing the number of gray levels: 16, 8, 4, 2


Image Interpolation – used in zooming, shrinking, rotating and geometric corrections

Shrinking, zooming – image resizing – image resampling methods

Interpolation is the process of using known data to estimate values at unknown locations.

Suppose we have an image of size 500 × 500 pixels that has to be enlarged 1.5 times, to 750 × 750 pixels. One way to do this is to create an imaginary 750 × 750 grid with the same spacing as the original, and then shrink it so that it fits exactly over the original image. The pixel spacing in the 750 × 750 grid will be less than in the original image.

Problem: assignment of intensity levels in the new 750 × 750 grid.

Nearest neighbor interpolation: assign to every point in the new grid (750 × 750) the intensity of the closest pixel (nearest neighbor) from the old/original grid (500 × 500). This technique has the tendency to produce undesirable effects, like severe distortion of straight edges.


Bilinear interpolation – assign to the new (x, y) location the intensity

v(x, y) = a·x + b·y + c·x·y + d

where the four coefficients are determined from the 4 equations in 4 unknowns that can be written using the 4 nearest neighbors of point (x, y).

Bilinear interpolation gives much better results than nearest neighbor interpolation, with a modest increase in computational effort.

Bicubic interpolation – assign to the new (x, y) location an intensity that involves the 16 nearest neighbors of the point:

v(x, y) = Σ_{i=0..3} Σ_{j=0..3} c_ij · x^i · y^j

The coefficients c_ij are obtained by solving a 16×16 linear system written using the intensity levels of the 16 nearest neighbors of (x, y).
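Below is a minimal NumPy sketch of nearest-neighbor and bilinear resizing under the conventions above; the function names are ours, and real applications would normally use a library routine (e.g., scipy.ndimage.zoom or OpenCV's resize):

```python
import numpy as np

def resize_nearest(img, new_h, new_w):
    """Nearest-neighbor interpolation: copy the closest source pixel."""
    h, w = img.shape
    rows = (np.arange(new_h) * h / new_h).astype(int)  # map new grid onto old grid
    cols = (np.arange(new_w) * w / new_w).astype(int)
    return img[rows[:, None], cols]

def resize_bilinear(img, new_h, new_w):
    """Bilinear interpolation: v(x, y) = a*x + b*y + c*x*y + d,
    determined by the 4 nearest neighbors of each mapped point."""
    h, w = img.shape
    y = np.arange(new_h) * (h - 1) / (new_h - 1)       # fractional source rows
    x = np.arange(new_w) * (w - 1) / (new_w - 1)       # fractional source cols
    y0, x0 = np.floor(y).astype(int), np.floor(x).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    dy, dx = (y - y0)[:, None], (x - x0)[None, :]
    top = img[y0][:, x0] * (1 - dx) + img[y0][:, x1] * dx
    bot = img[y1][:, x0] * (1 - dx) + img[y1][:, x1] * dx
    return top * (1 - dy) + bot * dy

img = np.arange(25, dtype=float).reshape(5, 5)
print(resize_nearest(img, 8, 8).shape, resize_bilinear(img, 8, 8).shape)
```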


Generally, bicubic interpolation does a better job of preserving fine detail than the bilinear technique. Bicubic interpolation is the standard used in commercial image editing programs, such as Adobe Photoshop and Corel Photopaint.

Figure 2(a) is the same as Fig. 1(d), which was obtained by reducing the resolution of the 1250 dpi image in Fig. 1(a) to 72 dpi (the size shrank from 3692 × 2812 to 213 × 162) and then zooming the reduced image back to its original size. To generate Fig. 1(d), nearest neighbor interpolation was used (both for shrinking and zooming).

Figures 2(b) and (c) were generated using the same steps, but using bilinear and bicubic interpolation, respectively. Figures 2(d), (e) and (f) were obtained by reducing the resolution from 1250 dpi to 150 dpi (instead of 72 dpi).


Fig. 2 – Interpolation examples for zooming and shrinking (nearest neighbor, bilinear, bicubic)


Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors:

(x+1, y), (x-1, y), (x, y+1), (x, y-1)

This set of pixels is called the 4-neighbors of p, denoted by N4(p).

The 4 diagonal neighbors of p have coordinates

(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)

and are denoted ND(p).

The horizontal, vertical and diagonal neighbors together are called the 8-neighbors of p, denoted N8(p).

If (x, y) is on the border of the image, some of the neighbor locations in ND(p) and N8(p) fall outside the image.


Adjacency, Connectivity, Regions, Boundaries

Denote by V the set of intensity levels used to define adjacency:

- in a binary image, V ⊆ {0, 1} (V = {0} or V = {1})

- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, …, 255}

We consider 3 types of adjacency:

(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if q ∈ N4(p)

(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if q ∈ N8(p)

(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if

q ∈ N4(p), or

q ∈ ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency. It is introduced to eliminate the ambiguities that often arise when 8-adjacency is used. Consider the example:


Binary image, V = {1}:

0 1 1
0 1 0
0 0 1

The three pixels at the top (first line) in the above example show multiple (ambiguous) 8-adjacency, as indicated by the dashed lines. This ambiguity is removed by using m-adjacency.


A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

(x_0, y_0), (x_1, y_1), …, (x_n, y_n)

where (x_0, y_0) = (x, y), (x_n, y_n) = (s, t), and the pixels (x_i, y_i) and (x_{i-1}, y_{i-1}) are adjacent for i = 1, 2, …, n.

The length of the path is n. If (x_0, y_0) = (x_n, y_n), the path is closed.

Depending on the type of adjacency considered, the paths are 4-, 8- or m-paths.

Let S denote a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting only of pixels from S.

S is a connected set if there is a path in S between any 2 pixels in S.

Let R be a subset of pixels in an image. R is a region of the image if R is a connected set.

Two regions R1 and R2 are said to be adjacent if R1 ∪ R2 forms a connected set. Regions that are not adjacent are said to be disjoint. When referring to regions, only 4- and 8-adjacency are considered.


Suppose that an image contains K disjoint regions R_k, k = 1, …, K, none of which touches the image border.

Let R_u denote the union of the K regions, R_u = ∪_{k=1..K} R_k, and let (R_u)^c denote its complement.

We call all the points in R_u the foreground of the image, and the points in (R_u)^c the background of the image.

The boundary (border or contour) of a region R is the set of points that are adjacent to points in the complement of R, (R)^c. In other words, the border of a region is the set of pixels in the region that have at least one background neighbor. This definition is referred to as the inner border, to distinguish it from the notion of outer border, which is the corresponding border in the background.


Distance measures

For pixels p, q and z, with coordinates (x, y), (s, t) and (v, w) respectively, D is a distance function, or metric, if:

(a) D(p, q) ≥ 0 (D(p, q) = 0 iff p = q)

(b) D(p, q) = D(q, p)

(c) D(p, z) ≤ D(p, q) + D(q, z)

The Euclidean distance between p and q is defined as

D_e(p, q) = [(x − s)² + (y − t)²]^(1/2)

The pixels q for which D_e(p, q) ≤ r are the points contained in a disk of radius r centered at (x, y).


The D4 distance (also called city-block distance) between p and q is defined as

D4(p, q) = |x − s| + |y − t|

The pixels q for which D4(p, q) ≤ r form a diamond centered at (x, y). For example, the pixels with D4 ≤ 2:

        2
      2 1 2
    2 1 0 1 2
      2 1 2
        2

The pixels with D4 = 1 are the 4-neighbors of (x, y).

The D8 distance (called the chessboard distance) between p and q is defined as

D8(p, q) = max(|x − s|, |y − t|)

The pixels q for which D8(p, q) ≤ r form a square centered at (x, y).


For example, the pixels with D8 ≤ 2:

    2 2 2 2 2
    2 1 1 1 2
    2 1 0 1 2
    2 1 1 1 2
    2 2 2 2 2

The pixels with D8 = 1 are the 8-neighbors of (x, y).

The D4 and D8 distances are independent of any paths that might exist between p and q, because these distances involve only the coordinates of the points.
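The three metrics in a short, illustrative Python sketch:

```python
import numpy as np

def d_e(p, q):
    """Euclidean distance."""
    return np.hypot(p[0] - q[0], p[1] - q[1])

def d_4(p, q):
    """City-block distance: |x - s| + |y - t|."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d_8(p, q):
    """Chessboard distance: max(|x - s|, |y - t|)."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (2, 1)
print(d_e(p, q), d_4(p, q), d_8(p, q))   # 2.236..., 3, 2
```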


Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

Consider two 2 × 2 images A = [a_ij] and B = [b_ij].

Array product:

A .* B = [ a11·b11   a12·b12
           a21·b21   a22·b22 ]

Matrix product:

A · B = [ a11·b11 + a12·b21   a11·b12 + a12·b22
          a21·b11 + a22·b21   a21·b12 + a22·b22 ]

We assume array operations unless stated otherwise
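In NumPy the same distinction appears as two different operators, which makes the convention easy to demonstrate:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A * B)   # array (element-wise) product: [[ 5 12] [21 32]]
print(A @ B)   # matrix product:               [[19 22] [43 50]]
```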


Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether the method is linear or nonlinear. Consider an operator H with

H[f(x, y)] = g(x, y)

H is said to be a linear operator if

H[a·f1(x, y) + b·f2(x, y)] = a·H[f1(x, y)] + b·H[f2(x, y)]

for all scalars a, b and all images f1, f2.

Example of a nonlinear operator: the maximum value of the pixels of an image, H[f] = max f(x, y). Take

f1 = [ 0  2      f2 = [ 6  5      a = 1,  b = −1
       2  3 ]           4  7 ]

Then

max(a·f1 + b·f2) = max [ −6  −3 ; −2  −4 ] = −2

while

a·max(f1) + b·max(f2) = 1·3 + (−1)·7 = −4 ≠ −2

so the max operator is nonlinear.
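The same check reproduced in NumPy:

```python
import numpy as np

f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 5], [4, 7]])
a, b = 1, -1

lhs = np.max(a * f1 + b * f2)           # H(a*f1 + b*f2) = -2
rhs = a * np.max(f1) + b * np.max(f2)   # a*H(f1) + b*H(f2) = 3 - 7 = -4
print(lhs, rhs, lhs == rhs)             # -2 -4 False -> max is not linear
```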

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise:

g(x, y) = f(x, y) + η(x, y)

f(x, y) – the noiseless image; η(x, y) – the noise, uncorrelated and with zero average value.

For a random variable z with mean m, E[(z − m)²] is the variance (E(·) is the expected value). The covariance of two random variables z1 and z2 is defined as E[(z1 − m1)(z2 − m2)]. The two random variables are uncorrelated when their covariance is 0.


Objective: reduce noise by averaging a set of noisy images g_i(x, y) (a technique frequently used in image enhancement):

ḡ(x, y) = (1/K) · Σ_{i=1..K} g_i(x, y)

If the noise satisfies the properties stated above, we have

E[ḡ(x, y)] = f(x, y)  and  σ²_ḡ(x,y) = (1/K) · σ²_η(x,y)

where E[ḡ(x, y)] is the expected value of ḡ, and σ²_ḡ(x,y) and σ²_η(x,y) are the variances of ḡ and η, respectively. The standard deviation (square root of the variance) at any point in the average image is

σ_ḡ(x,y) = (1/√K) · σ_η(x,y)


As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because E[ḡ(x, y)] = f(x, y), this means that ḡ(x, y) approaches f(x, y) as the number of noisy images used in the averaging process increases.
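A small simulation of this effect; the constant scene and the noise level σ = 64 are made-up stand-ins, chosen to match the astronomy example that follows:

```python
import numpy as np
rng = np.random.default_rng(0)

f = np.full((64, 64), 100.0)                       # noiseless "scene"
K = 100
noisy = f + rng.normal(0, 64, size=(K, 64, 64))    # K noisy observations g_i

for k in (1, 5, 20, 100):
    g_bar = noisy[:k].mean(axis=0)                 # average of k noisy images
    print(k, round(g_bar.std(), 1))                # std shrinks roughly as 64/sqrt(k)
```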

An important application of image averaging is in the field of astronomy, where imaging under very low light levels frequently causes sensor noise to render single images virtually useless for analysis. Figure 3 (upper left) shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels. The remaining panels of Figure 3 show the results of averaging 5, 10, 20, 50 and 100 such images, respectively.


Fig. 3 Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (upper left), and the results of averaging 5, 10, 20, 50 and 100 noisy images.


A frequent application of image subtraction is in the enhancement of differences between images.

Fig. 4 (a) Infrared image of the Washington, D.C. area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images.

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between images (a) and (b). Black (0) values in Figure 4(c) indicate locations where there is no difference between images (a) and (b).
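A sketch of the same experiment on a small random test image (illustrative only):

```python
import numpy as np
rng = np.random.default_rng(1)

f = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)
g = f & 0xFE                  # set the least-significant bit of every pixel to 0
diff = f.astype(int) - g      # difference image: only 0s and 1s remain
print(diff)
```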


Mask mode radiography: g(x, y) = f(x, y) − h(x, y)

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images, called live images (denoted f(x, y)), of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium.

In g(x, y) we can find the differences between h and f, as enhanced detail.

With images being captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries in the area being observed.


Fig. 5 – Angiography – subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c), enhanced.


An important application of image multiplication (and division) is shading correction.

Suppose that an imaging sensor produces images in the form

g(x, y) = f(x, y) · h(x, y)

f(x, y) – the "perfect image"; h(x, y) – the shading function.

When the shading function is known:

f(x, y) = g(x, y) / h(x, y)

If h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.
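A minimal sketch of correction by division, assuming the shading function is known (the linear shading pattern below is invented for illustration):

```python
import numpy as np

h_, w_ = 64, 64
f = np.full((h_, w_), 128.0)                   # hypothetical "perfect" image
shade = 0.4 + 0.6 * np.arange(w_) / (w_ - 1)   # hypothetical shading, brighter to the right
g = f * shade                                  # what the sensor delivers: g = f * h
f_hat = g / shade                              # shading correction: f = g / h
print(np.allclose(f_hat, f))                   # True
```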


Fig. 6 Shading correction: (a) shaded image of a tungsten filament, magnified 130 times; (b) shading pattern; (c) corrected image.


Another use of image multiplication is in masking, also called region of interest (ROI) operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of the ROI can be arbitrary, although usually it is rectangular.

Fig. 7 (a) digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b).


In practice, most images are displayed using 8 bits, so the image values are in the range [0, 255]; for TIFF and JPEG images, conversion to this range is automatic. The conversion depends on the system used. The difference of two images can produce an image with values in the range [-255, 255]; the addition of two images, values in the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure is to compute

f_m = f − min(f)

f_s = K · f_m / max(f_m),  0 ≤ f_s ≤ K  (K = 255 for 8-bit images)
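The procedure as a short NumPy function (K = 255 assumed):

```python
import numpy as np

def rescale(f, K=255):
    """Shift and scale an arbitrary-range image into [0, K]."""
    fm = f - f.min()             # f_m = f - min(f)
    return K * fm / fm.max()     # f_s = K * f_m / max(f_m)

d = np.array([[-255.0, 0.0], [100.0, 255.0]])   # e.g. values from a difference image
print(rescale(d))                               # values now span [0, 255]
```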


Spatial Operations

- performed directly on the pixels of a given image

There are three categories of spatial operations:

• single-pixel operations

• neighborhood operations

• geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels: s = T(z)

where z is the intensity of a pixel in the original image and s is the intensity of the corresponding pixel in the processed image.


Neighborhood operations

Let S_xy denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y), based on the values of the intensities of the points in S_xy. For example, if S_xy is a rectangular neighborhood of size m × n centered at (x, y), we can assign the new value of intensity by computing the average value of the pixels in S_xy:

g(x, y) = (1 / (m·n)) · Σ_{(r,c) ∈ S_xy} f(r, c)

The net effect is to perform local blurring in the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
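A straightforward (unoptimized) sketch of the m × n neighborhood average with zero-padded borders; in practice a library convolution routine would be used instead:

```python
import numpy as np

def local_mean(f, m, n):
    """Average each pixel over an m x n neighborhood (zero padding at borders)."""
    out = np.zeros(f.shape, dtype=float)
    pad = np.pad(f.astype(float), ((m // 2,), (n // 2,)))
    for i in range(m):                 # sum the m*n shifted copies of the image
        for j in range(n):
            out += pad[i:i + f.shape[0], j:j + f.shape[1]]
    return out / (m * n)

img = np.zeros((7, 7)); img[3, 3] = 49.0
print(local_mean(img, 3, 3)[2:5, 2:5])   # the impulse is blurred into a 3x3 plateau
```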


Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules)

A geometric transformation consists of 2 basic operations:

1. a spatial transformation of coordinates

2. an intensity interpolation that assigns intensity values to the spatially transformed pixels

The coordinate system transformation: (x, y) = T[(v, w)]

(v, w) – pixel coordinates in the original image

(x, y) – pixel coordinates in the transformed image


T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform:

[x  y  1] = [v  w  1] · T = [v  w  1] · [ t11  t12  0
                                          t21  t22  0
                                          t31  t32  1 ]

i.e. x = t11·v + t21·w + t31 and y = t12·v + t22·w + t32.  (AT)

This transform can scale, rotate, translate or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3 × 3 matrix equal to the matrix product of the scaling, rotation and translation matrices from Table 1.


Affine transformations


The preceding transformations relocate pixels on an image to new locations. To complete the process, we have to assign intensity values to those locations. This task is done by using intensity interpolation (nearest neighbor, bilinear or bicubic interpolation).

In practice, we can use equation (AT) in two basic ways.

Forward mapping: scan the pixels of the input image (v, w) and compute the new spatial location (x, y) of the corresponding pixel in the new image, using (AT) directly.

Problems:

- intensity assignment when 2 or more pixels in the original image are transformed to the same location in the output image

- some output locations may have no correspondent in the original image (no intensity assignment)


Inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image:

(v, w) = T⁻¹[(x, y)]

Then interpolate among the nearest input pixels to determine the intensity of the output pixel value.

Inverse mappings are more efficient to implement than forward mappings, and are used in numerous commercial implementations of spatial transformations (MATLAB, for example).
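A compact sketch of inverse mapping for an affine matrix T in the convention of equation (AT), using nearest-neighbor interpolation; the coordinate handling (rows as the first index) is our assumption:

```python
import numpy as np

def warp_affine(img, T):
    """For every output pixel (x, y), find (v, w) = [x y 1] @ inv(T) in the
    input image and copy the nearest input pixel (inverse mapping)."""
    h, w = img.shape
    Tinv = np.linalg.inv(T)
    xs, ys = np.mgrid[0:h, 0:w]                       # output coordinates
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)], axis=1)
    src = np.rint(pts @ Tinv).astype(int)             # nearest source coordinates
    sv, sw = src[:, 0], src[:, 1]
    ok = (sv >= 0) & (sv < h) & (sw >= 0) & (sw < w)  # discard out-of-image sources
    out = np.zeros_like(img).ravel()
    out[ok] = img[sv[ok], sw[ok]]
    return out.reshape(h, w)

T = np.diag([2.0, 2.0, 1.0])            # scaling by 2 in both directions
img = np.arange(16.0).reshape(4, 4)
print(warp_affine(img, T))
```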


Image registration – align two or more images of the same scene.

In image registration, we have available the input and output images, but the specific transformation that produced the output image from the input is generally unknown. The problem is to estimate the transformation function and then use it to register the two images.

- it may be of interest to align (register) two or more images taken at approximately the same time, but using different imaging systems (an MRI scanner and a PET scanner)

- or to align images of a given location taken by the same instrument at different moments of time (satellite images)

The problem can be solved using tie points (also called control points), which are corresponding points whose locations are known precisely in the input and reference images.


How to select tie points?

- by interactively selecting them

- by using algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors; these objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points, both in the input image and in the reference image. A simple model based on a bilinear approximation is given by

x = c1·v + c2·w + c3·v·w + c4
y = c5·v + c6·w + c7·v·w + c8

where (v, w) and (x, y) are the coordinates of the tie points (we get an 8 × 8 linear system for the c_i).
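A sketch of estimating the eight coefficients; the tie-point coordinates below are invented for illustration. The 8 × 8 system decouples into two 4 × 4 systems, one for x and one for y:

```python
import numpy as np

# 4 tie points: (v, w) in the input image, (x, y) in the reference image
vw = np.array([[0, 0], [0, 100], [100, 0], [100, 100]], dtype=float)
xy = np.array([[3, 2], [5, 104], [107, 1], [112, 108]], dtype=float)

v, w = vw[:, 0], vw[:, 1]
A = np.column_stack([v, w, v * w, np.ones(4)])   # rows: [v, w, v*w, 1]

c1_4 = np.linalg.solve(A, xy[:, 0])   # x = c1*v + c2*w + c3*v*w + c4
c5_8 = np.linalg.solve(A, xy[:, 1])   # y = c5*v + c6*w + c7*v*w + c8
print(c1_4, c5_8)
```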


When 4 tie points are insufficient to obtain satisfactory registration, an approach used frequently is to select a larger number of tie points and, using this new set, subdivide the image into rectangular regions marked by groups of 4 tie points. On each subregion marked by 4 tie points, the transformation model described above is applied.

The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometrical distortion.


(a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c)


Probabilistic Methods

z_i = the values of all possible intensities in an M × N digital image, i = 0, 1, …, L-1

p(z_k) = the probability that the intensity level z_k occurs in the given image:

p(z_k) = n_k / (M·N)

n_k = the number of times that intensity z_k occurs in the image (M·N is the total number of pixels in the image), so

Σ_{k=0..L-1} p(z_k) = 1

The mean (average) intensity of an image is given by

m = Σ_{k=0..L-1} z_k · p(z_k)


The variance of the intensities is

σ² = Σ_{k=0..L-1} (z_k − m)² · p(z_k)

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

μ_n(z) = Σ_{k=0..L-1} (z_k − m)^n · p(z_k)

(μ_0(z) = 1, μ_1(z) = 0, μ_2(z) = σ²)

μ_3(z) > 0: the intensities are biased to values higher than the mean

μ_3(z) < 0: the intensities are biased to values lower than the mean


μ_3(z) = 0: the intensities are distributed approximately equally on both sides of the mean

Fig. 1 (a) Low contrast, (b) medium contrast, (c) high contrast

Figure 1(a) – standard deviation 14.3 (variance = 204.5)

Figure 1(b) – standard deviation 31.6 (variance = 998.6)

Figure 1(c) – standard deviation 49.2 (variance = 2420.6)
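The histogram-based statistics in a short sketch:

```python
import numpy as np

def histogram_stats(img, L=256):
    """Mean, variance and third central moment computed from p(z_k)."""
    z = np.arange(L)
    nk = np.bincount(img.ravel(), minlength=L)
    p = nk / img.size                     # p(z_k) = n_k / (M*N); sums to 1
    m = np.sum(z * p)                     # mean intensity
    var = np.sum((z - m) ** 2 * p)        # variance (contrast measure)
    mu3 = np.sum((z - m) ** 3 * p)        # bias of intensities about the mean
    return m, var, mu3

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(100, 100))
print(histogram_stats(img))
```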


Intensity Transformations and Spatial Filtering

g(x, y) = T[f(x, y)]

f(x, y) – input image; g(x, y) – output image; T – an operator on f, defined over a neighborhood of (x, y)

- the neighborhood of the point (x, y), S_xy, usually is rectangular, centered on (x, y), and much smaller in size than the image


- spatial filtering: the operator T (the neighborhood and the operation applied on it) is called a spatial filter (spatial mask, kernel, template or window)

When S_xy = {(x, y)} (a 1 × 1 neighborhood), T becomes an intensity (gray-level or mapping) transformation function

s = T(r)

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2, left: T produces an output image of higher contrast than the original, by darkening the intensity levels below k and brightening the levels above k – this technique is called contrast stretching.

Fig. 2 Intensity transformation functions: left – contrast stretching; right – thresholding function


Figure 2, right: T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L-1] is obtained using the function

s = T(r) = L − 1 − r

- the equivalent of a photographic negative

- a technique suited for enhancing white or gray detail embedded in dark regions of an image


Original image; negative image


Log Transformations: s = T(r) = c · log(1 + r), with c a constant and r ≥ 0

Some basic intensity transformation functions


This transformation maps a narrow range of low intensity values in the input into a wider range of output levels. An operator of this type is used to expand the values of dark pixels in an image, while compressing the higher-level values. The opposite is true of the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a) – intensity values in the range 0 to 1.5 × 10⁶

Figure 4(b) – log transformation of Figure 4(a), with c = 1 – range 0 to 6.2

Fig. 4 (a) Fourier spectrum; (b) log transformation applied to (a), c = 1


Power-Law (Gamma) Transformations

s = T(r) = c · r^γ  (c, γ – positive constants; sometimes written s = c · (r + ε)^γ)

Plots of the gamma transformation for different values of γ (c = 1)


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. The curves with γ > 1 have the opposite effect of those generated with γ < 1.

c = γ = 1: the identity transformation.

A variety of devices used for image capture, printing and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
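A minimal gamma-transform sketch (intensities are normalized to [0, 1] before applying the power law, then rescaled; c = 1 assumed):

```python
import numpy as np

def gamma_transform(img, gamma, c=1.0):
    """s = c * r**gamma on intensities normalized to [0, 1]."""
    r = img / 255.0
    return np.clip(255.0 * c * r ** gamma, 0, 255).astype(np.uint8)

img = np.arange(256, dtype=np.uint8).reshape(16, 16)
brightened = gamma_transform(img, gamma=0.4)   # gamma < 1 expands dark values
darkened = gamma_transform(img, gamma=3.0)     # gamma > 1 compresses dark values
print(img[0, :4], brightened[0, :4], darkened[0, :4])
```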


(a) aerial image; (b)-(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0 and 5.0, respectively


Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image, so that it spans the full intensity range of the recording tool or display device

Fig. 5 (a) piecewise-linear transformation function; (b) low-contrast image; (c) result of contrast stretching; (d) result of thresholding


s = T(r) =
  (s1/r1) · r,                                  r ∈ [0, r1]
  s1 + ((s2 − s1)/(r2 − r1)) · (r − r1),        r ∈ [r1, r2]
  s2 + ((L − 1 − s2)/(L − 1 − r2)) · (r − r2),  r ∈ [r2, L − 1]


r1 = s1 and r2 = s2: identity transformation (no change)

r1 = r2, s1 = 0, s2 = L − 1: thresholding function

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c) – contrast stretching, obtained by setting r1 = r_min, s1 = 0, r2 = r_max, s2 = L − 1, where r_min and r_max denote the minimum and maximum gray levels in the image, respectively. Thus, the transformation function stretched the levels linearly from their original range to the full range [0, L − 1].

Figure 5(d) – the thresholding function was used, with r1 = r2 = m, s1 = 0, s2 = L − 1, where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.
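The three-segment stretch as a sketch; the parameter choices mirror the full linear stretch of Figure 5(c), with the endpoints nudged by 1 to avoid division by zero in the outer segments:

```python
import numpy as np

def contrast_stretch(img, r1, s1, r2, s2, L=256):
    """Piecewise-linear transformation through (r1, s1) and (r2, s2)."""
    r = img.astype(float)
    out = np.where(
        r < r1, s1 / r1 * r,
        np.where(r <= r2,
                 s1 + (s2 - s1) / (r2 - r1) * (r - r1),
                 s2 + (L - 1 - s2) / (L - 1 - r2) * (r - r2)))
    return out.astype(np.uint8)

img = np.arange(256, dtype=np.uint8).reshape(16, 16)
stretched = contrast_stretch(img, r1=int(img.min()) + 1, s1=0,
                             r2=int(img.max()) - 1, s2=255)
```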


Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches to intensity-level slicing:

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities (Figure 3.11(a))

2. brighten (or darken) the desired range of intensities, but leave unchanged all other intensities in the image (Figure 3.11(b))


Figure 6 (left) – aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique, for a band near the top of the scale of intensities. This type of enhancement produces a binary image,

(The two transformation functions: one highlights the intensity range [A, B] and reduces all other intensities to a lower level; the other highlights the range [A, B] and preserves all other intensities.)


which is useful for studying the shape of the flow of the contrast substance (to detect blockages, etc.).

In Figure 6 (right), the second technique was used: a band of intensities in the mid-gray range around the mean intensity was set to black; the other intensities remain unchanged.

Fig. 6 – Aortic angiogram and intensity-sliced versions


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2.

This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit, plane 8 – the highest-order bit).


The binary image for the 8th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all the intensities between 0 and 127 to 0, and maps all levels between 128 and 255 to 1.

The bit-slicing technique is useful for analyzing the relative importance of each bit in the image – it helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
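A short sketch extracting the eight bit planes and confirming the thresholding view of the highest-order plane:

```python
import numpy as np

def bit_planes(img):
    """Return the eight 1-bit planes of an 8-bit image (index 0 = lowest-order bit)."""
    return [(img >> k) & 1 for k in range(8)]

img = np.arange(256, dtype=np.uint8).reshape(16, 16)
planes = bit_planes(img)
# The highest-order plane is exactly the threshold map 0..127 -> 0, 128..255 -> 1:
print(np.array_equal(planes[7], (img >= 128).astype(np.uint8)))   # True
```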

  • DIP 1 2017
  • DIP 02 (2017)
Page 2: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Evaluation

1(base) + 3 (course test) + 3 (lab activity)+ 3 (article presentation)

gt 45

Weeks 1 ndash 7 ndash coursesWeek 8 ndash (course) testWeeks 9 ndash 12 ndash labWeeks 13 ndash 14 ndash article lab evaluationWeek 15 or 16 ndash article presentation

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Bibliography

bull RC Gonzales RE Woods Digital Image Processing Prentice Hall2008 3rd ed

bull RC Gonzales RE Woods SL Eddins Digital Image Processing Using MATLAB Prentice Hall 2003

bull httpwwwimageprocessingplacecom

bullM Petrou C Petrou Image Processing the fundamentals John Wiley 2010 2nd ed

bull W Burger MJ Burge Digital Image Processing An Algorithmic Introduction Using Java Springer 2008

Digital Image ProcessingDigital Image Processing

Week 1Week 1

bull Image Processing Toolbox (httpwwwmathworkscomproductsimage)

bull C Solomon T Breckon Fundamentals of Digital Image Processing A Practical Approach with Examples in Matlab Wiley-Blackwell 2011

bullWK Pratt Digital Image Processing Wiley-Interscience 2007

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Meet LenaThe First Lady of the Internet

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Lenna Soderberg (Sjoumloumlblom) and Jeff Seideman taken in May 1997Imaging Science amp Technology Conference

Digital Image ProcessingDigital Image Processing

Week 1Week 1

What is Digital Image Processing

f(xy) = intensity gray level of the image at spatial point (xy)

x y f(xy) ndash finite discrete quantities -gt digital image

Digital Image Processing = processing digital images by means of a digital computer

A digital image is composed of a finite number of elements (location value of intensity)

These elements are called picture elements image elements pels pixels

( )i j ijx y f

3 f D

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image processing is not limited to the visual band of the electromagnetic (EM) spectrum

Image processing gamma to radio waves ultrasound electron microscopy computer-generated images

image processing image analysis computer vision

Image processing = discipline in which both the input and the output of a process are images

Computer Vision = use computer to emulate human vision (AI) ndashlearning making inferences and take actions based on visual inputs

Image analysis (image understanding) = segmentation partitioning images into regions or objects

(link between image processing and image analysis)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Distinction between image processing image analysis computer vision

low-level mid-level high-level processes

Low-level processes image preprocessing to reduce noise contrast enhancement image sharpening both input and output are images

Mid-level processes segmentation partitioning images into regions or objects description of the objects for computer processing classificationrecognition of individual objects inputs are generally images outputs are attributes extracted from the input image (eg edges contours identity of individual objects)

High-level processes: "making sense" of a set of recognized objects; performing the cognitive functions associated with vision


Digital Image Processing (Gonzalez + Woods) =

processes whose inputs and outputs are images +

processes that extract attributes from images, up to and including recognition of individual objects

(low- and mid-level processes)

Example

automated analysis of text = acquiring an image containing text, preprocessing the image (enhancement, sharpening), extracting (segmenting) the individual characters, describing the characters in a form suitable for computer processing, recognition of individual characters


The Origins of DIP

Newspaper industry: pictures were sent by submarine cable between London and New York.

Before the Bartlane cable picture transmission system (early 1920s): about 1 week.

With the Bartlane system: less than 3 hours.

Specialized printing equipment coded pictures for cable transmission and reconstructed them at the receiving end (1920s: 5 distinct levels of gray; 1929: 15 levels).

This example is not DIP: the computer is not involved.

DIP is linked to and develops at the same rhythm as digital computers (data storage, display, and transmission).


A digital picture produced in 1921 from a coded tape by a telegraph printer with special type faces (McFarlane)

A digital picture made in 1922 from a tape punched after the signals had crossed the Atlantic twice (McFarlane)


1964: the Jet Propulsion Laboratory (Pasadena, California) processed pictures of the moon transmitted by Ranger 7 (correcting image distortions).

The first picture of the moon by a U.S. spacecraft: Ranger 7 took this image on July 31, 1964, about 17 minutes before impacting the lunar surface. (Courtesy of NASA)


1960-1970: image processing techniques were used in medical imaging, remote Earth resources observations, astronomy.

1970s: invention of CAT (computerized axial tomography).

CAT is a process in which a ring of detectors encircles an object (patient) and an X-ray source, concentric with the detector ring, rotates about the object. The X-rays pass through the patient and are collected at the opposite end by the detectors. As the source rotates, the procedure is repeated. Tomography consists of algorithms that use the sensed data to construct an image that represents a "slice" through the object. Motion of the object in a direction perpendicular to the ring of detectors produces a set of "slices" which can be assembled into a 3D representation of the inside of the object.


◊ geography: DIP is used to study pollution patterns from aerial and satellite imagery

◊ archeology: DIP allowed restoring blurred pictures that recorded rare artifacts, lost or damaged after being photographed

◊ physics: enhancing images of experiments (high-energy plasmas, electron microscopy)

◊ astronomy, biology, nuclear medicine, law enforcement, industry

DIP is also used in solving problems dealing with machine perception - extracting from an image information suitable for computer processing (statistical moments, Fourier transform coefficients, …):

◊ automatic character recognition, industrial machine vision for product assembly and inspection, military reconnaissance, automatic processing of fingerprints, machine processing of aerial and satellite imagery for weather prediction, the Internet


Examples of Fields that Use DIP

Images can be classified according to their sources (visual, X-ray, …).

Energy sources for images: electromagnetic energy spectrum, acoustic, ultrasonic, electronic, computer-generated.


Electromagnetic waves can be thought of as propagating sinusoidal

waves of different wavelength or as a stream of massless particles

each moving in a wavelike pattern with the speed of light Each

massless particle contains a certain amount (bundle) of energy Each

bundle of energy is called a photon If spectral bands are grouped

according to energy per photon we obtain the spectrum shown in the

image above ranging from gamma-rays (highest energy) to radio

waves (lowest energy)


Gamma-Ray Imaging

Nuclear medicine, astronomical observations

Nuclear medicine

the approach is to inject a patient with a radioactive isotope that emits gamma rays as it decays

Images are produced from the emissions collected by gamma-ray detectors.

Images of this sort are used to locate sites of bone pathology (infections, tumors).

PET (positron emission tomography): the patient is given a radioactive isotope that emits positrons as it decays.


Examples of gamma-ray imaging

Bone scan and PET image


X-ray imaging

Medical diagnostics, industry, astronomy

An X-ray tube is a vacuum tube with a cathode and an anode. The cathode is heated, causing free electrons to be released. The electrons flow at high speed to the positively charged anode. When the electrons strike a nucleus, energy is released in the form of X-ray radiation. The energy (penetrating power) of the X-rays is controlled by a voltage applied across the anode and by a current applied to the filament in the cathode.

The intensity of the X-rays is modified by absorption as they pass through the patient, and the resulting energy falling on the film develops it, much in the same way that light develops photographic film.


Angiography = contrast-enhancement radiography

Angiograms = images of blood vessels

A catheter is inserted into an artery or vein in the groin. The catheter is threaded into the blood vessel and guided to the area to be studied. When it reaches that area, an X-ray contrast medium is injected through the catheter. This enhances the contrast of the blood vessels and enables the radiologist to see any irregularities or blockages.

X-rays are used in CAT (computerized axial tomography).

X-rays are also used in industrial processes (examining circuit boards for flaws in manufacturing).

Industrial CAT scans are useful when the parts can be penetrated by X-rays.


Examples of X-ray imaging

Chest X-ray; aortic angiogram; head CT; Cygnus Loop; circuit boards


Imaging in the Ultraviolet Band

Lithography, industrial inspection, microscopy, biological imaging, astronomical observations

Ultraviolet light is used in fluorescence microscopy

Ultraviolet light is not visible to the human eye, but when a photon of ultraviolet radiation collides with an electron in an atom of a fluorescent material, it elevates the electron to a higher energy level. After that, the electron relaxes to a lower level and emits light in the form of a lower-energy photon in the visible (red) light region.

Fluorescence = emission of light by a substance that has absorbed light or other electromagnetic radiation of a different wavelength.

Fluorescence microscope = uses an excitation light to irradiate a prepared specimen and then separates the much weaker radiating fluorescent light from the brighter excitation light.


Imaging in the Visible and Infrared Bands

Light microscopy, astronomy, remote sensing, industry, law enforcement

LANDSAT satellite: obtained and transmitted images of the Earth from space, for the purpose of monitoring environmental conditions on the planet.

Weather observation and prediction are major applications of multispectral imaging from satellites.


Table 1 - Thematic bands in NASA's LANDSAT satellite


Satellite images of the Washington, D.C. area in the spectral bands of Table 1


Examples of light microscopy

Taxol (anticancer agent), magnified 250X; cholesterol (40X); microprocessor (60X); nickel oxide thin film (600X); surface of audio CD (1750X); organic superconductor (450X)


Automated visual inspection of manufactured goods

a - a circuit board controller
b - packaged pills
c - bottles
d - air bubbles in a clear-plastic product
e - cereal
f - image of intraocular implant


Imaging in the Microwave Band

The dominant application of imaging in the microwave band is radar.

• radar has the ability to collect data over virtually any region at any time, regardless of weather or ambient light conditions

• some radar waves can penetrate clouds; under certain conditions they can also penetrate vegetation, ice, and dry sand

• sometimes radar is the only way to explore inaccessible regions of the Earth's surface

An imaging radar works like a flash camera: it provides its own illumination (microwave pulses) to light an area on the ground and takes a snapshot image. Instead of a camera lens, a radar uses an antenna and a digital device to record the images. In a radar image, one can see only the microwave energy that was reflected back toward the radar antenna.


Spaceborne radar image of mountains in southeast Tibet


Imaging in the Radio Band

Medicine, astronomy

MRI = Magnetic Resonance Imaging

This technique places the patient in a powerful magnet and passes short pulses of radio waves through his or her body

Each pulse causes a responding pulse of radio waves to be emitted by the patient's tissues.

The location from which these signals originate and their strength are determined by a computer which produces a 2D picture of a section of the patient


MRI images of a human knee (left) and spine (right)


Images of the Crab Pulsar covering the electromagnetic spectrum

Gamma, X-ray, optical, infrared, radio


Other Imaging Modalities

acoustic imaging, electron microscopy, synthetic (computer-generated) imaging

Imaging using sound: geological exploration, industry, medicine

Mineral and oil exploration

For image acquisition over land, one of the main approaches is to use a large truck and a large flat steel plate. The plate is pressed on the ground by the truck, and the truck is vibrated through a frequency spectrum up to 100 Hz. The strength and the speed of the returning sound waves are determined by the composition of the Earth below the surface. These are analyzed by a computer, and images are generated from the resulting analysis.


Biometry - iris


Biometry - fingerprint


Face detection and recognition


Gender identification


Image morphing


Fundamental Steps in DIP

methods whose input and output are images

methods whose inputs are images but whose outputs are attributes extracted from those images


Outputs are images

• image acquisition
• image filtering and enhancement
• image restoration
• color image processing
• wavelets and multiresolution processing
• compression
• morphological processing


Outputs are attributes

• morphological processing
• segmentation
• representation and description
• object recognition


Image acquisition - may involve preprocessing such as scaling

Image enhancement

• manipulating an image so that the result is more suitable than the original for a specific operation
• enhancement is problem oriented
• there is no general 'theory' of image enhancement
• enhancement uses subjective methods for image improvement
• enhancement is based on human subjective preferences regarding what is a "good" enhancement result


Image restoration

• improving the appearance of an image
• restoration is objective - the techniques for restoration are based on mathematical or probabilistic models of image degradation

Color image processing

• fundamental concepts in color models
• basic color processing in a digital domain

Wavelets and multiresolution processing

• representing images in various degrees of resolution


Compression

reducing the storage required to save an image or the bandwidth required to transmit it

Morphological processing

• tools for extracting image components that are useful in the representation and description of shape
• a transition from processes that output images to processes that output image attributes


Segmentation

• partitioning an image into its constituent parts or objects
• autonomous segmentation is one of the most difficult tasks of DIP
• the more accurate the segmentation, the more likely recognition is to succeed

Representation and description (almost always follows segmentation)

• segmentation produces either the boundary of a region or all the points in the region itself
• converting the data produced by segmentation to a form suitable for computer processing


• boundary representation: the focus is on external shape characteristics, such as corners or inflections
• complete region: the focus is on internal properties, such as texture or skeletal shape
• description is also called feature extraction - extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another

Object recognition

the process of assigning a label (e.g. "vehicle") to an object based on its descriptors

Knowledge database


Simplified diagram of a cross section of the human eye


Three membranes enclose the eye: the cornea and sclera outer cover, the choroid, and the retina.

The cornea is a tough, transparent tissue that covers the anterior surface of the eye.

Continuous with the cornea, the sclera is an opaque membrane that encloses the remainder of the optic globe.

The choroid lies directly below the sclera. This membrane contains a network of blood vessels (the major source of nutrition of the eye). The choroid is pigmented and helps reduce the amount of light entering the eye. The choroid is divided (at its anterior extreme) into the ciliary body and the iris. The iris contracts and expands to control the amount of light.


The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to the ciliary body (60-70% water, 6% fat, protein). The lens is slightly yellow and absorbs approximately 8% of the visible light (infrared and ultraviolet light are absorbed by proteins in the lens).

The innermost membrane is the retina. When the eye is properly focused, light from an object outside the eye is imaged on the retina. Vision is possible because of the distribution of discrete light receptors on the surface of the retina: cones and rods (6-7 million cones, 75-150 million rods).

Cones: located in the central part of the retina (the fovea); sensitive to colors; vision of detail; each cone is linked to its own nerve. Cone vision = photopic or bright-light vision.

Fovea = the place where the image of the object of interest falls on.


Rods: distributed over all the retina surface; several rods are connected to a single nerve; not specialized in detail vision; serve to give a general, overall picture of the field of view; not involved in color vision; sensitive to low levels of illumination.

Blind spot: region without receptors.


Image formation in the eye

Ordinary photographic camera: the lens has a fixed focal length; focusing at various distances is done by modifying the distance between the lens and the image plane (where the film or imaging chip is located).

Human eye: the distance between the lens and the retina (the imaging region) is fixed; the focal length needed to achieve proper focus is obtained by varying the shape of the lens (the fibers in the ciliary body accomplish this, flattening or thickening the lens for distant or near objects, respectively).

Distance between lens and retina along the visual axis = 17 mm

Range of focal lengths = 14 mm to 17 mm


Illustration of the Mach band effect. Perceived intensity is not a simple function of actual intensity.


All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter


Optical illusions


(Optical-illusion slide with color names; the Romanian words translate to: BLUE, GREEN, YELLOW, RED, ORANGE, YELLOW, BURGUNDY, WHITE)


Light

• achromatic or monochromatic light - light that is void of color; the attribute of such light is its intensity, or amount. "Gray level" is used to describe monochromatic intensity, because it ranges from black, to grays, and to white.
• chromatic light spans the electromagnetic energy spectrum from approximately 0.43 to 0.79 μm.

Quantities that describe the quality of a chromatic light source:

• radiance: the total amount of energy that flows from the light source; it is usually measured in watts (W)
• luminance: measured in lumens (lm); gives a measure of the amount of energy an observer perceives from a light source


For example, light emitted from a source operating in the far infrared region of the spectrum could have significant energy (radiance), but an observer would hardly perceive it; its luminance would be almost zero.

• brightness: a subjective descriptor of light perception that is practically impossible to measure. It embodies the achromatic notion of intensity and is one of the key factors in describing color sensation.


The physical meaning of f(x, y) is determined by the source of the image.

For an image generated from a physical process, f(x, y) is proportional to the energy radiated by the physical source, so

    0 < f(x, y) < ∞

f(x, y) is characterized by two components:

i(x, y) = illumination component: the amount of source illumination incident on the scene being viewed

r(x, y) = reflectance component: the amount of illumination reflected by the objects in the scene

    f(x, y) = i(x, y) r(x, y),   0 < i(x, y) < ∞,   0 < r(x, y) < 1


r(x, y) = 0: total absorption; r(x, y) = 1: total reflectance

i(x, y) is determined by the illumination source.

r(x, y) is determined by the characteristics of the imaged objects.

The intensity of a monochrome image at (x, y) is the gray level l = f(x, y), with

    L_min ≤ l ≤ L_max,   L_min = i_min r_min,   L_max = i_max r_max

In practice, typical indoor values without additional illumination are L_min ≈ 10 and L_max ≈ 1000.

The interval [L_min, L_max] is shifted to [0, L-1]: l = 0 is black and l = L-1 is white, with intermediate values being shades of gray. This interval is called the gray (or intensity) scale.


Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed scene

Converting a continuous image f to digital form:

- digitizing the coordinate values (x, y) is called sampling
- digitizing the amplitude values f(x, y) is called quantization


Continuous image projected onto a sensor array; result of image sampling and quantization


Representing Digital Images

(x, y), x = 0, 1, …, M-1, y = 0, 1, …, N-1 - spatial variables or spatial coordinates. A digital image can be written as an M × N matrix:

    f(x, y) = [ f(0,0)     f(0,1)     …   f(0,N-1)
                f(1,0)     f(1,1)     …   f(1,N-1)
                …
                f(M-1,0)   f(M-1,1)   …   f(M-1,N-1) ]

Each element of the matrix is called an image element, picture element, or pixel. In equivalent matrix notation,

    A = [a_ij],   where   a_ij = f(x = i, y = j) = f(i, j)

f(0,0) is the upper left corner of the image.


M, N > 0; the number of intensity levels is an integer power of 2, L = 2^k, and a_ij ∈ [0, L-1].

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system. Upper limit: determined by saturation; lower limit: noise.


Number of bits required to store a digitized image:

    b = M × N × k   (for M = N, b = N² k)

When an image can have 2^k intensity levels, the image is referred to as a k-bit image; an image with 256 discrete intensity values is an 8-bit image.
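As a quick worked example (a minimal sketch, not from the course materials; the image size is arbitrary):

    # number of bits needed to store an M x N image quantized with k bits/pixel
    def storage_bits(M: int, N: int, k: int) -> int:
        return M * N * k

    print(storage_bits(1024, 1024, 8))   # 8388608 bits = 1 MB for an 8-bit megapixel image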


Spatial and Intensity Resolution

Spatial resolution - the smallest discernible detail in an image

Measures: line pairs per unit distance; dots (pixels) per unit distance

Image resolution = the largest number of discernible line pairs per unit distance

(e.g. 100 line pairs per mm)

Dots per unit distance are commonly used in printing and publishing

In the US, the measure is expressed in dots per inch (dpi).

(Newspapers are printed at 75 dpi, glossy brochures at 175 dpi.)

Intensity resolution - the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations

L = 2^k; most commonly, k = 8

Intensity resolution in practice is given by k (number of bits used to quantize intensity)


Fig. 1 - Reducing spatial resolution: 1250 dpi (upper left), 300 dpi (upper right), 150 dpi (lower left), 72 dpi (lower right)


Reducing the number of gray levels: 256, 128, 64, 32


Reducing the number of gray levels: 16, 8, 4, 2


Image Interpolation - used in zooming, shrinking, rotating, and geometric corrections.

Shrinking and zooming are image resizing operations - image resampling methods.

Interpolation is the process of using known data to estimate values at unknown locations.

Suppose we have an image of size 500 × 500 pixels that has to be enlarged 1.5 times, to 750 × 750 pixels. One way to do this is to create an imaginary 750 × 750 grid with the same spacing as the original and then shrink it so that it fits exactly over the original image. The pixel spacing in the 750 × 750 grid will be less than in the original image.

Problem: assignment of intensity levels in the new 750 × 750 grid.

Nearest neighbor interpolation: assign to every point in the new grid (750 × 750) the intensity of the closest pixel (nearest neighbor) from the old/original grid (500 × 500). This technique has the tendency to produce undesirable effects, like severe distortion of straight edges.
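A minimal sketch of nearest neighbor zooming, assuming a grayscale image stored as a NumPy array (the function name and the zoom factor are illustrative):

    import numpy as np

    def zoom_nearest(img: np.ndarray, factor: float) -> np.ndarray:
        # map every output pixel back to the closest input pixel
        M, N = img.shape
        rows = np.minimum((np.arange(int(M * factor)) / factor).round().astype(int), M - 1)
        cols = np.minimum((np.arange(int(N * factor)) / factor).round().astype(int), N - 1)
        return img[rows[:, None], cols[None, :]]

    img = np.random.randint(0, 256, (500, 500), dtype=np.uint8)
    print(zoom_nearest(img, 1.5).shape)   # (750, 750)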


Bilinear interpolation - assign to the new (x, y) location the intensity

    v(x, y) = a x + b y + c x y + d

where the four coefficients are determined from the 4 equations in 4 unknowns that can be written using the 4 nearest neighbors of point (x, y).

Bilinear interpolation gives much better results than nearest neighbor interpolation, with a modest increase in computational effort.

Bicubic interpolation - assign to the new (x, y) location an intensity that involves the 16 nearest neighbors of the point:

    v(x, y) = Σ_{i=0}^{3} Σ_{j=0}^{3} c_ij x^i y^j

The coefficients c_ij are obtained by solving a 16 × 16 linear system written using the intensity levels of the 16 nearest neighbors of (x, y).
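A sketch of bilinear interpolation at a single fractional location (coordinates in (row, column) order; clamping at the border is an implementation choice, not part of the definition):

    import numpy as np

    def bilinear(img: np.ndarray, x: float, y: float) -> float:
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        x1 = min(x0 + 1, img.shape[0] - 1)
        y1 = min(y0 + 1, img.shape[1] - 1)
        dx, dy = x - x0, y - y0
        # weighted average of the 4 nearest neighbors
        return ((1 - dx) * (1 - dy) * img[x0, y0] + (1 - dx) * dy * img[x0, y1]
                + dx * (1 - dy) * img[x1, y0] + dx * dy * img[x1, y1])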


Generally, bicubic interpolation does a better job of preserving fine detail than the bilinear technique. Bicubic interpolation is the standard used in commercial image-editing programs such as Adobe Photoshop and Corel PhotoPaint.

Figure 2(a) is the same as Fig. 1(d), which was obtained by reducing the resolution of the 1250 dpi image in Fig. 1(a) to 72 dpi (the size shrank from 3692 × 2812 to 213 × 162) and then zooming the reduced image back to its original size. To generate Fig. 1(d), nearest neighbor interpolation was used (both for shrinking and zooming).

Figures 2(b) and (c) were generated using the same steps but using bilinear and bicubic

interpolation, respectively. Figures 2(d)-(f) were obtained by reducing the resolution

from 1250 dpi to 150 dpi (instead of 72 dpi)


Fig. 2 - Interpolation examples for zooming and shrinking (nearest neighbor, bilinear, bicubic)


Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors:

    (x+1, y), (x-1, y), (x, y+1), (x, y-1)

This set of pixels, called the 4-neighbors of p, is denoted N_4(p).

The 4 diagonal neighbors of p have coordinates

    (x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)

and are denoted N_D(p).

The horizontal, vertical, and diagonal neighbors together are called the 8-neighbors of p, denoted N_8(p).

If (x, y) is on the border of the image, some of the neighbor locations in N_D(p) and N_8(p) fall outside the image.
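A small sketch of these neighbor sets (coordinates are returned unclipped, so callers must discard locations that fall outside the image, as noted above):

    def N4(p):
        x, y = p
        return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

    def ND(p):
        x, y = p
        return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

    def N8(p):
        return N4(p) | ND(p)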


Adjacency, Connectivity, Regions, Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image, V ⊆ {0, 1}; for adjacency of pixels with value 1, V = {1}

- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, …, 255}

We consider 3 types of adjacency

(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if q ∈ N_4(p)

(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if q ∈ N_8(p)

(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if

    q ∈ N_4(p), or
    q ∈ N_D(p) and the set N_4(p) ∩ N_4(q) has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency. It is introduced to eliminate the ambiguities that often arise when 8-adjacency is used. Consider the following example of a


binary image (with V = {1}):

    0 1 1
    0 0 1
    0 0 1

The three pixels at the top (first line) of this example show multiple (ambiguous) 8-adjacency, indicated by dashed lines in the original figure. This ambiguity is removed by using m-adjacency.


A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

    (x_0, y_0), (x_1, y_1), …, (x_n, y_n)

where (x_0, y_0) = (x, y), (x_n, y_n) = (s, t), and the pixels (x_i, y_i) and (x_{i-1}, y_{i-1}) are adjacent for i = 1, 2, …, n.

The length of the path is n. If (x_0, y_0) = (x_n, y_n), the path is closed.

Depending on the type of adjacency considered, the paths are 4-, 8-, or m-paths.

Let S denote a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting only of pixels from S.

S is a connected set if there is a path in S between any 2 pixels of S.

Let R be a subset of pixels in an image. R is a region of the image if R is a connected set.

Two regions R_1 and R_2 are said to be adjacent if R_1 ∪ R_2 forms a connected set. Regions that are not adjacent are said to be disjoint. When referring to regions, only 4- and 8-adjacency are considered.


Suppose that an image contains K disjoint regions R_k, k = 1, …, K, none of which touches the image border. Let R_u denote the union of all K regions, and (R_u)^c its complement:

    R_u = R_1 ∪ R_2 ∪ … ∪ R_K

We call all the points in R_u the foreground of the image, and the points in (R_u)^c the background of the image.

The boundary (border or contour) of a region R is the set of points that are adjacent to points in the complement of R, (R)^c - that is, the set of pixels in the region that have at least one background neighbor. This definition is referred to as the inner border, to distinguish it from the notion of outer border, which is the corresponding border in the background.


Distance measures

For pixels p, q and z, with coordinates (x, y), (s, t) and (v, w) respectively, D is a distance function, or metric, if:

(a) D(p, q) ≥ 0, and D(p, q) = 0 iff p = q
(b) D(p, q) = D(q, p)
(c) D(p, z) ≤ D(p, q) + D(q, z)

The Euclidean distance between p and q is defined as

    D_e(p, q) = [(x - s)² + (y - t)²]^(1/2)

The pixels q for which D_e(p, q) ≤ r are the points contained in a disk of radius r centered at (x, y).


The D_4 distance (also called city-block distance) between p and q is defined as

    D_4(p, q) = |x - s| + |y - t|

The pixels q for which D_4(p, q) ≤ r form a diamond centered at (x, y). For example, the pixels with D_4 ≤ 2 form the following contours of constant distance:

            2
        2   1   2
    2   1   0   1   2
        2   1   2
            2

The pixels with D_4 = 1 are the 4-neighbors of (x, y).

The D_8 distance (called the chessboard distance) between p and q is defined as

    D_8(p, q) = max(|x - s|, |y - t|)

The pixels q for which D_8(p, q) ≤ r form a square centered at (x, y).


For example, the pixels with D_8 ≤ 2 form the following contours of constant distance:

    2   2   2   2   2
    2   1   1   1   2
    2   1   0   1   2
    2   1   1   1   2
    2   2   2   2   2

The pixels with D_8 = 1 are the 8-neighbors of (x, y).

The D_4 and D_8 distances are independent of any paths that might exist between p and q, because these distances involve only the coordinates of the points.
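All three metrics are one-liners in code (a sketch; points are given as (x, y) tuples):

    def D_e(p, q):   # Euclidean
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    def D_4(p, q):   # city-block
        return abs(p[0] - q[0]) + abs(p[1] - q[1])

    def D_8(p, q):   # chessboard
        return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

    print(D_e((0, 0), (3, 4)), D_4((0, 0), (3, 4)), D_8((0, 0), (3, 4)))   # 5.0 7 4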


Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

Consider two 2 × 2 images:

    A = [ a11  a12 ]      B = [ b11  b12 ]
        [ a21  a22 ]          [ b21  b22 ]

Array (elementwise) product:

    [ a11 b11   a12 b12 ]
    [ a21 b21   a22 b22 ]

Matrix product:

    [ a11 b11 + a12 b21   a11 b12 + a12 b22 ]
    [ a21 b11 + a22 b21   a21 b12 + a22 b22 ]

We assume array operations unless stated otherwise.
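In NumPy the two products use different operators, which keeps the distinction explicit (a quick illustration; the values are arbitrary):

    import numpy as np

    A = np.array([[1, 2], [3, 4]])
    B = np.array([[5, 6], [7, 8]])

    print(A * B)   # array (elementwise) product: [[ 5 12] [21 32]]
    print(A @ B)   # matrix product:              [[19 22] [43 50]]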


Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether a method is linear or nonlinear. Consider an operator H that produces an output image g(x, y) from an input image f(x, y):

    H[f(x, y)] = g(x, y)

H is said to be a linear operator if

    H[a f_1(x, y) + b f_2(x, y)] = a H[f_1(x, y)] + b H[f_2(x, y)]

for any scalars a, b and any images f_1, f_2.

Example of a nonlinear operator: the maximum value of the pixels of an image, H[f] = max f(x, y). Consider

    f_1 = [ 0  2 ]     f_2 = [ 6  5 ]     a = 1,  b = -1
          [ 2  3 ]           [ 4  7 ]


    max[a f_1 + b f_2] = max [ -6  -3 ] = -2
                             [ -2  -4 ]

but

    a max[f_1] + b max[f_2] = (1)(3) + (-1)(7) = -4 ≠ -2

so the max operator is not linear.
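The same check can be run numerically (a sketch reproducing the counterexample above):

    import numpy as np

    f1 = np.array([[0, 2], [2, 3]])
    f2 = np.array([[6, 5], [4, 7]])
    a, b = 1, -1

    print((a * f1 + b * f2).max())        # -2
    print(a * f1.max() + b * f2.max())    # -4, so max is not linear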

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise η(x, y) to a noiseless image f(x, y):

    g(x, y) = f(x, y) + η(x, y)

where the noise is uncorrelated and has zero average value.

For a random variable z with mean m, E[(z - m)²] is the variance (E(·) is the expected value). The covariance of two random variables z_1 and z_2 is defined as E[(z_1 - m_1)(z_2 - m_2)]. The two random variables are uncorrelated when their covariance is 0.


Objective: reduce noise by adding (averaging) a set of noisy images g_i(x, y) - a technique frequently used in image enhancement:

    ḡ(x, y) = (1/K) Σ_{i=1}^{K} g_i(x, y)

If the noise satisfies the properties stated above, we have

    E{ḡ(x, y)} = f(x, y),    σ²_ḡ(x, y) = (1/K) σ²_η(x, y)

where E{ḡ(x, y)} is the expected value of ḡ, and σ²_ḡ(x, y) and σ²_η(x, y) are the variances of ḡ and η, respectively. The standard deviation (square root of the variance) at any point in the average image is

    σ_ḡ(x, y) = (1/√K) σ_η(x, y)


As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because E{ḡ(x, y)} = f(x, y), this means that ḡ(x, y) approaches f(x, y) as the number of noisy images used in the averaging process increases.
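A quick simulation of this behavior (a sketch; the constant image and the noise level are chosen to echo the example below):

    import numpy as np

    rng = np.random.default_rng(0)
    f = np.full((256, 256), 128.0)                    # noiseless image
    noisy = f + rng.normal(0, 64, (100, 256, 256))    # 100 noisy observations

    for K in (1, 5, 20, 100):
        print(K, noisy[:K].mean(axis=0).std().round(1))   # std falls roughly as 64/sqrt(K)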

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis. Figure 3(a) shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels. Figures 3(b)-(f) show the results of averaging 5, 10, 20, 50 and 100 images, respectively.


Fig. 3 - (a) Image of the galaxy pair NGC 3314 corrupted by additive Gaussian noise; (b)-(f) results of averaging 5, 10, 20, 50, and 100 noisy images


A frequent application of image subtraction is in the enhancement of differences between

images

Fig. 4 - (a) Infrared image of the Washington, D.C. area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between images (a) and (b); black (0) values in Figure 4(c) indicate locations where there is no difference between images (a) and (b).


Mask mode radiography:

    g(x, y) = f(x, y) - h(x, y)

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images called live images (denoted f(x, y)) of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium.

In g(x, y) we find the differences between h and f, as enhanced detail. With images captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries in the area being observed.


Fig. 5 - Angiography subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c), enhanced


An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form

    g(x, y) = f(x, y) h(x, y)

f(x, y) - the "perfect image"; h(x, y) - the shading function.

When the shading function is known:

    f(x, y) = g(x, y) / h(x, y)

When h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, often the shading pattern can be estimated from the image.
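A sketch of division-based shading correction (the epsilon guard against division by zero and the output clipping are implementation details, not part of the model):

    import numpy as np

    def correct_shading(g: np.ndarray, h: np.ndarray) -> np.ndarray:
        # recover f from g = f * h by elementwise division
        f = g.astype(np.float64) / np.maximum(h, 1e-6)
        return np.clip(f, 0, 255).astype(np.uint8)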


Fig. 6 - Shading correction: (a) shaded image of a tungsten filament, magnified 130 times; (b) shading pattern; (c) corrected image


Another use of image multiplication is in masking, also called region of interest (ROI), operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of the ROI can be arbitrary, but usually it is rectangular.
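In code, masking reduces to one elementwise multiplication (a sketch; the rectangle coordinates are arbitrary):

    import numpy as np

    img = np.random.randint(0, 256, (512, 512), dtype=np.uint8)
    mask = np.zeros_like(img)
    mask[100:200, 150:300] = 1    # one rectangular ROI
    roi_only = img * mask         # zero everywhere outside the ROI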

Fig. 7 - (a) Digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b)


In practice, most images are displayed using 8 bits, so the image values are in the range [0, 255]. For TIFF and JPEG images the conversion to this range is automatic, but it depends on the system used. The difference of two images can produce an image with values in the range [-255, 255], and the addition of two images produces values in the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure is to compute

    f_m = f - min(f)

and then

    f_s = K [ f_m / max(f_m) ]

which gives an image f_s with values in the full range [0, K] (K = 255 for 8-bit images).
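A direct implementation of this rescaling (a sketch; the guard for a constant image is an added safety check):

    import numpy as np

    def rescale_intensity(f: np.ndarray, K: int = 255) -> np.ndarray:
        fm = f.astype(np.float64) - f.min()     # f_m = f - min(f)
        fs = K * fm / max(fm.max(), 1e-12)      # f_s = K * f_m / max(f_m)
        return fs.astype(np.uint8)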


Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations:

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels:

    s = T(z)

where z is the intensity of a pixel in the original image and s is the intensity of the corresponding pixel in the processed image.


Neighborhood operations

Let S_xy denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y) based on the values of the intensities of the points in S_xy. For example, if S_xy is a rectangular neighborhood of size m × n centered at (x, y), we can assign the new value of intensity by computing the average value of the pixels in S_xy:

    g(x, y) = (1 / mn) Σ_{(r,c) ∈ S_xy} f(r, c)

The net effect is to perform local blurring in the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
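A sketch of this neighborhood averaging with an m × n window (plain loops for clarity; clamping the window at the image border is one of several possible border policies):

    import numpy as np

    def local_mean(f: np.ndarray, m: int = 3, n: int = 3) -> np.ndarray:
        M, N = f.shape
        g = np.empty((M, N))
        for x in range(M):
            for y in range(N):
                r0, r1 = max(0, x - m // 2), min(M, x + m // 2 + 1)
                c0, c1 = max(0, y - n // 2), min(N, y + n // 2 + 1)
                g[x, y] = f[r0:r1, c0:c1].mean()   # average over S_xy
        return g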


Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules).

A geometric transformation consists of 2 basic operations:

1. a spatial transformation of coordinates
2. intensity interpolation that assigns intensity values to the spatially transformed pixels

The coordinate system transformation:

    (x, y) = T[(v, w)]

(v, w) - pixel coordinates in the original image
(x, y) - pixel coordinates in the transformed image


For example, T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform:

    [x y 1] = [v w 1] T = [v w 1] [ t11  t12  0 ]
                                  [ t21  t22  0 ]        (AT)
                                  [ t31  t32  1 ]

so x = t11 v + t21 w + t31 and y = t12 v + t22 w + t32.

This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3 × 3 matrix equal to the matrix product of the scaling, rotation, and translation matrices from Table 1.


Affine transformations


The preceding transformations relocate pixels of an image to new locations. To complete the process, we have to assign intensity values to those locations. This task is done by using intensity interpolation (nearest neighbor, bilinear, or bicubic interpolation).

In practice we can use equation (AT) in two basic ways

• forward mapping: scan the pixels of the input image (v, w) and compute the new spatial location (x, y) of the corresponding pixel in the new image using (AT) directly.

Problems:

- intensity assignment when 2 or more pixels in the original image are transformed to the same location in the output image
- some output locations have no correspondent in the original image (no intensity assignment)


• inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image,

    (v, w) = T^(-1)[(x, y)]

then interpolate among the nearest input pixels to determine the intensity of the output pixel value.

Inverse mappings are more efficient to implement than forward mappings and are used in numerous commercial implementations of spatial transformations (MATLAB, for example).
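A sketch of inverse mapping with nearest neighbor interpolation, using the [v w 1] row-vector convention of equation (AT) (T is assumed invertible; the function name is illustrative):

    import numpy as np

    def warp_affine(f: np.ndarray, T: np.ndarray, out_shape) -> np.ndarray:
        Tinv = np.linalg.inv(T)
        g = np.zeros(out_shape, dtype=f.dtype)
        for x in range(out_shape[0]):
            for y in range(out_shape[1]):
                v, w, _ = np.array([x, y, 1.0]) @ Tinv   # back to input coordinates
                v, w = int(round(v)), int(round(w))
                if 0 <= v < f.shape[0] and 0 <= w < f.shape[1]:
                    g[x, y] = f[v, w]                    # nearest neighbor
        return g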


Image registration - align two or more images of the same scene

In image registration, we have available the input and output images, but the specific transformation that produced the output image from the input is generally unknown.

The problem is to estimate the transformation function and then use it to register the two images.

- it may be of interest to align (register) two or more images taken at approximately the same time but with different imaging systems (an MRI scanner and a PET scanner, for example)

- or to align images of a given location taken by the same instrument at different moments in time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image


How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors. These objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points.

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points both in the input image and in the reference image. A simple model based on a bilinear approximation is given by

    x = c1 v + c2 w + c3 v w + c4
    y = c5 v + c6 w + c7 v w + c8

where (v, w) and (x, y) are the coordinates of the tie points (the 4 pairs of tie points give an 8 × 8 linear system for c1, …, c8).
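A sketch of estimating the eight coefficients from 4 tie-point pairs (the coordinates below are placeholders; with exactly 4 points the least-squares solution is exact):

    import numpy as np

    vw = np.array([[10, 10], [10, 200], [200, 10], [200, 200]], float)   # input image
    xy = np.array([[12, 14], [11, 205], [204, 13], [206, 208]], float)   # reference image

    A = np.column_stack([vw[:, 0], vw[:, 1], vw[:, 0] * vw[:, 1], np.ones(4)])
    c14, *_ = np.linalg.lstsq(A, xy[:, 0], rcond=None)   # c1..c4
    c58, *_ = np.linalg.lstsq(A, xy[:, 1], rcond=None)   # c5..c8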


When 4 tie points are insufficient to obtain satisfactory registration, an approach used frequently is to select a larger number of tie points and, using this new set of tie points, subdivide the image into rectangular regions marked by groups of 4 tie points. On each subregion marked by 4 tie points we apply the transformation model described above. The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometric distortion.


(a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c)


Probabilistic Methods

z_k = the values of all possible intensities in an M × N digital image, k = 0, 1, …, L-1

p(z_k) = the probability that the intensity level z_k occurs in the given image:

    p(z_k) = n_k / (M N)

n_k = the number of times that intensity z_k occurs in the image (MN is the total number of pixels in the image), so

    Σ_{k=0}^{L-1} p(z_k) = 1

The mean (average) intensity of an image is given by

    m = Σ_{k=0}^{L-1} z_k p(z_k)


The variance of the intensities is

    σ² = Σ_{k=0}^{L-1} (z_k - m)² p(z_k)

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

    μ_n(z) = Σ_{k=0}^{L-1} (z_k - m)^n p(z_k)

(Note that μ_0(z) = 1, μ_1(z) = 0, and μ_2(z) = σ².)

μ_3(z) > 0: the intensities are biased to values higher than the mean

μ_3(z) < 0: the intensities are biased to values lower than the mean


μ_3(z) = 0: the intensities are distributed approximately equally on both sides of the mean
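These statistics follow directly from the normalized histogram; a sketch for an 8-bit image:

    import numpy as np

    def intensity_stats(img: np.ndarray, L: int = 256):
        p = np.bincount(img.ravel(), minlength=L) / img.size   # p(z_k)
        z = np.arange(L)
        m = (z * p).sum()                    # mean
        var = ((z - m) ** 2 * p).sum()       # variance, mu_2
        mu3 = ((z - m) ** 3 * p).sum()       # third moment: direction of bias
        return m, var, mu3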

Fig. 1 - (a) Low contrast; (b) medium contrast; (c) high contrast

Figure 1(a): standard deviation 14.3 (variance = 204.5)

Figure 1(b): standard deviation 31.6 (variance = 998.6)

Figure 1(c): standard deviation 49.2 (variance = 2420.6)


Intensity Transformations and Spatial Filtering

    g(x, y) = T[f(x, y)]

f(x, y) - input image; g(x, y) - output image; T - an operator on f defined over a neighborhood of (x, y).

The neighborhood of the point (x, y), denoted S_xy, usually is rectangular, centered on (x, y), and much smaller in size than the image.


- in spatial filtering, the operator T (the neighborhood and the operation applied on it) is called a spatial filter (spatial mask, kernel, template, or window).

When S_xy has size 1 × 1 (a single pixel), T becomes an intensity (gray-level or mapping) transformation function:

    s = T(r)

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2, left: T produces an output image of higher contrast than the original by darkening the intensity levels below k and brightening the levels above k; this technique is called contrast stretching.

Fig. 2 - Intensity transformation functions: left - contrast stretching; right - thresholding function


Figure 2, right: T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L-1] is obtained using the function

    s = T(r) = L - 1 - r

- the equivalent of a photographic negative
- a technique suited for enhancing white or gray detail embedded in dark regions of an image


Original image and its negative


Log transformations: s = T(r) = c log(1 + r), where c is a constant and r ≥ 0

Some basic intensity transformation functions


This transformation maps a narrow range of low intensity values in the input into a wider range of output values. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values; the opposite is true for the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a) has intensity values in the range 0 to 1.5 × 10^6.

Figure 4(b) is the log transformation of Figure 4(a) with c = 1; its range is 0 to 6.2.

Fig. 4 - (a) Fourier spectrum; (b) log transformation applied to (a), with c = 1


Power-Law (Gamma) Transformations

    s = T(r) = c r^γ   (c, γ - positive constants)

(sometimes written as s = c (r + ε)^γ, with an offset ε that gives a measurable output when the input is zero)

Plots of gamma transformation for different values of γ (c=1)


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. The curves with γ > 1 have the opposite effect of those generated with γ < 1.

c = γ = 1: identity transformation.

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
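A sketch of the gamma transformation on an 8-bit image (intensities normalized to [0, 1] before applying the power law; c = 1):

    import numpy as np

    def gamma_transform(img: np.ndarray, gamma: float) -> np.ndarray:
        r = img.astype(np.float64) / 255.0   # normalize to [0, 1]
        s = r ** gamma                       # s = c * r^gamma, with c = 1
        return (255 * s).astype(np.uint8)

    # gamma < 1 brightens dark regions; gamma > 1 darkens, as in the aerial example below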


(a) Aerial image; (b)-(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively


Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

Fig. 5 (a)-(d)


The contrast-stretching transformation is piecewise linear, determined by the control points (r1, s1) and (r2, s2):

    T(r) = (s1 / r1) r                                    for 0 ≤ r < r1
    T(r) = ((s2 - s1) / (r2 - r1)) (r - r1) + s1          for r1 ≤ r ≤ r2
    T(r) = ((L - 1 - s2) / (L - 1 - r2)) (r - r2) + s2    for r2 < r ≤ L - 1


r1 = s1 and r2 = s2: identity transformation (no change)

r1 = r2, s1 = 0, s2 = L - 1: thresholding function

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c): contrast stretching, obtained by setting the parameters (r1, s1) = (r_min, 0) and (r2, s2) = (r_max, L-1), where r_min and r_max denote the minimum and maximum gray levels in the image, respectively. Thus, the transformation function stretched the levels linearly from their original range to the full range [0, L-1].

Figure 5(d): the thresholding function was used, with (r1, s1) = (m, 0) and (r2, s2) = (m, L-1), where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.
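A sketch of the piecewise-linear stretch via a lookup table (np.interp builds exactly this kind of piecewise-linear map through the control points; it assumes 0 < r1 < r2 < L-1):

    import numpy as np

    def contrast_stretch(img, r1, s1, r2, s2, L=256):
        r = np.arange(L)
        lut = np.interp(r, [0, r1, r2, L - 1], [0, s1, s2, L - 1])
        return lut[img].astype(np.uint8)

    # full-range stretch: r1 = img.min(), s1 = 0, r2 = img.max(), s2 = 255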


Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing:

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities; this produces a binary image (Fig. 6, middle)

2. brighten (or darken) the desired range of intensities but leave unchanged all other intensities in the image (Fig. 6, right)


Figure 6 (left): aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the scale of intensities; this type of enhancement produces a binary image.

(The first technique highlights an intensity range [A, B] and reduces all other intensities to a lower level; the second highlights the range [A, B] and preserves all other intensities.)


The binary result is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example).

In Figure 6 (right), the second technique was used: a band of intensities in the mid-gray region around the mean intensity was set to black; the other intensities remain unchanged.

Fig. 6 - Aortic angiogram and intensity-sliced versions


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2.

This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 - the lowest-order bit; plane 8 - the highest-order bit).


The binary image for the 8th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all the intensities between 0 and 127 to 0, and maps all levels between 128 and 255 to 1. The bit-plane slicing technique is useful for analyzing the relative importance of each bit in the image, which helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
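A sketch of extracting all eight bit planes (plane 1 = lowest-order bit, plane 8 = highest-order bit, matching the numbering above):

    import numpy as np

    def bit_planes(img: np.ndarray):
        # planes[0] is plane 1 (LSB), planes[7] is plane 8 (MSB)
        return [(img >> b) & 1 for b in range(8)]

    # plane 8 equals thresholding at 128: (img >= 128) yields the same binary image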

  • DIP 1 2017
  • DIP 02 (2017)
Page 3: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Bibliography

bull RC Gonzales RE Woods Digital Image Processing Prentice Hall2008 3rd ed

bull RC Gonzales RE Woods SL Eddins Digital Image Processing Using MATLAB Prentice Hall 2003

bull httpwwwimageprocessingplacecom

bullM Petrou C Petrou Image Processing the fundamentals John Wiley 2010 2nd ed

bull W Burger MJ Burge Digital Image Processing An Algorithmic Introduction Using Java Springer 2008

Digital Image ProcessingDigital Image Processing

Week 1Week 1

bull Image Processing Toolbox (httpwwwmathworkscomproductsimage)

bull C Solomon T Breckon Fundamentals of Digital Image Processing A Practical Approach with Examples in Matlab Wiley-Blackwell 2011

bullWK Pratt Digital Image Processing Wiley-Interscience 2007

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Meet LenaThe First Lady of the Internet

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Lenna Soderberg (Sjoumloumlblom) and Jeff Seideman taken in May 1997Imaging Science amp Technology Conference

Digital Image ProcessingDigital Image Processing

Week 1Week 1

What is Digital Image Processing

f(xy) = intensity gray level of the image at spatial point (xy)

x y f(xy) ndash finite discrete quantities -gt digital image

Digital Image Processing = processing digital images by means of a digital computer

A digital image is composed of a finite number of elements (location value of intensity)

These elements are called picture elements image elements pels pixels

( )i j ijx y f

3 f D

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image processing is not limited to the visual band of the electromagnetic (EM) spectrum

Image processing gamma to radio waves ultrasound electron microscopy computer-generated images

image processing image analysis computer vision

Image processing = discipline in which both the input and the output of a process are images

Computer Vision = use computer to emulate human vision (AI) ndashlearning making inferences and take actions based on visual inputs

Image analysis (image understanding) = segmentation partitioning images into regions or objects

(link between image processing and image analysis)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Distinction between image processing image analysis computer vision

low-level mid-level high-level processes

Low-level processes image preprocessing to reduce noise contrast enhancement image sharpening both input and output are images

Mid-level processes segmentation partitioning images into regions or objects description of the objects for computer processing classificationrecognition of individual objects inputs are generally images outputs are attributes extracted from the input image (eg edges contours identity of individual objects)

High-level processes ldquomaking senserdquo of a set of recognized objects performing the cognitive functions associated with vision

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image Processing (Gonzalez + Woods) =

processes whose inputs and outputs are images +

processes that extract attributes from images recognition of individual objects

(low- and mid-level processes)

Example

automated analysis of text = acquiring an image containing text preprocessing the image (enhancement sharpening) extracting (segmenting) the individual characaters describing the characters in a form suitable for computer processing recognition of individual characters

Digital Image ProcessingDigital Image Processing

Week 1Week 1

The Origins of DIP

Newspaper industry: pictures were sent by submarine cable between London and New York.

Before the Bartlane cable picture transmission system (early 1920s) – 1 week.

With the Bartlane system: less than 3 hours.

Specialized printing equipment coded pictures for cable transmission and reconstructed them at the receiving end (1920s – 5 distinct levels of gray; 1929 – 15 levels).

This example is not DIP: the computer is not involved.

DIP is linked to, and develops at the same rhythm as, digital computers (data storage, display and transmission).


A digital picture produced in 1921 from a coded tape by a telegraph printer with special type faces (McFarlane)

A digital picture made in 1922 from a tape punched after the signals had crossed the Atlantic twice (McFarlane)


1964: the Jet Propulsion Laboratory (Pasadena, California) processed pictures of the moon transmitted by Ranger 7 (corrected image distortions).

The first picture of the moon by a U.S. spacecraft: Ranger 7 took this image on July 31, 1964, about 17 minutes before impacting the lunar surface. (Courtesy of NASA)


1960-1970 – image processing techniques were used in medical imaging, remote Earth resources observations, astronomy.

1970s – invention of CAT (computerized axial tomography).

CAT is a process in which a ring of detectors encircles an object (patient) and an X-ray source, concentric with the detector ring, rotates about the object. The X-rays pass through the patient and are collected at the opposite end by the detectors. As the source rotates, the procedure is repeated. Tomography consists of algorithms that use the sensed data to construct an image that represents a "slice" through the object. Motion of the object in a direction perpendicular to the ring of detectors produces a set of "slices", which can be assembled into a 3D representation of the inside of the object.


◊ geography – DIP is used to study pollution patterns from aerial and satellite imagery

◊ archeology – DIP allowed restoring blurred pictures that recorded rare artifacts, lost or damaged after being photographed

◊ physics – enhance images of experiments (high-energy plasmas, electron microscopy)

◊ astronomy, biology, nuclear medicine, law enforcement, industry

DIP – used in solving problems dealing with machine perception – extracting from an image information suitable for computer processing (statistical moments, Fourier transform coefficients, …)

◊ automatic character recognition, industrial machine vision for product assembly and inspection, military reconnaissance, automatic processing of fingerprints, machine processing of aerial and satellite imagery for weather prediction, Internet


Examples of Fields that Use DIP

Images can be classified according to their sources (visual, X-ray, …).

Energy sources for images: electromagnetic energy spectrum, acoustic, ultrasonic, electronic, computer-generated.


Electromagnetic waves can be thought of as propagating sinusoidal waves of different wavelength, or as a stream of massless particles, each moving in a wavelike pattern with the speed of light. Each massless particle contains a certain amount (bundle) of energy, and each bundle of energy is called a photon. If spectral bands are grouped according to energy per photon, we obtain the EM spectrum, ranging from gamma rays (highest energy) to radio waves (lowest energy).


Gamma-Ray Imaging

Nuclear medicine, astronomical observations

Nuclear medicine:

the approach is to inject a patient with a radioactive isotope that emits gamma rays as it decays.

Images are produced from the emissions collected by gamma-ray detectors.

Images of this sort are used to locate sites of bone pathology (infections, tumors).

PET (positron emission tomography) – the patient is given a radioactive isotope that emits positrons as it decays.


Examples of gamma-ray imaging

Bone scan; PET image


X-ray imaging

Medical diagnostics, industry, astronomy

An X-ray tube is a vacuum tube with a cathode and an anode. The cathode is heated, causing free electrons to be released. The electrons flow at high speed to the positively charged anode. When the electrons strike a nucleus, energy is released in the form of X-ray radiation. The energy (penetrating power) of the X-rays is controlled by a voltage applied across the anode and by a current applied to the filament in the cathode.

The intensity of the X-rays is modified by absorption as they pass through the patient, and the resulting energy falling onto a film develops it, much in the same way that light develops photographic film.


Angiography = contrast-enhancement radiography

Angiograms = images of blood vessels

A catheter is inserted into an artery or vein in the groin. The catheter is threaded into the blood vessel and guided to the area to be studied. When it reaches the area to be studied, an X-ray contrast medium is injected through the catheter. This enhances the contrast of the blood vessels and enables the radiologist to see any irregularities or blockages.

X-rays are used in CAT (computerized axial tomography).

X-rays are also used in industrial processes (examining circuit boards for flaws in manufacturing).

Industrial CAT scans are useful when the parts can be penetrated by X-rays.


Examples of X-ray imaging

Chest X-ray; aortic angiogram

Head CT; Cygnus Loop; circuit boards


Imaging in the Ultraviolet Band

Lithography, industrial inspection, microscopy, biological imaging, astronomical observations

Ultraviolet light is used in fluorescence microscopy.

Ultraviolet light is not visible to the human eye, but when a photon of ultraviolet radiation collides with an electron in an atom of a fluorescent material, it elevates the electron to a higher energy level. After that, the electron relaxes to a lower level and emits light in the form of a lower-energy photon, in the visible (red) light region.

Fluorescence = emission of light by a substance that has absorbed light or other electromagnetic radiation of a different wavelength.

Fluorescence microscope = uses an excitation light to irradiate a prepared specimen, and then separates the much weaker radiating fluorescent light from the brighter excitation light.


Imaging in the Visible and Infrared Bands

Light microscopy, astronomy, remote sensing, industry, law enforcement

LANDSAT satellite – obtained and transmitted images of the Earth from space, for purposes of monitoring environmental conditions on the planet.

Weather observation and prediction are major applications of multispectral imaging from satellites.


Table 1 – Thematic bands in NASA's LANDSAT satellite


Satellite images of the Washington, D.C. area in the spectral bands of Table 1


Examples of light microscopy

Taxol (anticancer agent), magnified 250X

Cholesterol (40X)

Microprocessor (60X)

Nickel oxide thin film (600X)

Surface of audio CD (1750X)

Organic superconductor (450X)


Automated visual inspection of manufactured goods

(a) a circuit board controller; (b) packaged pills; (c) bottles; (d) air bubbles in a clear-plastic product; (e) cereal; (f) image of an intraocular implant


Imaging in the Microwave Band

The dominant application of imaging in the microwave band: radar.

• radar has the ability to collect data over virtually any region at any time, regardless of weather or ambient light conditions

• some radar waves can penetrate clouds; under certain conditions they can also penetrate vegetation, ice, dry sand

• sometimes radar is the only way to explore inaccessible regions of the Earth's surface

An imaging radar works like a flash camera: it provides its own illumination (microwave pulses) to light an area on the ground and take a snapshot image. Instead of a camera lens, a radar uses an antenna and digital devices to record the images. In a radar image, one can see only the microwave energy that was reflected back toward the radar antenna.


Spaceborne radar image of mountains in southeast Tibet


Imaging in the Radio Band

medicine, astronomy

MRI = Magnetic Resonance Imaging

This technique places the patient in a powerful magnet and passes short pulses of radio waves through his or her body.

Each pulse causes a responding pulse of radio waves to be emitted by the patient's tissues.

The location from which these signals originate and their strength are determined by a computer, which produces a 2D picture of a section of the patient.


MRI images of a human knee (left) and spine (right)


Images of the Crab Pulsar covering the electromagnetic spectrum:

Gamma, X-ray, Optical, Infrared, Radio


Other Imaging Modalities

acoustic imaging, electron microscopy, synthetic (computer-generated) imaging

Imaging using sound: geological explorations, industry, medicine

Mineral and oil exploration:

For image acquisition over land, one of the main approaches is to use a large truck and a large flat steel plate. The plate is pressed on the ground by the truck, and the truck is vibrated through a frequency spectrum up to 100 Hz. The strength and the speed of the returning sound waves are determined by the composition of the Earth below the surface. These are analyzed by a computer, and images are generated from the resulting analysis.


Biometry - iris


Biometry - fingerprint


Face detection and recognition


Gender identification


Image morphing


Fundamental Steps in DIP

methods whose input and output are images

methods whose inputs are images but whose outputs are attributes extracted from those images


Outputs are images:

• image acquisition

• image filtering and enhancement

• image restoration

• color image processing

• wavelets and multiresolution processing

• compression

• morphological processing


Outputs are attributes:

• morphological processing

• segmentation

• representation and description

• object recognition


Image acquisition – may involve preprocessing, such as scaling.

Image enhancement:

• manipulating an image so that the result is more suitable than the original for a specific operation

• enhancement is problem oriented

• there is no general 'theory' of image enhancement

• enhancement uses subjective methods for image improvement

• enhancement is based on human subjective preferences regarding what is a "good" enhancement result


Image restoration:

• improving the appearance of an image

• restoration is objective – the techniques for restoration are based on mathematical or probabilistic models of image degradation

Color image processing:

• fundamental concepts in color models

• basic color processing in a digital domain

Wavelets and multiresolution processing:

representing images in various degrees of resolution


Compression:

reducing the storage required to save an image, or the bandwidth required to transmit it

Morphological processing:

• tools for extracting image components that are useful in the representation and description of shape

• a transition from processes that output images to processes that output image attributes


Segmentation:

• partitioning an image into its constituent parts or objects

• autonomous segmentation is one of the most difficult tasks of DIP

• the more accurate the segmentation, the more likely recognition is to succeed

Representation and description (almost always follows segmentation):

• segmentation produces either the boundary of a region or all the points in the region itself

• converting the data produced by segmentation to a form suitable for computer processing


• boundary representation: the focus is on external shape characteristics, such as corners or inflections

• complete region: the focus is on internal properties, such as texture or skeletal shape

• description is also called feature extraction – extracting attributes that result in some quantitative information of interest, or that are basic for differentiating one class of objects from another

Object recognition:

the process of assigning a label (e.g., "vehicle") to an object, based on its descriptors

Knowledge database


Simplified diagram of a cross section of the human eye


Three membranes enclose the eye: the cornea and sclera outer cover; the choroid; the retina.

The cornea is a tough, transparent tissue that covers the anterior surface of the eye.

Continuous with the cornea, the sclera is an opaque membrane that encloses the remainder of the optic globe.

The choroid lies directly below the sclera. This membrane contains a network of blood vessels (major nutrition of the eye). The choroid is pigmented and helps reduce the amount of light entering the eye. The choroid is divided (at its anterior extreme) into the ciliary body and the iris. The iris contracts and expands to control the amount of light.


The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to the ciliary body (60-70% water, about 6% fat, protein). The lens is colored slightly yellow. The lens absorbs approximately 8% of the visible light (infrared and ultraviolet light are absorbed by proteins in the lens).

The innermost membrane is the retina. When the eye is properly focused, light from an object outside the eye is imaged on the retina. Vision is possible because of the distribution of discrete light receptors on the surface of the retina: cones and rods (6-7 million cones, 75-150 million rods).

Cones: located in the central part of the retina (fovea); sensitive to colors; vision of detail; each cone is linked to its own nerve; cone vision = photopic or bright-light vision.

Fovea = the place where the image of the object of interest falls.


Rods: distributed over all the retina surface; several rods are connected to a single nerve; not specialized in detail vision; serve to give a general, overall picture of the field of view; not involved in color vision; sensitive to low levels of illumination.

Blind spot: region without receptors.


Image formation in the eye

Ordinary photographic camera: the lens has a fixed focal length; focusing at various distances is done by modifying the distance between the lens and the image plane (where the film or imaging chip is located).

Human eye: the distance between the lens and the retina (the imaging region) is fixed; the focal length needed to achieve proper focus is obtained by varying the shape of the lens (the fibers in the ciliary body accomplish this, flattening or thickening the lens for distant or near objects, respectively).

distance between lens and retina along the visual axis ≈ 17 mm

range of focal lengths ≈ 14 mm to 17 mm


Illustration of the Mach band effect. Perceived intensity is not a simple function of actual intensity.


All the inner squares have the same intensity, but they appear progressively darker as the background becomes lighter.


Optical illusions


ALBASTRU (BLUE), VERDE (GREEN), GALBEN (YELLOW), ROSU (RED), PORTOCALIU (ORANGE), GALBEN (YELLOW), VISINIU (BURGUNDY), ALB (WHITE)


Light

• achromatic or monochromatic light – light that is void of color; the attribute of such light is its intensity, or amount. The term gray level is used to describe monochromatic intensity, because it ranges from black, to grays, and finally to white.

• chromatic light spans the electromagnetic energy spectrum from approximately 0.43 to 0.79 μm.

Quantities that describe the quality of a chromatic light source:

radiance – the total amount of energy that flows from the light source; it is usually measured in watts (W)

luminance – measured in lumens (lm); gives a measure of the amount of energy an observer perceives from a light source


For example, light emitted from a source operating in the far infrared region of the spectrum could have significant energy (radiance), but an observer would hardly perceive it: its luminance would be almost zero.

brightness – a subjective descriptor of light perception that is practically impossible to measure. It embodies the achromatic notion of intensity, and is one of the key factors in describing color sensation.


The physical meaning of the values of f is determined by the source of the image.

For an image generated from a physical process, f(x, y) is proportional to the energy radiated by the physical source.

f(x, y) is characterized by two components:

i(x, y) = illumination component: the amount of source illumination incident on the scene being viewed

r(x, y) = reflectance component: the amount of illumination reflected by the objects in the scene

f(x, y) = i(x, y) · r(x, y), with 0 < i(x, y) < ∞ and 0 < r(x, y) < 1


r(x, y) = 0 – total absorption; r(x, y) = 1 – total reflectance

i(x, y) – determined by the illumination source

r(x, y) – determined by the characteristics of the imaged objects

In practice, the intensity ℓ = f(x₀, y₀) satisfies L_min ≤ ℓ ≤ L_max, where L_min = i_min · r_min and L_max = i_max · r_max.

(typical indoor values, without additional illumination: L_min ≈ 10, L_max ≈ 1000)

The interval [L_min, L_max] is called the gray (or intensity) scale. Common practice is to shift this interval to [0, L−1], where ℓ = 0 is black and ℓ = L−1 is white.
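As a concrete illustration, here is a minimal Python/numpy sketch of the model f = i · r (the illumination field, reflectance values and array sizes below are invented for the example, not taken from the course):

import numpy as np

M, N = 64, 64
y, x = np.mgrid[0:M, 0:N]

# hypothetical smooth illumination, falling off toward the right edge: 0 < i
i = 800.0 * np.exp(-x / N)
# hypothetical reflectance: a bright square object on a darker background: 0 < r < 1
r = np.full((M, N), 0.2)
r[20:40, 20:40] = 0.9

f = i * r  # image formation: f(x, y) = i(x, y) * r(x, y)

# shift [L_min, L_max] to the display range [0, L-1], with L = 256
L = 256
f_shifted = (f - f.min()) / (f.max() - f.min()) * (L - 1)
print(f_shifted.min(), f_shifted.max())  # 0.0 255.0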


Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed scene

Converting a continuous image f to digital form:

- digitizing the coordinates (x, y) is called sampling

- digitizing the amplitude f(x, y) is called quantization


Continuous image projected onto a sensor array; result of image sampling and quantization


Representing Digital Images

(x, y), x = 0, 1, …, M−1, y = 0, 1, …, N−1 – spatial variables or spatial coordinates. The complete M×N digital image is written as

f(x, y) = | f(0,0)     f(0,1)     …  f(0,N−1)   |
          | f(1,0)     f(1,1)     …  f(1,N−1)   |
          | …                                   |
          | f(M−1,0)   f(M−1,1)   …  f(M−1,N−1) |

Each element of this array is an image element, or pixel. In matrix notation:

A = [a_ij], with a_ij = f(x = i, y = j) = f(i, j)

f(0,0) – the upper left corner of the image


M, N – positive integers; L = 2^k; a_ij ∈ [0, L−1]

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system. The upper limit is determined by saturation, the lower limit by noise.


Number of bits required to store a digitized image:

b = M × N × k; for M = N, b = N² · k

When an image can have 2^k intensity levels, the image is referred to as a k-bit image; 256 discrete intensity values – 8-bit image.
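For example, a 1024 × 1024 image with 256 gray levels (k = 8) requires b = 1024 · 1024 · 8 = 8,388,608 bits, i.e., 1,048,576 bytes (1 MB) of storage.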


Spatial and Intensity Resolution

Spatial resolution – the smallest discernible detail in an image.

Measures: line pairs per unit distance; dots (pixels) per unit distance.

Image resolution = the largest number of discernible line pairs per unit distance (e.g., 100 line pairs per mm).

Dots per unit distance is the measure commonly used in printing and publishing; in the US it is expressed in dots per inch (dpi). (Newspapers are printed at 75 dpi, glossy brochures at 175 dpi.)

Intensity resolution – the smallest discernible change in intensity level.

The number of intensity levels (L) is determined by hardware considerations: L = 2^k, with k = 8 being the most common. In practice, intensity resolution is given by k (the number of bits used to quantize intensity).


Fig. 1 Reducing spatial resolution: 1250 dpi (upper left), 300 dpi (upper right), 150 dpi (lower left), 72 dpi (lower right)


Reducing the number of gray levels: 256, 128, 64, 32


Reducing the number of gray levels: 16, 8, 4, 2


Image Interpolation – used in zooming, shrinking, rotating and geometric corrections.

Shrinking, zooming – image resizing – image resampling methods.

Interpolation is the process of using known data to estimate values at unknown locations.

Suppose we have an image of size 500 × 500 pixels that has to be enlarged 1.5 times, to 750 × 750 pixels. One way to do this is to create an imaginary 750 × 750 grid with the same spacing as the original, and then shrink it so that it fits exactly over the original image. The pixel spacing in the 750 × 750 grid will be less than in the original image.

Problem: assignment of intensity levels in the new 750 × 750 grid.

Nearest neighbor interpolation: assign to every point in the new grid (750 × 750) the intensity of the closest pixel (nearest neighbor) from the old/original grid (500 × 500). This technique has the tendency to produce undesirable effects, like severe distortion of straight edges.


Bilinear interpolation – assign to the new (x, y) location the intensity

v(x, y) = a·x + b·y + c·x·y + d

where the four coefficients are determined from the 4 equations in 4 unknowns that can be written using the 4 nearest neighbors of point (x, y).

Bilinear interpolation gives much better results than nearest neighbor interpolation, with a modest increase in computational effort.
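A minimal numpy sketch of both schemes for a 2-D grayscale array (the function names and the toy 4 × 4 image are our own; the bilinear version computes v(x, y) through the standard weighted sum of the 4 neighbors rather than solving the 4×4 system explicitly):

import numpy as np

def zoom_nearest(img, factor):
    # each output pixel copies the closest input pixel
    M, N = img.shape
    rows = np.minimum((np.arange(int(M * factor)) / factor).round().astype(int), M - 1)
    cols = np.minimum((np.arange(int(N * factor)) / factor).round().astype(int), N - 1)
    return img[rows[:, None], cols[None, :]]

def zoom_bilinear(img, factor):
    # v(x, y) = a x + b y + c x y + d, expressed as the usual 4-neighbor weighted sum
    M, N = img.shape
    x = np.minimum(np.arange(int(M * factor)) / factor, M - 1)
    y = np.minimum(np.arange(int(N * factor)) / factor, N - 1)
    x0 = np.clip(np.floor(x).astype(int), 0, M - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, N - 2)
    dx, dy = (x - x0)[:, None], (y - y0)[None, :]
    f00 = img[x0[:, None], y0[None, :]].astype(float)
    f01 = img[x0[:, None], y0[None, :] + 1].astype(float)
    f10 = img[x0[:, None] + 1, y0[None, :]].astype(float)
    f11 = img[x0[:, None] + 1, y0[None, :] + 1].astype(float)
    return (f00 * (1 - dx) * (1 - dy) + f10 * dx * (1 - dy)
            + f01 * (1 - dx) * dy + f11 * dx * dy)

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
print(zoom_nearest(img, 1.5).shape, zoom_bilinear(img, 1.5).shape)  # (6, 6) (6, 6)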

Bicubic interpolation – assign to the new (x, y) location an intensity that involves the 16 nearest neighbors of the point:

v(x, y) = Σ_{i=0..3} Σ_{j=0..3} c_ij · xⁱ · yʲ

The 16 coefficients c_ij are obtained by solving a 16×16 linear system written using the intensity levels of the 16 nearest neighbors of (x, y).


Generally, bicubic interpolation does a better job of preserving fine detail than the bilinear technique. Bicubic interpolation is the standard used in commercial image editing programs, such as Adobe Photoshop and Corel Photopaint.

Figure 2(a) is the same as Fig. 1(d), which was obtained by reducing the resolution of the 1250 dpi image in Fig. 1(a) to 72 dpi (the size shrank from 3692 × 2812 to 213 × 162) and then zooming the reduced image back to its original size. To generate Fig. 1(d), nearest neighbor interpolation was used (both for shrinking and zooming).

Figures 2(b) and (c) were generated using the same steps, but using bilinear and bicubic interpolation, respectively. Figures 2(d)+(e)+(f) were obtained by reducing the resolution from 1250 dpi to 150 dpi (instead of 72 dpi).


Fig. 2 – Interpolation examples for zooming and shrinking (nearest neighbor, bilinear, bicubic)


Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors:

(x+1, y), (x−1, y), (x, y+1), (x, y−1)

This set of pixels, called the 4-neighbors of p, is denoted by N₄(p).

The 4 diagonal neighbors of p have coordinates

(x+1, y+1), (x+1, y−1), (x−1, y+1), (x−1, y−1)

and are denoted N_D(p).

The horizontal, vertical and diagonal neighbors together are called the 8-neighbors of p, denoted N₈(p).

If (x, y) is on the border of the image, some of the neighbor locations in N_D(p) and N₈(p) fall outside the image.
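A small Python sketch of these neighborhood sets (the function names are our own; the inside helper illustrates the border case just mentioned):

def n4(p):
    # 4-neighbors of pixel p = (x, y)
    x, y = p
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def nd(p):
    # diagonal neighbors of p
    x, y = p
    return [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]

def n8(p):
    # 8-neighbors: union of N4(p) and ND(p)
    return n4(p) + nd(p)

def inside(points, M, N):
    # keep only coordinates that fall inside an M x N image
    return [(x, y) for (x, y) in points if 0 <= x < M and 0 <= y < N]

print(inside(n8((0, 0)), 4, 4))  # border pixel: only 3 of its 8 neighbors survive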


Adjacency, Connectivity, Regions, Boundaries

Denote by V the set of intensity values used to define adjacency:

- in a binary image, V = {1} (or V = {0})

- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, …, 255}

We consider 3 types of adjacency:

(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if q ∈ N₄(p)

(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if q ∈ N₈(p)

(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if

q ∈ N₄(p), or

q ∈ N_D(p) and the set N₄(p) ∩ N₄(q) has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency. It is introduced to eliminate the ambiguities that often arise when 8-adjacency is used. Consider the example:


binary image, V = {1}:

0 1 1
0 1 0
0 0 1

The original figure shows this arrangement three times: the pixel values, their 8-adjacencies, and their m-adjacencies. The three pixels at the top (first line) show multiple (ambiguous) 8-adjacency, indicated by dashed lines in the figure. This ambiguity is removed by using m-adjacency.


A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

(x₀, y₀), (x₁, y₁), …, (xₙ, yₙ)

where (x₀, y₀) = (x, y), (xₙ, yₙ) = (s, t), and pixels (xᵢ, yᵢ) and (xᵢ₋₁, yᵢ₋₁) are adjacent for i = 1, 2, …, n.

The length of the path is n. If (x₀, y₀) = (xₙ, yₙ), the path is closed.

Depending on the type of adjacency considered, the paths are 4-, 8- or m-paths.

Let S denote a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting only of pixels from S.

S is a connected set if there is a path in S between any 2 pixels in S.

Let R be a subset of pixels in an image. R is a region of the image if R is a connected set.

Two regions R₁ and R₂ are said to be adjacent if R₁ ∪ R₂ forms a connected set. Regions that are not adjacent are said to be disjoint. When referring to regions, only 4- and 8-adjacency are considered.


Suppose that an image contains K disjoint regions Rₖ, k = 1, …, K, none of which touches the image border. Let Rᵤ = ⋃_{k=1..K} Rₖ, and let (Rᵤ)ᶜ denote its complement.

We call all the points in Rᵤ the foreground of the image, and the points in (Rᵤ)ᶜ the background of the image.

The boundary (border or contour) of a region R is the set of points that are adjacent to points in the complement of R, (R)ᶜ. The border of an image region is the set of pixels in the region that have at least one background neighbor. This definition is referred to as the inner border, to distinguish it from the notion of outer border, which is the corresponding border in the background.


Distance measures

For pixels p, q and z, with coordinates (x, y), (s, t) and (v, w) respectively, D is a distance function or metric if:

(a) D(p, q) ≥ 0, and D(p, q) = 0 iff p = q

(b) D(p, q) = D(q, p)

(c) D(p, z) ≤ D(p, q) + D(q, z)

The Euclidean distance between p and q is defined as

D_e(p, q) = [(x − s)² + (y − t)²]^(1/2)

The pixels q for which D_e(p, q) ≤ r are the points contained in a disk of radius r centered at (x, y).


The D₄ distance (also called city-block distance) between p and q is defined as

D₄(p, q) = |x − s| + |y − t|

The pixels q for which D₄(p, q) ≤ r form a diamond centered at (x, y). For example, the pixels with D₄ ≤ 2:

        2
      2 1 2
    2 1 0 1 2
      2 1 2
        2

The pixels with D₄ = 1 are the 4-neighbors of (x, y).

The D₈ distance (called the chessboard distance) between p and q is defined as

D₈(p, q) = max(|x − s|, |y − t|)

The pixels q for which D₈(p, q) ≤ r form a square centered at (x, y).


For example, the pixels with D₈ ≤ 2 form the square:

2 2 2 2 2
2 1 1 1 2
2 1 0 1 2
2 1 1 1 2
2 2 2 2 2

The pixels with D₈ = 1 are the 8-neighbors of (x, y).

D₄ and D₈ distances are independent of any paths that might exist between p and q, because these distances involve only the coordinates of the points.
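The three distances in Python (a direct transcription of the definitions; the sample points are arbitrary):

import math

def d_e(p, q):
    # Euclidean distance
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d4(p, q):
    # city-block distance
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):
    # chessboard distance
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (2, 3)
print(d_e(p, q), d4(p, q), d8(p, q))  # 3.605..., 5, 3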


Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis. Consider the 2×2 images A = [a_ij] and B = [b_ij].

Array product:

| a11·b11   a12·b12 |
| a21·b21   a22·b22 |

Matrix product:

| a11·b11 + a12·b21   a11·b12 + a12·b22 |
| a21·b11 + a22·b21   a21·b12 + a22·b22 |

We assume array operations unless stated otherwise.
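In numpy, for instance, the two products are written differently (the 2×2 values are arbitrary):

import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A * B)  # array (elementwise) product: [[ 5 12] [21 32]]
print(A @ B)  # matrix product:              [[19 22] [43 50]]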


Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether the method is linear or nonlinear. Write

H[f(x, y)] = g(x, y)

H is said to be a linear operator if

H[a·f₁(x, y) + b·f₂(x, y)] = a·H[f₁(x, y)] + b·H[f₂(x, y)]

for any scalars a, b and any images f₁, f₂.

Example of a nonlinear operator: the maximum value of the pixels of an image, H[f] = max f(x, y). Take

f₁ = | 0 2 |    f₂ = | 6 5 |    a = 1, b = −1
     | 2 3 |         | 4 7 |

max(1·f₁ + (−1)·f₂) = max | −6 −3 | = −2
                          | −2 −4 |

while

1·max(f₁) + (−1)·max(f₂) = 3 − 7 = −4 ≠ −2

so the max operator is not linear.
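The same computation in numpy confirms the failure of linearity:

import numpy as np

f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 5], [4, 7]])
a, b = 1, -1

lhs = np.max(a * f1 + b * f2)          # max of the combined image: -2
rhs = a * np.max(f1) + b * np.max(f2)  # combination of the maxima: -4
print(lhs, rhs)                        # -2 -4  -> the max operator is nonlinear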

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise:

g(x, y) = f(x, y) + η(x, y)

where f(x, y) is the noiseless image and η(x, y) is the noise, uncorrelated and with 0 average value.

For a random variable z with mean m, E[(z − m)²] is the variance (E(·) is the expected value). The covariance of two random variables z₁ and z₂ is defined as E[(z₁ − m₁)(z₂ − m₂)]. The two random variables are uncorrelated when their covariance is 0.


Objective: reduce noise by averaging a set of noisy images g_i(x, y) (a technique frequently used in image enhancement):

ḡ(x, y) = (1/K) · Σ_{i=1..K} g_i(x, y)

If the noise satisfies the properties stated above, we have

E{ḡ(x, y)} = f(x, y)   and   σ²_ḡ(x, y) = (1/K) · σ²_η(x, y)

where E{ḡ(x, y)} is the expected value of ḡ, and σ²_ḡ(x, y) and σ²_η(x, y) are the variances of ḡ and η, respectively. The standard deviation (square root of the variance) at any point in the average image is

σ_ḡ(x, y) = (1/√K) · σ_η(x, y)


As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because E{ḡ(x, y)} = f(x, y), this means that ḡ(x, y) approaches f(x, y) as the number of noisy images used in the averaging process increases.

An important application of image averaging is in the field of astronomy, where imaging under very low light levels frequently causes sensor noise to render single images virtually useless for analysis. Figure 3 shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels, together with the results of averaging 5, 10, 20, 50 and 100 noisy images.
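A small simulation of this effect (the constant "scene" and the noise level are assumptions chosen to mirror the figure below):

import numpy as np

rng = np.random.default_rng(0)
f = np.full((128, 128), 100.0)  # hypothetical noiseless scene

def noisy(f, sigma=64.0):
    # one corrupted observation g = f + eta, with eta ~ N(0, sigma^2)
    return f + rng.normal(0.0, sigma, f.shape)

for K in (1, 5, 10, 20, 50, 100):
    g_bar = np.mean([noisy(f) for _ in range(K)], axis=0)
    print(K, np.std(g_bar - f).round(2))  # residual noise shrinks like sigma / sqrt(K)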


Fig. 3 Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (top left), and the results of averaging 5, 10, 20, 50 and 100 noisy images


A frequent application of image subtraction is in the enhancement of differences between images.

Fig. 4 (a) Infrared image of the Washington, D.C. area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between images (a) and (b); black (0) values in it indicate locations where there is no difference between images (a) and (b).
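The least-significant-bit experiment of Figure 4 is easy to reproduce on any 8-bit array (the random image below stands in for the infrared image):

import numpy as np

rng = np.random.default_rng(1)
a = rng.integers(0, 256, (64, 64), dtype=np.uint8)  # stand-in for Fig. 4(a)

b = a & 0xFE              # set the least significant bit of every pixel to zero
diff = a.astype(int) - b  # the difference image: 0 or 1 at every pixel

print(np.unique(diff))    # [0 1] -> the images differ, but only in the LSB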


Mask mode radiography:

g(x, y) = f(x, y) − h(x, y)

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images, called live images (denoted f(x, y)), of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium.

In g(x, y) we find the differences between h and f, as enhanced detail.

With images being captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries in the area being observed.


Fig. 5 – Angiography – subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c), enhanced


An important application of image multiplication (and division) is shading correction.

Suppose that an imaging sensor produces images in the form

g(x, y) = f(x, y) · h(x, y)

where f(x, y) is the "perfect image" and h(x, y) is the shading function.

When the shading function is known:

f(x, y) = g(x, y) / h(x, y)

If h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.
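A minimal sketch of shading correction by division, with an invented multiplicative shading field h:

import numpy as np

M, N = 64, 64
y, x = np.mgrid[0:M, 0:N]
# made-up shading field: bright in the center, darker toward the corners
h = 0.5 + 0.5 * np.exp(-((x - N / 2) ** 2 + (y - M / 2) ** 2) / (2 * 30.0 ** 2))

f = np.full((M, N), 120.0)  # hypothetical "perfect" image
g = f * h                   # what the sensor records: g = f * h

f_corrected = g / h         # division recovers f when h is known
print(np.allclose(f_corrected, f))  # True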


Fig. 6 Shading correction: (a) shaded image of a tungsten filament, magnified approximately 130 times; (b) shading pattern; (c) corrected image


Another use of image multiplication is in masking, also called region of interest (ROI), operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of the ROI can be arbitrary, although usually it is rectangular.

Fig. 7 (a) Digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b)
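ROI masking is a single elementwise multiplication (the image and mask below are invented):

import numpy as np

rng = np.random.default_rng(2)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)

roi_mask = np.zeros_like(img)
roi_mask[10:30, 20:50] = 1  # one rectangular ROI; several ROIs could be set

masked = img * roi_mask     # keeps intensities inside the ROI, zeros elsewhere
print(masked[0, 0], masked[15, 25] == img[15, 25])  # 0 True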


In practice, most images are displayed using 8 bits, so the image values are in the range [0, 255]; for TIFF or JPEG images, conversion to this range is automatic, and depends on the system used. The difference of two images can produce values in the range [−255, 255], and the addition of two images values in the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure is to compute

f_m = f − min(f)

f_s = K · f_m / max(f_m)

which yields an image f_s whose values span the full range [0, K] (K = 255 for 8-bit images).
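This scaling procedure in Python (here K = 255; the sample "difference image" is made up):

import numpy as np

def rescale(f, K=255):
    # map an arbitrary-range image into [0, K]: f_m = f - min(f); f_s = K * f_m / max(f_m)
    fm = f - f.min()
    return (K * fm / fm.max()).astype(np.uint8)

diff = np.array([[-255, 0], [100, 255]], dtype=float)
print(rescale(diff))  # [[  0 127] [177 255]]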


Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels:

s = T(z)

where z is the intensity of a pixel in the original image and s is the intensity of the corresponding pixel in the processed image.


Neighborhood operations

Let S_xy denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y), based on the values of the intensities of the points in S_xy. For example, if S_xy is a rectangular neighborhood of size m × n centered at (x, y), we can assign the new value of intensity by computing the average value of the pixels in S_xy:

g(x, y) = (1 / (m·n)) · Σ_{(r,c) ∈ S_xy} f(r, c)

The net effect is to perform local blurring in the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
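A direct implementation of the neighborhood average (the edge handling by padding is our own choice; a production implementation would use a library convolution):

import numpy as np

def local_mean(f, m=3, n=3):
    # average over an m x n window centered at each pixel; borders use edge padding
    pm, pn = m // 2, n // 2
    padded = np.pad(f.astype(float), ((pm, pm), (pn, pn)), mode="edge")
    g = np.zeros(f.shape, dtype=float)
    for dr in range(m):
        for dc in range(n):
            g += padded[dr:dr + f.shape[0], dc:dc + f.shape[1]]
    return g / (m * n)

img = np.zeros((7, 7))
img[3, 3] = 9.0
print(local_mean(img)[2:5, 2:5])  # the single bright pixel is blurred into 1.0s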


Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules)

A geometric transformation consists of 2 basic operations:

1. a spatial transformation of coordinates;

2. intensity interpolation that assigns intensity values to the spatially transformed pixels.

The coordinate system transformation:

(x, y) = T[(v, w)]

(v, w) – pixel coordinates in the original image

(x, y) – pixel coordinates in the transformed image


T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform:

[x y 1] = [v w 1] · T = [v w 1] · | t11 t12 0 |
                                  | t21 t22 0 |
                                  | t31 t32 1 |     (AT)

that is, x = t11·v + t21·w + t31 and y = t12·v + t22·w + t32.

This transform can scale, rotate, translate or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3×3 matrix equal to the matrix product of the scaling, rotation and translation matrices from Table 1.


Table 1 – Affine transformations (the scaling, rotation, translation and shear matrices)


The preceding transformations relocate pixels on an image to new locations. To complete the process, we have to assign intensity values to those locations. This task is done using intensity interpolation (nearest neighbor, bilinear or bicubic interpolation).

In practice, we can use equation (AT) in two basic ways.

Forward mapping: scan the pixels of the input image (v, w) and compute the new spatial location (x, y) of the corresponding pixel in the new image using (AT) directly.

Problems:

- intensity assignment when 2 or more pixels in the original image are transformed to the same location in the output image;

- some output locations have no correspondent in the original image (no intensity assignment).


Inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image as (v, w) = T⁻¹[(x, y)]. Then interpolate among the nearest input pixels to determine the intensity of the output pixel value.

Inverse mappings are more efficient to implement than forward mappings, and are used in numerous commercial implementations of spatial transformations (MATLAB, for example).
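A sketch of inverse mapping with an affine matrix in the row-vector convention used above (the matrices and toy image are our own assumptions; nearest-neighbor interpolation keeps it short):

import numpy as np

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

def translation(tx, ty):
    return np.array([[1.0, 0, 0], [0, 1.0, 0], [tx, ty, 1.0]])

def warp_inverse(img, T):
    # scan output locations (x, y); fetch (v, w) = [x y 1] T^{-1} from the input
    M, N = img.shape
    Tinv = np.linalg.inv(T)
    out = np.zeros_like(img)
    for x in range(M):
        for y in range(N):
            v, w, _ = np.array([x, y, 1.0]) @ Tinv
            v, w = int(round(v)), int(round(w))
            if 0 <= v < M and 0 <= w < N:
                out[x, y] = img[v, w]  # nearest-neighbor intensity assignment
    return out

img = np.zeros((32, 32), dtype=np.uint8)
img[8:12, 8:24] = 255
T = rotation(np.pi / 6) @ translation(4, 2)  # compose: rotate, then translate
print(warp_inverse(img, T).any())            # True: the warped bar lands in view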


Image registration – align two or more images of the same scene.

In image registration, we have available the input and output images, but the specific transformation that produced the output image from the input is generally unknown. The problem is to estimate the transformation function and then use it to register the two images.

- it may be of interest to align (register) two or more images taken at approximately the same time, but using different imaging systems (an MRI scanner and a PET scanner, for example);

- or to align images of a given location, taken by the same instrument at different moments of time (satellite images).

Solving the problem using tie points (also called control points): corresponding points whose locations are known precisely in the input and reference images.


How to select tie points?

- interactively selecting them

- using algorithms that try to detect these points automatically

- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors; these objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points, both on the input image and on the reference image. A simple model, based on a bilinear approximation, is given by

x = c₁·v + c₂·w + c₃·v·w + c₄

y = c₅·v + c₆·w + c₇·v·w + c₈

where (v, w) and (x, y) are the coordinates of the tie points (we get an 8×8 linear system for the cᵢ).
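Estimating the eight coefficients from 4 tie points reduces to two 4×4 linear systems (equivalently, one decoupled 8×8 system); the tie-point coordinates below are invented:

import numpy as np

vw = np.array([[0, 0], [0, 100], [100, 0], [100, 100]], dtype=float)  # input image
xy = np.array([[5, 3], [4, 108], [107, 2], [110, 112]], dtype=float)  # reference image

# each tie point contributes one row [v, w, v*w, 1] to the model
A = np.column_stack([vw[:, 0], vw[:, 1], vw[:, 0] * vw[:, 1], np.ones(4)])

cx = np.linalg.solve(A, xy[:, 0])  # c1..c4 for x = c1 v + c2 w + c3 v w + c4
cy = np.linalg.solve(A, xy[:, 1])  # c5..c8 for y

row = np.array([50.0, 50.0, 50.0 * 50.0, 1.0])  # map the input point (50, 50)
print(row @ cx, row @ cy)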


When 4 tie points are insufficient to obtain satisfactory registration, an approach used frequently is to select a larger number of tie points and, using this new set of tie points, subdivide the image into rectangular regions marked by groups of 4 tie points. On the subregions marked by 4 tie points we apply the transformation model described above.

The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometric distortion.


(a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c)


Probabilistic Methods

Let zᵢ, i = 0, 1, …, L−1, be the values of all possible intensities in an M×N digital image, and let p(zₖ) be the probability that the intensity level zₖ occurs in the given image:

p(zₖ) = nₖ / (M·N)

where nₖ is the number of times that intensity zₖ occurs in the image (M·N is the total number of pixels in the image). Then

Σ_{k=0..L−1} p(zₖ) = 1

The mean (average) intensity of an image is given by

m = Σ_{k=0..L−1} zₖ · p(zₖ)


The variance of the intensities is

σ² = Σ_{k=0..L−1} (zₖ − m)² · p(zₖ)

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

μₙ(z) = Σ_{k=0..L−1} (zₖ − m)ⁿ · p(zₖ)

(so μ₀(z) = 1, μ₁(z) = 0, μ₂(z) = σ²)

μ₃(z) > 0: the intensities are biased to values higher than the mean;

μ₃(z) < 0: the intensities are biased to values lower than the mean;


μ₃(z) = 0: the intensities are distributed approximately equally on both sides of the mean.

mean

Fig. 1 (a) Low contrast; (b) medium contrast; (c) high contrast

Figure 1(a) – standard deviation 14.3 (variance = 204.5)

Figure 1(b) – standard deviation 31.6 (variance = 998.6)

Figure 1(c) – standard deviation 49.2 (variance = 2420.6)
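These quantities can be computed from the image histogram; a minimal sketch (the random test image is arbitrary):

import numpy as np

def intensity_stats(img, L=256):
    # histogram-based mean, variance and third central moment
    z = np.arange(L)
    n, _ = np.histogram(img, bins=L, range=(0, L))
    p = n / img.size                # p(z_k) = n_k / (M*N), sums to 1
    m = np.sum(z * p)               # mean intensity
    mu2 = np.sum((z - m) ** 2 * p)  # variance: a measure of contrast
    mu3 = np.sum((z - m) ** 3 * p)  # sign tells how intensities are biased
    return m, mu2, mu3

rng = np.random.default_rng(3)
img = rng.integers(0, 256, (100, 100))
print(intensity_stats(img))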


Intensity Transformations and Spatial Filtering

g(x, y) = T[f(x, y)]

f(x, y) – input image; g(x, y) – output image; T – an operator on f, defined over a neighborhood of (x, y).

- the neighborhood S_xy of the point (x, y) usually is rectangular, centered on (x, y), and much smaller in size than the image


- spatial filtering: the operator T (the neighborhood and the operation applied on it) is called a spatial filter (spatial mask, kernel, template or window)

When the neighborhood S_xy reduces to the single point (x, y), T becomes an intensity (gray-level or mapping) transformation function:

s = T(r)

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2 (left): T produces an output image of higher contrast than the original, by darkening the intensity levels below k and brightening the levels above k – this technique is called contrast stretching.

Fig. 2 Intensity transformation functions: left – contrast stretching; right – thresholding function


Figure 2 (right): T produces a binary output image; a mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L−1] is obtained using the function

s = T(r) = L − 1 − r

- the equivalent of a photographic negative

- a technique suited for enhancing white or gray detail embedded in dark regions of an image
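The negative is a one-line pixel operation:

import numpy as np

def negative(img, L=256):
    # s = (L - 1) - r for every pixel
    return (L - 1) - img

img = np.array([[0, 64], [192, 255]], dtype=np.uint8)
print(negative(img))  # [[255 191] [ 63   0]]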


Original image; negative image


Log Transformations:

s = T(r) = c · log(1 + r), c – a constant, r ≥ 0

Some basic intensity transformation functions


This transformation maps a narrow range of low intensity values in the input into a wider range of output levels; an operator of this type is used to expand the values of dark pixels in an image, while compressing the higher-level values. The opposite is true of the inverse log transformation. The log function compresses the dynamic range of images with large variations in pixel values.

Figure 4(a) – intensity values in the range 0 to 1.5 × 10⁶

Figure 4(b) – log transformation of Figure 4(a), with c = 1 – range 0 to 6.2

Fig. 4 (a) Fourier spectrum; (b) log transformation applied to (a), c = 1
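A sketch of the compression effect (we assume a base-10 logarithm here, which reproduces the 0-6.2 output range quoted above; the sample values are invented):

import numpy as np

spectrum = np.array([1.0, 1.0e3, 1.5e6])  # huge dynamic range, e.g. a Fourier spectrum
s = 1.0 * np.log10(1 + spectrum)          # s = c * log(1 + r), with c = 1
print(s.round(2))                         # [0.3  3.  6.18] -> compressed to ~[0, 6.2]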


Power-Law (Gamma) Transformations

s = T(r) = c · r^γ (c, γ – positive constants); sometimes written s = c · (r + ε)^γ

Plots of the gamma transformation for different values of γ (c = 1)


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. Curves with γ > 1 have the opposite effect of those generated with γ < 1.

c = γ = 1: identity transformation.

A variety of devices used for image capture, printing and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
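A minimal gamma-transformation sketch, working on intensities normalized to [0, 1] (our convention; c = 1):

import numpy as np

def gamma_transform(img, gamma, c=1.0, L=256):
    # s = c * r**gamma, with r normalized to [0, 1]
    r = img / (L - 1)
    s = c * np.power(r, gamma)
    return np.clip(s * (L - 1), 0, L - 1).astype(np.uint8)

img = np.array([[0, 64], [128, 255]], dtype=np.uint8)
print(gamma_transform(img, 0.4))  # gamma < 1 brightens the dark values
print(gamma_transform(img, 3.0))  # gamma > 1 darkens the image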


(a) aerial image; (b)-(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0 and 5.0, respectively


Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image, so that it spans the full intensity range of the recording tool or display device

Fig. 5


The transformation is defined by two control points (r₁, s₁) and (r₂, s₂):

T(r) = (s₁/r₁) · r, for r in [0, r₁)

T(r) = ((s₂ − s₁)/(r₂ − r₁)) · (r − r₁) + s₁, for r in [r₁, r₂]

T(r) = ((L − 1 − s₂)/(L − 1 − r₂)) · (r − r₂) + s₂, for r in (r₂, L − 1]


r₁ = s₁ and r₂ = s₂: identity transformation (no change)

r₁ = r₂, s₁ = 0, s₂ = L − 1: thresholding function

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c) – contrast stretching, obtained by setting the parameters (r₁, s₁) = (r_min, 0) and (r₂, s₂) = (r_max, L − 1), where r_min and r_max denote the minimum and maximum gray levels in the image, respectively. The transformation function thus stretched the levels linearly from their original range to the full range [0, L−1].

Figure 5(d) – the thresholding function was used, with (r₁, s₁) = (m, 0) and (r₂, s₂) = (m, L − 1), where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.
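The piecewise-linear stretch as code (assuming 0 < r₁ < r₂ < L − 1, so all three slopes are defined; the toy image is invented):

import numpy as np

def contrast_stretch(img, r1, s1, r2, s2, L=256):
    # piecewise-linear mapping through control points (r1, s1) and (r2, s2)
    r = img.astype(float)
    out = np.empty_like(r)
    lo, mid, hi = r < r1, (r >= r1) & (r <= r2), r > r2
    out[lo] = s1 / r1 * r[lo]
    out[mid] = (s2 - s1) / (r2 - r1) * (r[mid] - r1) + s1
    out[hi] = (L - 1 - s2) / (L - 1 - r2) * (r[hi] - r2) + s2
    return out.astype(np.uint8)

img = np.array([[80, 100], [120, 140]], dtype=np.uint8)    # low-contrast image
print(contrast_stretch(img, r1=80, s1=0, r2=140, s2=255))  # [[  0  85] [170 255]]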


Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches to intensity-level slicing (a code sketch of both follows this list):

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities (Figure 3.11(a));

2. brighten (or darken) the desired range of intensities, but leave unchanged all other intensities in the image (Figure 3.11(b)).


(The two slicing transformation functions: one highlights the intensity range [A, B] and reduces all other intensities to a lower level; the other highlights the range [A, B] and preserves all other intensities.)

Figure 6 (left) – aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the scale of intensities. This type of enhancement produces a binary image,


which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for instance).

In Figure 6 (right), the second technique was used: a band of intensities in the mid-gray range around the mean intensity was set to black; the other intensities remain unchanged.

Fig. 6 – Aortic angiogram and intensity-sliced versions


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2.

This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit, plane 8 – the highest-order bit).


The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all the intensities between 0 and 127 to 0, and all levels between 128 and 255 to 1.

The bit-slicing technique is useful for analyzing the relative importance of each bit in the image; this helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
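Extracting bit planes is a shift-and-mask operation; plane 8 coincides with thresholding at 128, as stated above:

import numpy as np

def bit_plane(img, k):
    # bit plane k of an 8-bit image (k = 1: lowest-order, k = 8: highest-order)
    return (img >> (k - 1)) & 1

img = np.array([[200, 100], [50, 25]], dtype=np.uint8)
print(bit_plane(img, 8))  # [[1 0] [0 0]] -> same as mapping 0-127 to 0, 128-255 to 1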

  • DIP 1 2017
  • DIP 02 (2017)
Page 4: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image ProcessingDigital Image Processing

Week 1Week 1

bull Image Processing Toolbox (httpwwwmathworkscomproductsimage)

bull C Solomon T Breckon Fundamentals of Digital Image Processing A Practical Approach with Examples in Matlab Wiley-Blackwell 2011

bullWK Pratt Digital Image Processing Wiley-Interscience 2007

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Meet LenaThe First Lady of the Internet

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Lenna Soderberg (Sjoumloumlblom) and Jeff Seideman taken in May 1997Imaging Science amp Technology Conference

Digital Image ProcessingDigital Image Processing

Week 1Week 1

What is Digital Image Processing

f(xy) = intensity gray level of the image at spatial point (xy)

x y f(xy) ndash finite discrete quantities -gt digital image

Digital Image Processing = processing digital images by means of a digital computer

A digital image is composed of a finite number of elements (location value of intensity)

These elements are called picture elements image elements pels pixels

( )i j ijx y f

3 f D

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image processing is not limited to the visual band of the electromagnetic (EM) spectrum

Image processing gamma to radio waves ultrasound electron microscopy computer-generated images

image processing image analysis computer vision

Image processing = discipline in which both the input and the output of a process are images

Computer Vision = use computer to emulate human vision (AI) ndashlearning making inferences and take actions based on visual inputs

Image analysis (image understanding) = segmentation partitioning images into regions or objects

(link between image processing and image analysis)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Distinction between image processing image analysis computer vision

low-level mid-level high-level processes

Low-level processes image preprocessing to reduce noise contrast enhancement image sharpening both input and output are images

Mid-level processes segmentation partitioning images into regions or objects description of the objects for computer processing classificationrecognition of individual objects inputs are generally images outputs are attributes extracted from the input image (eg edges contours identity of individual objects)

High-level processes ldquomaking senserdquo of a set of recognized objects performing the cognitive functions associated with vision

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image Processing (Gonzalez + Woods) =

processes whose inputs and outputs are images +

processes that extract attributes from images recognition of individual objects

(low- and mid-level processes)

Example

automated analysis of text = acquiring an image containing text preprocessing the image (enhancement sharpening) extracting (segmenting) the individual characaters describing the characters in a form suitable for computer processing recognition of individual characters

Digital Image ProcessingDigital Image Processing

Week 1Week 1

The Origins of DIP

Newspaper industry pictures were sent by submarine cable between London and New York

Before Bartlane cable picture transmission system (early 1920s) ndash 1 week

With Bartlane system less than 3 hours

Specialized printing equipment coded pictures for cable transmission and reconstructed them at the receiving end (1920s -5 distict levels of gray 1929 ndash 15 levels)

This example is not DIP the computer is not involved

DIP is linked and devolps in the same rhythm as digital computers (data storage display and transmission)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

A digital pictureproduced in 1921from a coded tapeby a telegraph printer withspecial type faces(McFarlane)

A digital picturemade in 1922from a tapepunched after the signals hadcrossed the Atlantic twice(McFarlane)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

1964 Jet Propulsion Laboratory (Pasadena California) processed pictures of the moon transmitted by Ranger 7 (corrected image distortions)

The first picture of the moon by a US spacecraft Ranger 7 took this image July 31 1964 about 17 minutes before impacting the lunar surface(Courtesy of NASA)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

1960-1970 ndash image processing techniques were used in medical image remote Earth resources observations astronomy

1970s ndash invention of CAT (computerized axial tomography)

CAT is a process in which a ring of detectors encircles an object (patient) and a X-ray source concentric with the detector ring rotates about the object The X-ray passes through the patient and the resulted waves are collected at the opposite end by the detectors As the source rotates the procedure is repeated Tomography consists of algorithms that use the sense data to construct an image that represent a ldquoslicerdquo through the object Motion of the object in a direction perpendicular to the ring of detectors produces a set of ldquoslicesrdquo which can be assembled in a 3D information of the inside of the object

Digital Image ProcessingDigital Image Processing

Week 1Week 1

loz geographers use DIP to study pollution patterns from aerial and satellite imagery

loz archeology ndash DIP allowed restoring blurred pictures that recorded rare artifacts lost or damaged after being photographed

loz physics ndash enhance images of experiments (high-energy plasmas electron microscopy)

loz astronomy biology nuclear medicine law enforcement industry

DIP ndash used in solving problems dealing with machine perception ndashextracting from an image information suitable for computer processing (statistical moments Fourier transform coefficients hellip)

loz automatic character recognition industrial machine vision for product assembly and inspection military recognizance automatic processing of fingerprints machine processing of aerial and satellite imagery for weather prediction Internet


Examples of Fields that Use DIP

Images can be classified according to their sources (visual, X-ray, …)

Energy sources for images: the electromagnetic energy spectrum, acoustic, ultrasonic, electronic, computer-generated


Electromagnetic waves can be thought of as propagating sinusoidal waves of different wavelength, or as a stream of massless particles, each moving in a wavelike pattern with the speed of light. Each massless particle contains a certain amount (bundle) of energy, and each bundle of energy is called a photon. If spectral bands are grouped according to energy per photon, we obtain the spectrum shown in the image above, ranging from gamma rays (highest energy) to radio waves (lowest energy).


Gamma-Ray Imaging

Nuclear medicine, astronomical observations

Nuclear medicine:

the approach is to inject a patient with a radioactive isotope that emits gamma rays as it decays

Images are produced from the emissions collected by gamma-ray detectors

Images of this sort are used to locate sites of bone pathology (infections, tumors)

PET (positron emission tomography) – the patient is given a radioactive isotope that emits positrons as it decays


Examples of gamma-ray imaging

Bone scan PET image


X-ray imaging

Medical diagnostics, industry, astronomy

An X-ray tube is a vacuum tube with a cathode and an anode. The cathode is heated, causing free electrons to be released. The electrons flow at high speed to the positively charged anode. When the electrons strike a nucleus, energy is released in the form of X-ray radiation. The energy (penetrating power) of the X-rays is controlled by a voltage applied across the anode and by a current applied to the filament in the cathode.

The intensity of the X-rays is modified by absorption as they pass through the patient, and the resulting energy falling on the film develops it, much in the same way that light develops photographic film.


Angiography = contrast-enhancement radiography

Angiograms = images of blood vessels

A catheter is inserted into an artery or vein in the groin. The catheter is threaded into the blood vessel and guided to the area to be studied. When it reaches that area, an X-ray contrast medium is injected through the catheter. This enhances the contrast of the blood vessels and enables the radiologist to see any irregularities or blockages.

X-rays are used in CAT (computerized axial tomography)

X-rays are used in industrial processes (e.g. examining circuit boards for flaws in manufacturing)

Industrial CAT scans are useful when the parts can be penetrated by X-rays


Examples of X-ray imaging

Chest X-ray. Aortic angiogram.

Head CT. Cygnus Loop. Circuit boards.


Imaging in the Ultraviolet Band

Lithography, industrial inspection, microscopy, biological imaging, astronomical observations

Ultraviolet light is used in fluorescence microscopy

Ultraviolet light is not visible to the human eye, but when a photon of ultraviolet radiation collides with an electron in an atom of a fluorescent material, it elevates the electron to a higher energy level. After that, the electron relaxes to a lower level and emits light in the form of a lower-energy photon, in the visible (red) light region.

Fluorescence = emission of light by a substance that has absorbed light or other electromagnetic radiation of a different wavelength

Fluorescence microscope = uses an excitation light to irradiate a prepared specimen, and then separates the much weaker radiating fluorescent light from the brighter excitation light


Imaging in the Visible and Infrared Bands

Light microscopy, astronomy, remote sensing, industry, law enforcement

LANDSAT satellite – obtained and transmitted images of the Earth from space, for purposes of monitoring environmental conditions on the planet

Weather observation and prediction are major applications of multispectral imaging from satellites


Table 1 – Thematic bands in NASA's LANDSAT satellite


Satellite images of the Washington, DC area in the spectral bands of Table 1


Examples of light microscopy

Taxol (anticancer agent), magnified 250X

Cholesterol (40X)

Microprocessor (60X)

Nickel oxide thin film (600X)

Surface of audio CD (1750X)

Organic superconductor (450X)


Automated visual inspection of manufactured goods

a – a circuit board controller; b – packaged pills; c – bottles; d – air bubbles in clear-plastic product; e – cereal; f – image of intraocular implant


Imaging in the Microwave Band

The dominant application of imaging in the microwave band – radar

• radar has the ability to collect data over virtually any region at any time, regardless of weather or ambient light conditions

• some radar waves can penetrate clouds; under certain conditions they can also penetrate vegetation, ice, and dry sand

• sometimes radar is the only way to explore inaccessible regions of the Earth's surface

An imaging radar works like a flash camera: it provides its own illumination (microwave pulses) to light an area on the ground and take a snapshot image. Instead of a camera lens, a radar uses an antenna and a digital device to record the images. In a radar image, one can see only the microwave energy that was reflected back toward the radar antenna.


Spaceborne radar image of mountains in southeast Tibet


Imaging in the Radio Band

medicine, astronomy

MRI = Magnetic Resonance Imaging

This technique places the patient in a powerful magnet and passes short pulses of radio waves through his or her body

Each pulse causes a responding pulse of radio waves to be emitted by the patient's tissues.

The location from which these signals originate and their strength are determined by a computer, which produces a 2D picture of a section of the patient.


MRI images of a human knee (left) and spine (right)


Images of the Crab Pulsar covering the electromagnetic spectrum

Gamma X-ray Optical Infrared Radio


Other Imaging Modalities

acoustic imaging, electron microscopy, synthetic (computer-generated) imaging

Imaging using sound: geological exploration, industry, medicine

Mineral and oil exploration

For image acquisition over land, one of the main approaches is to use a large truck and a large flat steel plate. The plate is pressed on the ground by the truck, and the truck is vibrated through a frequency spectrum up to 100 Hz. The strength and the speed of the returning sound waves are determined by the composition of the Earth below the surface. These are analysed by a computer, and images are generated from the resulting analysis.


Biometry - iris


Biometry - fingerprint


Face detection and recognition


Gender identification


Image morphing


Fundamental Steps in DIP

methods whose input and output are images

methods whose inputs are images but whose outputs are attributes extracted from those images


Outputs are images

• image acquisition

• image filtering and enhancement

• image restoration

• color image processing

• wavelets and multiresolution processing

• compression

• morphological processing


Outputs are attributes

• morphological processing

• segmentation

• representation and description

• object recognition


Image acquisition - may involve preprocessing such as scaling

Image enhancement

• manipulating an image so that the result is more suitable than the original for a specific operation

• enhancement is problem oriented

• there is no general "theory" of image enhancement

• enhancement uses subjective methods for image improvement

• enhancement is based on human subjective preferences regarding what is a "good" enhancement result


Image restoration

• improving the appearance of an image

• restoration is objective – the techniques for restoration are based on mathematical or probabilistic models of image degradation

Color image processing

• fundamental concepts in color models

• basic color processing in a digital domain

Wavelets and multiresolution processing

representing images in various degrees of resolution


Compression

reducing the storage required to save an image or the bandwidth required to transmit it

Morphological processing

• tools for extracting image components that are useful in the representation and description of shape

• a transition from processes that output images to processes that output image attributes


Segmentation

• partitioning an image into its constituent parts or objects

• autonomous segmentation is one of the most difficult tasks of DIP

• the more accurate the segmentation, the more likely recognition is to succeed

Representation and description (almost always follows segmentation)

• segmentation produces either the boundary of a region or all the points in the region itself

• converting the data produced by segmentation to a form suitable for computer processing


• boundary representation: the focus is on external shape characteristics, such as corners or inflections

• complete region: the focus is on internal properties, such as texture or skeletal shape

• description is also called feature extraction – extracting attributes that result in some quantitative information of interest, or that are basic for differentiating one class of objects from another

Object recognition

the process of assigning a label (e.g. "vehicle") to an object based on its descriptors

Knowledge database


Simplified diagram of a cross section of the human eye


Three membranes enclose the eye: the cornea and sclera outer cover; the choroid; the retina.

The cornea is a tough, transparent tissue that covers the anterior surface of the eye.

Continuous with the cornea, the sclera is an opaque membrane that encloses the remainder of the optic globe.

The choroid lies directly below the sclera. This membrane contains a network of blood vessels (the major nutrition of the eye). The choroid is pigmented and helps reduce the amount of light entering the eye. The choroid is divided (at its anterior extreme) into the ciliary body and the iris. The iris contracts and expands to control the amount of light.


The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to the ciliary body (60-70% water, about 6% fat, and more protein than any other tissue in the eye). The lens is slightly yellow in color and absorbs approximately 8% of the visible light (infrared and ultraviolet light are absorbed by proteins in the lens).

The innermost membrane is the retina. When the eye is properly focused, light from an object outside the eye is imaged on the retina. Vision is possible because of the distribution of discrete light receptors on the surface of the retina: cones and rods (6-7 million cones, 75-150 million rods).

Cones: located in the central part of the retina (the fovea); sensitive to colors; vision of detail; each cone is linked to its own nerve. Cone vision = photopic or bright-light vision.

Fovea = the place where the image of the object of interest falls


Rods: distributed over all the retina surface; several rods are connected to a single nerve; not specialized in detail vision; serve to give a general, overall picture of the field of view; not involved in color vision; sensitive to low levels of illumination.

Blind spot: region without receptors.


Image formation in the eye

Ordinary photographic camera: the lens has a fixed focal length; focusing at various distances is done by modifying the distance between the lens and the image plane (where the film or imaging chip is located).

Human eye: the distance between the lens and the retina (the imaging region) is fixed; the focal length needed to achieve proper focus is obtained by varying the shape of the lens (the fibers in the ciliary body accomplish this, flattening or thickening the lens for distant or near objects, respectively).

distance between lens and retina along the visual axis = 17 mm

range of focal lengths = 14 mm to 17 mm


Illustration of the Mach band effect. Perceived intensity is not a simple function of actual intensity.


All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter


Optical illusions


ALBASTRU (BLUE), VERDE (GREEN), GALBEN (YELLOW), ROSU (RED), PORTOCALIU (ORANGE), GALBEN (YELLOW), VISINIU (BURGUNDY), ALB (WHITE)


Light

• achromatic or monochromatic light – light that is void of color; the attribute of such light is its intensity, or amount

gray level is used to describe monochromatic intensity because it ranges from black, to grays, and finally to white

• chromatic light spans the electromagnetic energy spectrum from approximately 0.43 to 0.79 μm

Quantities that describe the quality of a chromatic light source:

radiance: the total amount of energy that flows from the light source; it is usually measured in watts (W)

luminance: measured in lumens (lm); gives a measure of the amount of energy an observer perceives from a light source


For example, light emitted from a source operating in the far infrared region of the spectrum could have significant energy (radiance), but an observer would hardly perceive it; its luminance would be almost zero.

brightness: a subjective descriptor of light perception that is practically impossible to measure. It embodies the achromatic notion of intensity and is one of the key factors in describing color sensation.


the physical meaning is determined by the source of the image

f : D → R, f = f(x, y)

Image generated from a physical process: f(x, y) – proportional to the energy radiated by the physical source

f(x, y) – characterized by two components:

i(x, y) = illumination component: the amount of source illumination incident on the scene being viewed

r(x, y) = reflectance component: the amount of illumination reflected by the objects in the scene

f(x, y) = i(x, y) · r(x, y),  with 0 < i(x, y) < ∞ and 0 < r(x, y) < 1


r(x, y) = 0 – total absorption; r(x, y) = 1 – total reflectance

i(x, y) – determined by the illumination source

r(x, y) – determined by the characteristics of the imaged objects

In practice:

L_min ≤ l = f(x, y) ≤ L_max, where L_min = i_min · r_min and L_max = i_max · r_max

(typical indoor values without additional illumination: L_min ≈ 10, L_max ≈ 1000)

The interval [L_min, L_max] is called the gray (or intensity) scale. Common practice is to shift this interval to [0, L-1], where l = 0 is considered black and l = L-1 is considered white.


Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed scene

Converting a continuous image f to digital form:

- digitizing the coordinate values (x, y) is called sampling

- digitizing the amplitude values f(x, y) is called quantization


Continuous image projected onto a sensor array. Result of image sampling and quantization.


Representing Digital Images

(x, y), x = 0, 1, …, M-1, y = 0, 1, …, N-1 – spatial variables or spatial coordinates

An M×N digital image can be written as the matrix

f(x, y) = | f(0,0)     f(0,1)     …  f(0,N-1)   |
          | f(1,0)     f(1,1)     …  f(1,N-1)   |
          | …                                   |
          | f(M-1,0)   f(M-1,1)   …  f(M-1,N-1) |

Each element of this matrix is an image element, or pixel. In matrix notation:

A = | a_00       a_01       …  a_0,N-1   |
    | a_10       a_11       …  a_1,N-1   |
    | …                                  |
    | a_M-1,0    a_M-1,1    …  a_M-1,N-1 |

with a_ij = f(x = i, y = j) = f(i, j).

f(0,0) – the upper left corner of the image


M, N – positive integers; the number of discrete intensity levels is typically a power of 2, L = 2^k, with a_ij ∈ [0, L-1]

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system. The upper limit is determined by saturation, the lower limit by noise.


Number of bits required to store a digitized image:

b = M × N × k; for M = N, b = N²k

(for example, a 1024 × 1024 image with k = 8 requires 2²⁰ bytes, i.e. 1 MB)

When an image can have 2^k intensity levels, the image is referred to as a "k-bit image"; an image with 256 discrete intensity values is an 8-bit image.


Spatial and Intensity Resolution

Spatial resolution – the smallest discernible detail in an image

Measures: line pairs per unit distance; dots (pixels) per unit distance

Image resolution = the largest number of discernible line pairs per unit distance

(e.g. 100 line pairs per mm)

Dots per unit distance are commonly used in printing and publishing

In the US, the measure is expressed in dots per inch (dpi)

(newspapers are printed with 75 dpi, glossy brochures at 175 dpi)

Intensity resolution – the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations

L = 2^k – most common: k = 8

In practice, intensity resolution is given by k, the number of bits used to quantize intensity


Fig. 1 Reducing spatial resolution: 1250 dpi (upper left), 300 dpi (upper right), 150 dpi (lower left), 72 dpi (lower right)


Reducing the number of gray levels: 256, 128, 64, 32


Reducing the number of gray levels: 16, 8, 4, 2


Image Interpolation – used in zooming, shrinking, rotating, and geometric corrections

Shrinking and zooming – image resizing – are image resampling methods

Interpolation is the process of using known data to estimate values at unknown locations

Suppose we have an image of size 500 × 500 pixels that has to be enlarged 1.5 times, to

750 × 750 pixels. One way to do this is to create an imaginary 750 × 750 grid with the

same spacing as the original and then shrink it so that it fits exactly over the original

image. The pixel spacing in the 750 × 750 grid will be less than in the original image.

Problem: assignment of intensity levels in the new 750 × 750 grid.

Nearest neighbor interpolation: assign to every point in the new grid (750 × 750) the

intensity of the closest pixel (nearest neighbor) from the old/original grid (500 × 500).

This technique has the tendency to produce undesirable effects like severe distortion of

straight edges


Bilinear interpolation – assigns to the new (x, y) location the following intensity: v(x, y) = a·x + b·y + c·x·y + d

where the four coefficients are determined from the 4 equations in 4 unknowns that can

be written using the 4 nearest neighbors of point (x y)

Bilinear interpolation gives much better results than nearest neighbor interpolation with a

modest increase in computational effort

Bicubic interpolation – assigns to the new (x, y) location an intensity that involves the 16

nearest neighbors of the point:

v(x, y) = Σ_{i=0}^{3} Σ_{j=0}^{3} c_ij · x^i · y^j

The 16 coefficients c_ij are obtained by solving a 16×16 linear system built from the

intensity levels of the 16 nearest neighbors of (x, y).
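To make the two simpler schemes concrete, here is a minimal sketch of zooming by inverse mapping, assuming NumPy and a grayscale image stored as a 2D array; the function name `zoom` and its signature are illustrative only, and output sizes of at least 2 in each dimension are assumed:

```python
import numpy as np

def zoom(img, new_h, new_w, method="bilinear"):
    """Resize a grayscale image: for every pixel of the output grid, find the
    corresponding (fractional) location in the input and interpolate."""
    h, w = img.shape
    out = np.zeros((new_h, new_w), dtype=np.float64)
    for y in range(new_h):
        for x in range(new_w):
            # location of (y, x) in the coordinate system of the input image
            v = y * (h - 1) / (new_h - 1)
            u = x * (w - 1) / (new_w - 1)
            if method == "nearest":
                out[y, x] = img[int(round(v)), int(round(u))]
            else:  # bilinear: weighted average of the 4 nearest neighbors
                v0, u0 = int(v), int(u)
                v1, u1 = min(v0 + 1, h - 1), min(u0 + 1, w - 1)
                dv, du = v - v0, u - u0
                out[y, x] = (img[v0, u0] * (1 - dv) * (1 - du)
                             + img[v0, u1] * (1 - dv) * du
                             + img[v1, u0] * dv * (1 - du)
                             + img[v1, u1] * dv * du)
    return out
```

As the slides note, the nearest-neighbor branch tends to produce jagged straight edges, while the bilinear branch smooths them at a modest extra cost.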


Generally bicubic interpolation does a better job of preserving fine detail than the

bilinear technique Bicubic interpolation is the standard used in commercial image editing

programs such as Adobe Photoshop and Corel Photopaint

Figure 2 (a) is the same as Fig 1 (d) which was obtained by reducing the resolution of

the 1250 dpi image in Fig 1(a) to 72 dpi (the size shrank from 3692 × 2812 to 213 × 162) and

then zooming the reduced image back to its original size To generate Fig 1(d) nearest

neighbor interpolation was used (both for shrinking and zooming)

Figures 2(b) and (c) were generated using the same steps but using bilinear and bicubic

interpolation respectively Figures 2(d)+(e)+(f) were obtained by reducing the resolution

from 1250 dpi to 150 dpi (instead of 72 dpi)


Fig. 2 – Interpolation examples for zooming and shrinking (nearest neighbor, bilinear, bicubic)


Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors:

(x+1, y), (x-1, y), (x, y+1), (x, y-1)

This set of pixels, called the 4-neighbors of p, is denoted by N_4(p).

The 4 diagonal neighbors of p have coordinates

(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)

and are denoted N_D(p).

The horizontal, vertical and diagonal neighbors together are called the 8-neighbors of p, denoted

N_8(p).

If (xy) is on the border of the image some of the neighbor locations in ND(p) and N8(p)

fall outside the image


Adjacency Connectivity Regions Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image, V is a subset of {0, 1}; typically V = {1}

- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, 1, …, 255}

We consider 3 types of adjacency:

(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if q ∈ N_4(p)

(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if q ∈ N_8(p)

(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are

m-adjacent if

q ∈ N_4(p), or

q ∈ N_D(p) and the set N_4(p) ∩ N_4(q) has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency. It is introduced to eliminate the

ambiguities that often arise when 8-adjacency is used. Consider the example:


Example – binary image, V = {1}:

0 1 1
0 1 0
0 0 1

The three pixels at the top (first line) in the above example show multiple (ambiguous)

8-adjacency, as indicated by the dashed lines in the original figure. This ambiguity is removed by using

m-adjacency


A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t)

is a sequence of distinct pixels with coordinates

(x_0, y_0), (x_1, y_1), …, (x_n, y_n)

where (x_0, y_0) = (x, y), (x_n, y_n) = (s, t), and the pixels (x_i, y_i) and (x_{i-1}, y_{i-1}) are adjacent for i = 1, 2, …, n.

The length of the path is n. If (x_0, y_0) = (x_n, y_n), the path is closed.

Depending on the type of adjacency considered, the paths are 4-, 8- or m-paths.

Let S denote a subset of pixels in an image. Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S.

S is a connected set if there is a path in S between any 2 pixels of S.

Let R be a subset of pixels in an image. R is a region of the image if R is a connected set.

Two regions R_1 and R_2 are said to be adjacent if R_1 ∪ R_2 forms a connected set. Regions

that are not adjacent are said to be disjoint. When referring to regions, only 4- and

8-adjacency are considered


Suppose that an image contains K disjoint regions R_k, k = 1, …, K, none of which

touches the image border.

Let R_u denote the union of all the K regions, R_u = ∪_{k=1}^{K} R_k, and let (R_u)^c denote its complement.

We call all the points in R_u the foreground of the image, and the points in (R_u)^c the

background of the image.

The boundary (border or contour) of a region R is the set of points that are adjacent to

points in the complement of R, (R)^c. The border of a region is the set of pixels in the

region that have at least one background neighbor. This definition is referred to as the

inner border, to distinguish it from the notion of outer border, which is the corresponding

border in the background.


Distance measures

For pixels p, q and z, with coordinates (x, y), (s, t) and (v, w) respectively, D is a distance

function or metric if

(a) D(p, q) ≥ 0, with D(p, q) = 0 iff p = q

(b) D(p, q) = D(q, p)

(c) D(p, z) ≤ D(p, q) + D(q, z)

The Euclidean distance between p and q is defined as

D_e(p, q) = [(x - s)² + (y - t)²]^{1/2}

The pixels q for which D_e(p, q) ≤ r are the points contained in a disk of radius r

centered at (x, y).


The D_4 distance (also called city-block distance) between p and q is defined as

D_4(p, q) = |x - s| + |y - t|

The pixels q for which D_4(p, q) ≤ r form a diamond centered at (x, y); for r = 2:

        2
    2   1   2
2   1   0   1   2
    2   1   2
        2

The pixels with D_4 = 1 are the 4-neighbors of (x, y).

The D_8 distance (also called the chessboard distance) between p and q is defined as

D_8(p, q) = max(|x - s|, |y - t|)

The pixels q for which D_8(p, q) ≤ r form a square centered at (x, y).


For r = 2:

2   2   2   2   2
2   1   1   1   2
2   1   0   1   2
2   1   1   1   2
2   2   2   2   2

The pixels with D_8 = 1 are the 8-neighbors of (x, y).

D4 and D8 distances are independent of any paths that might exist between p and q

because these distances involve only the coordinates of the point
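A small sketch of the three metrics (assuming Python; the names d4, d8 and de are illustrative):

```python
def d4(p, q):
    # city-block distance: |x - s| + |y - t|
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):
    # chessboard distance: max(|x - s|, |y - t|)
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def de(p, q):
    # Euclidean distance
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

# pixels q with D4(p, q) <= 2 form a diamond around p; D8 <= 2 forms a square
p = (0, 0)
diamond = [(x, y) for x in range(-3, 4) for y in range(-3, 4) if d4(p, (x, y)) <= 2]
square = [(x, y) for x in range(-3, 4) for y in range(-3, 4) if d8(p, (x, y)) <= 2]
print(len(diamond), len(square))  # 13 pixels in the diamond, 25 in the square
```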


Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis. Consider the 2×2 images

A = | a11  a12 |      B = | b11  b12 |
    | a21  a22 |          | b21  b22 |

Array product:

A .* B = | a11·b11   a12·b12 |
         | a21·b21   a22·b22 |

Matrix product:

A B = | a11·b11 + a12·b21   a11·b12 + a12·b22 |
      | a21·b11 + a22·b21   a21·b12 + a22·b22 |

We assume array operations unless stated otherwise.


Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether a method is

linear or nonlinear. Consider an operator H:

H[f(x, y)] = g(x, y)

H is said to be a linear operator if

H[a·f_1(x, y) + b·f_2(x, y)] = a·H[f_1(x, y)] + b·H[f_2(x, y)]

for any images f_1, f_2 and any scalars a, b.

Example of a nonlinear operator: the maximum value of the pixels of image f, H[f] = max f(x, y). Take

f_1 = | 0  2 |     f_2 = | 6  5 |     a = 1,  b = -1
      | 2  3 |           | 4  7 |


max(a·f_1 + b·f_2) = max | 0-6  2-5 | = max | -6  -3 | = -2
                         | 2-4  3-7 |       | -2  -4 |

a·max(f_1) + b·max(f_2) = 1·3 + (-1)·7 = -4

Since -2 ≠ -4, the max operator is nonlinear.
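The same check, done numerically (a sketch assuming NumPy, using the values from the example above):

```python
import numpy as np

# Numeric check that the max operator is nonlinear.
f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 5], [4, 7]])
a, b = 1, -1

lhs = (a * f1 + b * f2).max()        # H[a*f1 + b*f2] = -2
rhs = a * f1.max() + b * f2.max()    # a*H[f1] + b*H[f2] = 3 - 7 = -4
print(lhs, rhs, lhs == rhs)          # -2 -4 False -> max is nonlinear
```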

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise:

g(x, y) = f(x, y) + η(x, y)

f(x, y) – the noiseless image; η(x, y) – the noise, uncorrelated and with zero average value.

For a random variable z with mean m, E[(z - m)²] is the variance (E(·) is the expected

value). The covariance of two random variables z_1 and z_2 is defined as E[(z_1 - m_1)(z_2 - m_2)].

The two random variables are uncorrelated when their covariance is 0.


Objective: reduce noise by averaging a set of noisy images g_i(x, y) (a technique frequently

used in image enhancement):

ḡ(x, y) = (1/K) Σ_{i=1}^{K} g_i(x, y)

If the noise satisfies the properties stated above, we have

E{ḡ(x, y)} = f(x, y)    and    σ²_ḡ(x, y) = (1/K) σ²_η(x, y)

where E{ḡ(x, y)} is the expected value of ḡ, and σ²_ḡ(x, y) and σ²_η(x, y) are the variances of

ḡ and η, respectively. The standard deviation (square root of the variance) at any point in

the average image is

σ_ḡ(x, y) = (1/√K) σ_η(x, y)


As K increases, the variability (as measured by the variance or the standard deviation) of

the pixel values at each location (x, y) decreases. Because E{ḡ(x, y)} = f(x, y), this

means that ḡ(x, y) approaches f(x, y) as the number of noisy images used in the

averaging process increases.

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis. Figure 3 (upper left) shows an 8-bit image in which corruption

was simulated by adding to it Gaussian noise with zero mean and a standard deviation of

64 intensity levels. The remaining panels of Figure 3 show the results of averaging 5, 10, 20, 50 and 100

images, respectively.
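A minimal simulation of this effect, assuming NumPy; the constant test image and the noise parameters are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.full((256, 256), 128.0)      # stand-in for the noiseless image

def noisy(f, sigma=64.0):
    # one observation g_i = f + eta, with eta ~ N(0, sigma^2)
    return f + rng.normal(0.0, sigma, f.shape)

for K in (1, 5, 10, 20, 50, 100):
    g_bar = np.mean([noisy(f) for _ in range(K)], axis=0)
    # residual noise std should shrink like sigma / sqrt(K)
    print(K, round((g_bar - f).std(), 1), round(64 / np.sqrt(K), 1))
```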


Fig. 3 Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (upper left); results of averaging 5, 10, 20, 50 and 100 noisy images


A frequent application of image subtraction is in the enhancement of differences between

images

Fig. 4 (a) Infrared image of the Washington, DC area; (b) image obtained from (a) by setting to zero the least

significant bit of each pixel; (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between

images (a) and (b). Black (0) values in Figure 4(c) indicate locations where there is no

difference between images (a) and (b)


Mask mode radiography: g(x, y) = f(x, y) - h(x, y)

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source. The procedure consists of injecting an X-ray contrast medium into the patient's

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed


Fig. 5 – Angiography, subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c), enhanced


An important application of image multiplication (and division) is shading correction.

Suppose that an imaging sensor produces images in the form

g(x, y) = f(x, y) · h(x, y)

f(x, y) – the "perfect image"; h(x, y) – the shading function.

When the shading function is known:

f(x, y) = g(x, y) / h(x, y)

When h(x, y) is unknown but we have access to the imaging system, we can obtain an

approximation to the shading function by imaging a target of constant intensity. When the

sensor is not available, the shading pattern can often be estimated from the image.
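A sketch of both cases, assuming NumPy: dividing out a known shading function, and crudely estimating one by heavy smoothing when the sensor is unavailable (the box-blur estimate is an illustrative choice, not the method used for Fig. 6):

```python
import numpy as np

def correct_shading(g, h, eps=1e-6):
    """Divide out a known (or estimated) shading pattern h from the observed
    image g = f * h; eps guards against division by zero."""
    return np.clip(g / (h + eps), 0, 255)

def estimate_shading(g, k=31):
    """Crude shading estimate: a k x k box blur of g, normalized to max 1."""
    pad = k // 2
    gp = np.pad(g.astype(np.float64), pad, mode="edge")
    acc = np.zeros_like(g, dtype=np.float64)
    for dy in range(k):            # box blur via stacked shifts (no SciPy)
        for dx in range(k):
            acc += gp[dy:dy + g.shape[0], dx:dx + g.shape[1]]
    h = acc / (k * k)
    return h / h.max()             # normalize so correction preserves level
```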


Fig. 6 Shading correction: (a) shaded image of a tungsten filament, magnified 130 times; (b) shading pattern; (c) corrected image


Another use of image multiplication is in masking, also called region of interest (ROI)

operations. The process consists of multiplying a given image by a mask image that has

1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask

image, and the shape of the ROI can be arbitrary, but usually it is rectangular.

Fig. 7 (a) digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b)


In practice, most images are displayed using 8 bits, so the image values are in the range [0, 255]. For TIFF and JPEG images, conversion to this range is automatic; the conversion depends on the system used. The difference of two images can produce an image with values in the range [-255, 255]; the addition of two images, values in the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure: compute

f_m = f - min(f)

f_s = K · f_m / max(f_m)

which maps f onto the full range [0, K] (K = 255 for 8-bit images).
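A sketch of this scaling procedure, assuming NumPy:

```python
import numpy as np

def to_uint8(f, K=255):
    # shift to zero, then scale the full range onto [0, K]
    fm = f - f.min()
    fs = K * fm / max(fm.max(), 1e-12)   # guard against a constant image
    return fs.astype(np.uint8)

diff = np.array([[-255, 0], [128, 255]], dtype=np.int16)
print(to_uint8(diff))   # [[  0 127] [191 255]]
```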


Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the intensity values of individual pixels: s = T(z)

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image


Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

g(x, y) = (1/(m·n)) Σ_{(r,c) ∈ S_xy} f(r, c)

The net effect is to perform local blurring in the original image. This type of process is

used, for example, to eliminate small details and thus render "blobs" corresponding to the

largest regions of an image.
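A direct sketch of the averaging operation above, assuming NumPy (handling border pixels by edge replication is an implementation choice the slides do not specify):

```python
import numpy as np

def local_mean(f, m=3, n=3):
    """Neighborhood averaging: g(x, y) = mean of f over an m x n window S_xy
    centered on (x, y)."""
    pad_y, pad_x = m // 2, n // 2
    fp = np.pad(f.astype(np.float64), ((pad_y, pad_y), (pad_x, pad_x)), mode="edge")
    g = np.zeros_like(f, dtype=np.float64)
    for dy in range(m):
        for dx in range(n):
            g += fp[dy:dy + f.shape[0], dx:dx + f.shape[1]]
    return g / (m * n)
```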


Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules)

A geometric transformation consists of 2 basic operations

1. a spatial transformation of coordinates

2. intensity interpolation that assigns intensity values to the spatially transformed

pixels

The coordinate system transformation: (x, y) = T[(v, w)]

(v, w) – pixel coordinates in the original image

(x, y) – pixel coordinates in the transformed image


T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform:

[x y 1] = [v w 1] T, where

T = | t11  t12  0 |
    | t21  t22  0 |
    | t31  t32  1 |

so that x = t11·v + t21·w + t31 and y = t12·v + t22·w + t32.   (AT)

This transform can scale, rotate, translate or shear a set of coordinate points, depending

on the elements of the matrix T. If we want to resize an image, rotate it and move the

result to some location, we simply form a 3×3 matrix equal to the matrix product of the

corresponding scaling, rotation and translation matrices (see the table of affine transformations below).


Affine transformations


The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (nearest neighbor, bilinear, or bicubic interpolation).

In practice, we can use equation (AT) in two basic ways:

forward mapping: scan the pixels of the input image (v, w) and compute the new spatial

location (x, y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)


inverse mapping: scans the output pixel locations and, at each location (x, y),

computes the corresponding location in the input image: (v, w) = T⁻¹[(x, y)]

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB, for example).
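A minimal sketch of inverse mapping for the affine transform (AT), assuming NumPy; nearest-neighbor interpolation and the (row, col) coordinate convention are implementation choices made here for brevity:

```python
import numpy as np

def affine_warp(img, T, out_shape):
    """Inverse mapping: for every output pixel (x, y), compute
    [v w 1] = [x y 1] T^{-1} in the input image and sample it."""
    Tinv = np.linalg.inv(T)
    out = np.zeros(out_shape, dtype=img.dtype)
    for x in range(out_shape[0]):
        for y in range(out_shape[1]):
            v, w, _ = np.array([x, y, 1.0]) @ Tinv
            vi, wi = int(round(v)), int(round(w))
            if 0 <= vi < img.shape[0] and 0 <= wi < img.shape[1]:
                out[x, y] = img[vi, wi]
    return out

# example: scale an image by 1.5 in both directions
T = np.array([[1.5, 0.0, 0.0],
              [0.0, 1.5, 0.0],
              [0.0, 0.0, 1.0]])
img = np.arange(16, dtype=np.uint8).reshape(4, 4)
print(affine_warp(img, T, (6, 6)))
```

Scanning output locations this way guarantees that every output pixel receives exactly one value, which is why inverse mapping avoids the two problems listed above for forward mapping.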


Image registration – align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more images taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem: use tie points (also called control points), which are

corresponding points whose locations are known precisely in the input and reference

images.


How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors. These objects produce a set of known points (called reseau

marks) directly on all images captured by the system, which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling. Suppose we have a set

of 4 tie points, both on the input image and the reference image. A simple model, based on

a bilinear approximation is given by

x = c_1·v + c_2·w + c_3·v·w + c_4

y = c_5·v + c_6·w + c_7·v·w + c_8

(v, w) and (x, y) are the coordinates of the tie points (the 4 tie-point pairs give an 8×8 linear system for the c_i).
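A sketch of setting up and solving that 8×8 system from 4 tie-point pairs, assuming NumPy; fit_bilinear_model is an illustrative name:

```python
import numpy as np

def fit_bilinear_model(vw, xy):
    """Estimate c1..c8 of the bilinear tie-point model from 4 point pairs.
    vw, xy: arrays of shape (4, 2) holding matching tie points."""
    A = np.zeros((8, 8))
    b = np.zeros(8)
    for k, ((v, w), (x, y)) in enumerate(zip(vw, xy)):
        A[2 * k, 0:4] = [v, w, v * w, 1.0]      # x-equation row
        A[2 * k + 1, 4:8] = [v, w, v * w, 1.0]  # y-equation row
        b[2 * k], b[2 * k + 1] = x, y
    return np.linalg.solve(A, b)                # [c1..c4, c5..c8]

# identity check: the model should map each corner onto itself
corners = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
c = fit_bilinear_model(corners, corners)
print(np.round(c, 6))  # x-part: c1 = 1, rest 0; y-part: c6 = 1, rest 0
```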


When 4 tie points are insufficient to obtain satisfactory registration, an approach used

frequently is to select a larger number of tie points and, using this new set of tie points,

subdivide the image into rectangular regions marked by groups of 4 tie points. On the

subregions marked by 4 tie points we apply the transformation model described above.

The number of tie points and the sophistication of the model required to solve the

registration problem depend on the severity of the geometrical distortion.


(a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c)


Probabilistic Methods

z_k = the values of all possible intensities in an M×N digital image, k = 0, 1, …, L-1

p(z_k) = the probability that the intensity level z_k occurs in the given image:

p(z_k) = n_k / (M·N)

where n_k = the number of times that intensity z_k occurs in the image (M·N is the total number of

pixels in the image). The probabilities sum to one:

Σ_{k=0}^{L-1} p(z_k) = 1

The mean (average) intensity of an image is given by

m = Σ_{k=0}^{L-1} z_k · p(z_k)


The variance of the intensities is

σ² = Σ_{k=0}^{L-1} (z_k - m)² p(z_k)

The variance is a measure of the spread of the values of z about the mean, so it is a

measure of image contrast. Usually, for measuring image contrast, the standard deviation

(σ) is used.

The n-th moment of a random variable z about the mean is defined as

μ_n(z) = Σ_{k=0}^{L-1} (z_k - m)ⁿ p(z_k)

(μ_0(z) = 1, μ_1(z) = 0, μ_2(z) = σ²)

μ_3(z) > 0: the intensities are biased to values higher than the mean

μ_3(z) < 0: the intensities are biased to values lower than the mean


μ_3(z) ≈ 0: the intensities are distributed approximately equally on both sides of the

mean

Fig. 1 (a) Low contrast; (b) medium contrast; (c) high contrast

Figure 1(a) – standard deviation 14.3 (variance = 204.5)

Figure 1(b) – standard deviation 31.6 (variance = 998.6)

Figure 1(c) – standard deviation 49.2 (variance = 2420.6)
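These statistics can be computed directly from the normalized histogram; a sketch assuming NumPy and an 8-bit grayscale image:

```python
import numpy as np

def intensity_stats(img, L=256):
    """Mean, variance and third central moment from the normalized
    histogram p(z_k) = n_k / (M*N) of a grayscale image."""
    n_k = np.bincount(img.ravel(), minlength=L)
    p = n_k / img.size
    z = np.arange(L)
    m = np.sum(z * p)                  # mean intensity
    var = np.sum((z - m) ** 2 * p)     # variance (contrast measure)
    mu3 = np.sum((z - m) ** 3 * p)     # skew of the intensity distribution
    return m, var, mu3

img = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
print(intensity_stats(img))
```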


Intensity Transformations and Spatial Filtering

g(x, y) = T[f(x, y)]

f(x, y) – input image; g(x, y) – output image; T – an operator on f, defined over a

neighborhood of (x, y)

- the neighborhood of the point (x, y), S_xy, usually is rectangular, centered on (x, y),

and much smaller in size than the image


- spatial filtering: the operator T (the neighborhood and the operation applied on it) is

called a spatial filter (spatial mask, kernel, template or window)

When S_xy = {(x, y)} (a 1×1 neighborhood), T becomes an intensity (gray-level or mapping) transformation function:

s = T(r)

where s and r denote, respectively, the intensities of g and f at (x, y).

Figure 2 (left) – T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k – this technique

is called contrast stretching

Fig. 2 Intensity transformation functions: left – contrast stretching; right – thresholding function


Figure 2 (right) – T produces a binary output image. A mapping of this form is called a

thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L-1] is obtained using the function s = T(r) = L - 1 - r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image


Original image; negative image


Log Transformations: s = T(r) = c·log(1 + r), where c is a constant and r ≥ 0

Some basic intensity transformation functions


This transformation maps a narrow range of low intensity values in the input into a wider

range. An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values. The opposite is true for the inverse log

transformation. The log functions compress the dynamic range of images with large

variations in pixel values.

Figure 4(a) – intensity values in the range 0 to 1.5 × 10⁶

Figure 4(b) – log transformation of Figure 4(a) with c = 1 – range 0 to 6.2

Fig. 4 (a) Fourier spectrum; (b) log transformation applied to (a), c = 1


Power-Law (Gamma) Transformations

s = T(r) = c·r^γ, with c and γ positive constants (sometimes written s = c·(r + ε)^γ to account for an offset)

Plots of gamma transformation for different values of γ (c=1)


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range

of output values, with the opposite being true for higher values of input values. The

curves with γ > 1 have the opposite effect of those generated with values of γ < 1.

c = γ = 1: identity transformation

A variety of devices used for image capture, printing and display respond according to a

power law. The process used to correct these power-law response phenomena is called

gamma correction.
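Sketches of the three transformations discussed so far (negative, log, gamma), assuming NumPy; normalizing to [0, 1] before applying the power law is a common convention chosen here, not mandated by the slides:

```python
import numpy as np

def negative(f, L=256):
    # s = (L - 1) - r
    return (L - 1) - f

def log_transform(f, c=1.0):
    # s = c * log(1 + r); compresses large dynamic ranges
    return c * np.log1p(f.astype(np.float64))

def gamma_transform(f, gamma, c=1.0, L=256):
    # s = c * r^gamma, computed on intensities normalized to [0, 1]
    r = f.astype(np.float64) / (L - 1)
    return np.clip(c * r ** gamma * (L - 1), 0, L - 1).astype(np.uint8)

f = np.arange(0, 256, dtype=np.uint8).reshape(16, 16)
dark_boosted = gamma_transform(f, 0.4)   # gamma < 1 brightens dark regions
darkened = gamma_transform(f, 3.0)       # gamma > 1 darkens the image
```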


(a) aerial image; (b)-(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0 and 5.0, respectively


Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

Fig. 5 (a)-(d)


The contrast-stretching transformation is piecewise linear:

T(r) = (s_1 / r_1) · r,                                    for r in [0, r_1]

T(r) = s_1 + ((s_2 - s_1) / (r_2 - r_1)) · (r - r_1),      for r in [r_1, r_2]

T(r) = s_2 + ((L - 1 - s_2) / (L - 1 - r_2)) · (r - r_2),  for r in [r_2, L-1]


r_1 = s_1 and r_2 = s_2: identity transformation (no change)

r_1 = r_2, s_1 = 0, s_2 = L-1: thresholding function

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c) – contrast stretching, obtained by setting the parameters (r_1, s_1) = (r_min, 0) and

(r_2, s_2) = (r_max, L-1), where r_min and r_max denote the minimum and maximum gray levels

in the image, respectively. Thus, the transformation function stretched the levels linearly

from their original range to the full range [0, L-1].

Figure 5(d) – the thresholding function was used, with (r_1, s_1) = (m, 0) and

(r_2, s_2) = (m, L-1), where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope

image of pollen, magnified approximately 700 times.
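A sketch of the piecewise-linear stretch with the two control points, assuming NumPy; the guard terms for degenerate parameter choices are an implementation detail added here:

```python
import numpy as np

def contrast_stretch(f, r1, s1, r2, s2, L=256):
    """Piecewise-linear contrast stretching with control points (r1, s1), (r2, s2).
    Assumes 0 <= r1 <= r2 <= L-1."""
    f = f.astype(np.float64)
    out = np.empty_like(f)
    lo = f < r1
    hi = f > r2
    mid = ~lo & ~hi
    out[lo] = (s1 / r1) * f[lo] if r1 > 0 else s1
    out[mid] = s1 + (s2 - s1) / max(r2 - r1, 1e-12) * (f[mid] - r1)
    out[hi] = s2 + (L - 1 - s2) / max(L - 1 - r2, 1e-12) * (f[hi] - r2)
    return out.astype(np.uint8)

# full-range stretch: (r1, s1) = (f.min(), 0), (r2, s2) = (f.max(), 255)
img = np.random.default_rng(1).integers(80, 160, (32, 32), dtype=np.uint8)
stretched = contrast_stretch(img, img.min(), 0, img.max(), 255)
print(img.min(), img.max(), "->", stretched.min(), stretched.max())
```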


Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1. display in one value (white, for example) all the values in the range of interest, and in

another (say, black) all other intensities (first transformation shown below)

2. brighten (or darken) the desired range of intensities but leave unchanged all other

intensities in the image (second transformation shown below)


Figure 6 (left) – aortic angiogram near the kidney. The purpose of intensity slicing is to

highlight the major blood vessels, which appear brighter as a result of injecting a contrast

medium. Figure 6 (middle) shows the result of applying the first technique for a band near

the top of the scale of intensities. This type of enhancement produces a binary image

Highlights intensity range [A, B] and reduces all other intensities to a lower level

Highlights range [A, B] and preserves all other intensities


which is useful for studying the shape of the flow of the contrast substance (to detect

blockages, for example).

In Figure 6 (right), the second technique was used: a band of intensities in the mid-gray

range around the mean intensity was set to black; the other intensities remain unchanged.

Fig. 6 – Aortic angiogram and intensity-sliced versions


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2.

This technique highlights the contribution made to the whole image appearance by each

of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 – the lowest-order bit, plane 8 – the highest-order bit).


The binary image for the 8th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all the intensities between 0 and 127 to 0, and all levels between 128 and 255 to 1. The bit-slicing technique is useful for analyzing the relative importance of each bit in the image – it helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
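A sketch of extracting and partially reconstructing bit planes, assuming NumPy (plane indices here run 0-7, i.e. shifted by one relative to the 1-8 numbering above):

```python
import numpy as np

def bit_planes(img):
    """Decompose an 8-bit image into its eight binary bit planes.
    Plane 0 is the least significant bit, plane 7 the most significant."""
    return [(img >> k) & 1 for k in range(8)]

def reconstruct(planes, bits=(7, 6, 5, 4)):
    # rebuild the image from a subset of planes (useful for compression studies)
    img = np.zeros_like(planes[0], dtype=np.uint8)
    for k in bits:
        img |= (planes[k].astype(np.uint8) << k)
    return img

img = np.random.default_rng(2).integers(0, 256, (8, 8), dtype=np.uint8)
approx = reconstruct(bit_planes(img))          # keeps the 4 highest-order bits
print(np.abs(img.astype(int) - approx.astype(int)).max())  # at most 15
```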

  • DIP 1 2017
  • DIP 02 (2017)
Page 5: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Meet LenaThe First Lady of the Internet

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Lenna Soderberg (Sjoumloumlblom) and Jeff Seideman taken in May 1997Imaging Science amp Technology Conference

Digital Image ProcessingDigital Image Processing

Week 1Week 1

What is Digital Image Processing

f(xy) = intensity gray level of the image at spatial point (xy)

x y f(xy) ndash finite discrete quantities -gt digital image

Digital Image Processing = processing digital images by means of a digital computer

A digital image is composed of a finite number of elements (location value of intensity)

These elements are called picture elements image elements pels pixels

( )i j ijx y f

3 f D

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image processing is not limited to the visual band of the electromagnetic (EM) spectrum

Image processing gamma to radio waves ultrasound electron microscopy computer-generated images

image processing image analysis computer vision

Image processing = discipline in which both the input and the output of a process are images

Computer Vision = use computer to emulate human vision (AI) ndashlearning making inferences and take actions based on visual inputs

Image analysis (image understanding) = segmentation partitioning images into regions or objects

(link between image processing and image analysis)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Distinction between image processing image analysis computer vision

low-level mid-level high-level processes

Low-level processes image preprocessing to reduce noise contrast enhancement image sharpening both input and output are images

Mid-level processes segmentation partitioning images into regions or objects description of the objects for computer processing classificationrecognition of individual objects inputs are generally images outputs are attributes extracted from the input image (eg edges contours identity of individual objects)

High-level processes ldquomaking senserdquo of a set of recognized objects performing the cognitive functions associated with vision

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image Processing (Gonzalez + Woods) =

processes whose inputs and outputs are images +

processes that extract attributes from images recognition of individual objects

(low- and mid-level processes)

Example

automated analysis of text = acquiring an image containing text preprocessing the image (enhancement sharpening) extracting (segmenting) the individual characaters describing the characters in a form suitable for computer processing recognition of individual characters

Digital Image ProcessingDigital Image Processing

Week 1Week 1

The Origins of DIP

Newspaper industry pictures were sent by submarine cable between London and New York

Before Bartlane cable picture transmission system (early 1920s) ndash 1 week

With Bartlane system less than 3 hours

Specialized printing equipment coded pictures for cable transmission and reconstructed them at the receiving end (1920s -5 distict levels of gray 1929 ndash 15 levels)

This example is not DIP the computer is not involved

DIP is linked and devolps in the same rhythm as digital computers (data storage display and transmission)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

A digital pictureproduced in 1921from a coded tapeby a telegraph printer withspecial type faces(McFarlane)

A digital picturemade in 1922from a tapepunched after the signals hadcrossed the Atlantic twice(McFarlane)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

1964 Jet Propulsion Laboratory (Pasadena California) processed pictures of the moon transmitted by Ranger 7 (corrected image distortions)

The first picture of the moon by a US spacecraft Ranger 7 took this image July 31 1964 about 17 minutes before impacting the lunar surface(Courtesy of NASA)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

1960-1970 ndash image processing techniques were used in medical image remote Earth resources observations astronomy

1970s ndash invention of CAT (computerized axial tomography)

CAT is a process in which a ring of detectors encircles an object (patient) and a X-ray source concentric with the detector ring rotates about the object The X-ray passes through the patient and the resulted waves are collected at the opposite end by the detectors As the source rotates the procedure is repeated Tomography consists of algorithms that use the sense data to construct an image that represent a ldquoslicerdquo through the object Motion of the object in a direction perpendicular to the ring of detectors produces a set of ldquoslicesrdquo which can be assembled in a 3D information of the inside of the object

Digital Image ProcessingDigital Image Processing

Week 1Week 1

loz geographers use DIP to study pollution patterns from aerial and satellite imagery

loz archeology ndash DIP allowed restoring blurred pictures that recorded rare artifacts lost or damaged after being photographed

loz physics ndash enhance images of experiments (high-energy plasmas electron microscopy)

loz astronomy biology nuclear medicine law enforcement industry

DIP ndash used in solving problems dealing with machine perception ndashextracting from an image information suitable for computer processing (statistical moments Fourier transform coefficients hellip)

loz automatic character recognition industrial machine vision for product assembly and inspection military recognizance automatic processing of fingerprints machine processing of aerial and satellite imagery for weather prediction Internet

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Examples of Fields that Use DIP

Images can be classified according to their sources (visual X-ray hellip)

Energy sources for images electromagnetic energy spectrum acoustic ultrasonic electronic computer- generated

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Electromagnetic waves can be thought as propagating sinusoidal

waves of different wavelength or as a stream of massless particles

each moving in a wavelike pattern with the speed of light Each

massless particle contains a certain amount (bundle) of energy Each

bundle of energy is called a photon If spectral bands are grouped

according to energy per photon we obtain the spectrum shown in the

image above ranging from gamma-rays (highest energy) to radio

waves (lowest energy)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Gamma-Ray Imaging

Nuclear medicine astronomical observations

Nuclear medicine

the approach is to inject a patient with a radioactive isotope that emits gamma rays as it decays

Images are produced from the emissions collected by gamma-ray detectors

Images of this sort are used to locate sites of bone pathology (infections tumors)

PET (positron emision tomography) ndash the patient is given a radioactive isotope that emits positrons as it decays

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Examples of gamma-ray imaging

Bone scan PET image

Digital Image ProcessingDigital Image Processing

Week 1Week 1

X-ray imaging

Medical diagnosticindustry astronomy

A X-ray tube is a vacuum tube with a cathode and an anode The cathode is heated causing free electrons to be released The electrons flows at high speed to the positively charged anode When the electrons strike a nucleus energy is released in the form of a X-ray radiation The energy (penetreting power) of the X-rays is controlled by a voltage applied across the anode and by a curent applied to the filament in the cathode

The intensity of the X-rays is modified by absorbtion as they pass through the patient and the resulting energy falling develops it much in the same way that light develops photographic film

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Angiography = contrast-enhancement radiography

Angiograms = images of blood vessels

A catheter is inserted into an artery or vein in the groin The catheter is threaded into the blood vessel and guided to the area to be studied When it reaches the area to be studied a X-ray contrast medium is injected through the catheter This enhances contrast of the blood vessels and enables radiologist to see anyirregularities or blockages

X-rays are used in CAT (computerized axial tomography)

X-rays used in industrial processes (examine circuit boards for flows in manifacturing)

Industrial CAT scans are useful when the parts can be penetreted by X-rays

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Examples of X-ray imaging

Chest X-rayAortic angiogram

Head CT Cygnus LoopCircuit boards

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Ultraviolet Band

Litography industrial inspection microscopy biological imaging astronomical observations

Ultraviolet light is used in fluorescence microscopy

Ultraviolet light is not visible to the human eye, but when a photon of ultraviolet radiation collides with an electron in an atom of a fluorescent material, it elevates the electron to a higher energy level. After that, the electron relaxes to a lower level and emits light in the form of a lower-energy photon in the visible (red) light region.

Fluorescence = emission of light by a substance that has absorbed light or other electromagnetic radiation of a different wavelength.

Fluorescence microscope = uses an excitation light to irradiate a prepared specimen and then separates the much weaker radiating fluorescent light from the brighter excitation light.

Imaging in the Visible and Infrared Bands

Light microscopy astronomy remote sensing industry law enforcement

LANDSAT satellite – obtained and transmitted images of the Earth from space, for purposes of monitoring environmental conditions on the planet.

Weather observation and prediction are also major applications of multispectral images from satellites.

Table 1 – Thematic bands in NASA's LANDSAT satellite

Satellite images of the Washington, D.C. area in the spectral bands of Table 1.

Examples of light microscopy

Taxol (anticancer agent), magnified 250X

Cholesterol (40X)

Microprocessor (60X)

Nickel oxide thin film (600X)

Surface of audio CD (1750X)

Organic superconductor (450X)

Automated visual inspection of manufactured goods

a – a circuit board controller; b – packaged pills; c – bottles; d – air bubbles in a clear-plastic product; e – cereal; f – image of an intraocular implant.

Imaging in the Microwave Band

The dominant application of imaging in the microwave band is radar.

• radar has the ability to collect data over virtually any region at any time, regardless of weather or ambient light conditions

• some radar waves can penetrate clouds; under certain conditions they can also penetrate vegetation, ice, and dry sand

• sometimes radar is the only way to explore inaccessible regions of the Earth's surface

An imaging radar works like a flash camera: it provides its own illumination (microwave pulses) to light an area on the ground and take a snapshot image. Instead of a camera lens, a radar uses an antenna and a digital device to record the images. In a radar image, one can see only the microwave energy that was reflected back toward the radar antenna.


Spaceborne radar image of mountains in southeast Tibet


Imaging in the Radio Band

Medicine, astronomy.

MRI = Magnetic Resonance Imaging

This technique places the patient in a powerful magnet and passes short pulses of radio waves through his or her body

Each pulse causes a responding pulse of radio waves to be emitted by the patient's tissues.

The location from which these signals originate and their strength are determined by a computer which produces a 2D picture of a section of the patient


MRI images of a human knee (left) and spine (right)


Images of the Crab Pulsar covering the electromagnetic spectrum

Gamma X-ray Optical Infrared Radio


Other Imaging Modalities

Acoustic imaging, electron microscopy, synthetic (computer-generated) imaging.

Imaging using sound: geological exploration, industry, medicine.

Mineral and oil exploration

For image acquisition over land, one of the main approaches is to use a large truck and a large flat steel plate. The plate is pressed on the ground by the truck, and the truck is vibrated through a frequency spectrum up to 100 Hz. The strength and speed of the returning sound waves are determined by the composition of the Earth below the surface. These are analyzed by a computer, and images are generated from the resulting analysis.

Biometry - iris


Biometry - fingerprint


Face detection and recognition


Gender identification


Image morphing


Fundamental Steps in DIP

methods whose input and output are images

methods whose inputs are images but whose outputs are attributes extracted from those images


Outputs are images

• image acquisition

• image filtering and enhancement

• image restoration

• color image processing

• wavelets and multiresolution processing

• compression

• morphological processing

Outputs are attributes

• morphological processing

• segmentation

• representation and description

• object recognition

Image acquisition - may involve preprocessing such as scaling

Image enhancement

• manipulating an image so that the result is more suitable than the original for a specific operation

• enhancement is problem oriented

• there is no general 'theory' of image enhancement

• enhancement uses subjective methods for image improvement

• enhancement is based on human subjective preferences regarding what is a "good" enhancement result

Image restoration

• improving the appearance of an image

• restoration is objective – the techniques for restoration are based on mathematical or probabilistic models of image degradation

Color image processing

• fundamental concepts of color models

• basic color processing in a digital domain

Wavelets and multiresolution processing

representing images in various degrees of resolution

Compression

reducing the storage required to save an image, or the bandwidth required to transmit it

Morphological processing

• tools for extracting image components that are useful in the representation and description of shape

• a transition from processes that output images to processes that output image attributes

Segmentation

• partitioning an image into its constituent parts or objects

• autonomous segmentation is one of the most difficult tasks of DIP

• the more accurate the segmentation, the more likely recognition is to succeed

Representation and description (almost always follows segmentation)

• segmentation produces either the boundary of a region or all the points in the region itself

• converting the data produced by segmentation to a form suitable for computer processing

• boundary representation: the focus is on external shape characteristics, such as corners or inflections

• complete region: the focus is on internal properties, such as texture or skeletal shape

• description is also called feature extraction – extracting attributes that result in some quantitative information of interest, or that are basic for differentiating one class of objects from another

Object recognition

the process of assigning a label (e.g., "vehicle") to an object based on its descriptors

Knowledge database

Simplified diagram of a cross section of the human eye.

Three membranes enclose the eye: the cornea and sclera outer cover, the choroid, and the retina.

The cornea is a tough, transparent tissue that covers the anterior surface of the eye.

Continuous with the cornea, the sclera is an opaque membrane that encloses the remainder of the optic globe.

The choroid lies directly below the sclera. This membrane contains a network of blood vessels (the major source of nutrition of the eye). The choroid is pigmented and helps reduce the amount of light entering the eye. At its anterior extreme, the choroid is divided into the ciliary body and the iris. The iris contracts and expands to control the amount of light.

The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to the ciliary body (60-70% water, 6% fat, the remainder protein). The lens is slightly yellow and absorbs approximately 8% of the visible light (infrared and ultraviolet light are absorbed by proteins in the lens).

The innermost membrane is the retina. When the eye is properly focused, light from an object outside the eye is imaged on the retina. Vision is possible because of the distribution of discrete light receptors on the surface of the retina: cones and rods (6-7 million cones, 75-150 million rods).

Cones: located in the central part of the retina (the fovea); sensitive to color; responsible for vision of detail; each cone is linked to its own nerve. Cone vision = photopic or bright-light vision.

Fovea = the place where the image of the object of interest falls.

Rods: distributed over the entire retinal surface; several rods are connected to a single nerve; not specialized in detail vision; serve to give a general, overall picture of the field of view; not involved in color vision; sensitive to low levels of illumination.

Blind spot: region without receptors.

Image formation in the eye

Ordinary photographic camera: the lens has a fixed focal length; focusing at various distances is done by modifying the distance between the lens and the image plane (where the film or imaging chip is located).

Human eye: the distance between the lens and the retina (the imaging region) is fixed; the focal length needed to achieve proper focus is obtained by varying the shape of the lens (the fibers in the ciliary body accomplish this, flattening or thickening the lens for distant or near objects, respectively).

Distance between lens and retina along the visual axis = 17 mm; range of focal lengths = 14 mm to 17 mm.

Illustration of the Mach band effect. Perceived intensity is not a simple function of actual intensity.

All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter


Optical illusions


ALBASTRU (BLUE), VERDE (GREEN), GALBEN (YELLOW), ROSU (RED), PORTOCALIU (ORANGE), GALBEN (YELLOW), VISINIU (BURGUNDY), ALB (WHITE)

Light

• achromatic or monochromatic light – light that is void of color; the only attribute of such light is its intensity, or amount. The term gray level is used to describe monochromatic intensity, because it ranges from black, through grays, to white.

• chromatic light spans the electromagnetic energy spectrum from approximately 0.43 to 0.79 μm.

Quantities that describe the quality of a chromatic light source:

• radiance: the total amount of energy that flows from the light source; it is usually measured in watts (W);

• luminance: measured in lumens (lm); gives a measure of the amount of energy an observer perceives from a light source.

For example, light emitted from a source operating in the far infrared region of the spectrum could have significant energy (radiance), but an observer would hardly perceive it; its luminance would be almost zero.

• brightness: a subjective descriptor of light perception that is practically impossible to measure. It embodies the achromatic notion of intensity and is one of the key factors in describing color sensation.

The physical meaning of the values of f is determined by the source of the image. When an image is generated from a physical process, f(x, y) is proportional to the energy radiated by the physical source, so f takes positive, finite values: 0 < f(x, y) < ∞.

f(x, y) is characterized by two components:

i(x, y) = the illumination component: the amount of source illumination incident on the scene being viewed;

r(x, y) = the reflectance component: the amount of illumination reflected by the objects in the scene;

f(x, y) = i(x, y) · r(x, y),   with 0 < i(x, y) < ∞ and 0 < r(x, y) < 1.

r(x, y) = 0 – total absorption; r(x, y) = 1 – total reflectance.

i(x, y) – determined by the illumination source.

r(x, y) – determined by the characteristics of the imaged objects.

It follows that L_min ≤ l = f(x, y) ≤ L_max, where L_min = i_min · r_min and L_max = i_max · r_max. Typical indoor values, without additional illumination: L_min ≈ 10, L_max ≈ 1000.

The interval [L_min, L_max] is called the gray (or intensity) scale. In practice the interval is shifted to [0, L−1], where l = 0 is considered black and l = L−1 is considered white.

Image Sampling and Quantization

The output of most sensors is a continuous voltage waveform related to the sensed scene. Converting a continuous image f to digital form requires:

- digitizing the coordinate values (x, y), which is called sampling;

- digitizing the amplitude values f(x, y), which is called quantization.


Continuous image projected onto a sensor array; result of image sampling and quantization.


Representing Digital Images

f(x, y), x = 0, 1, …, M−1, y = 0, 1, …, N−1 – spatial variables, or spatial coordinates.

The image can be written as an M × N array:

f(x, y) = [ f(0,0)      f(0,1)      …   f(0,N−1)
            f(1,0)      f(1,1)      …   f(1,N−1)
            …
            f(M−1,0)    f(M−1,1)    …   f(M−1,N−1) ]

Each element of the array is an image element (pixel). Equivalently, in matrix notation, A = [a_ij], where a_ij = f(x = i, y = j) = f(i, j).

f(0, 0) – the upper left corner of the image.

M, N > 0; the number of intensity levels is L = 2^k, with a_ij ∈ [0, L−1].

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system. The upper limit is determined by saturation, the lower limit by noise.

Number of bits required to store a digitized image: b = M × N × k; for M = N, b = N²·k.

When an image can have 2^k intensity levels, the image is referred to as a k-bit image; an image with 256 discrete intensity values is an 8-bit image. For example, a 1024 × 1024 8-bit image requires 1024 · 1024 · 8 = 8,388,608 bits of storage.

Spatial and Intensity Resolution

Spatial resolution – the smallest discernible detail in an image.

Measures: line pairs per unit distance; dots (pixels) per unit distance.

Image resolution = the largest number of discernible line pairs per unit distance (e.g., 100 line pairs per mm).

Dots per unit distance is the measure commonly used in printing and publishing; in the US it is expressed in dots per inch (dpi). (Newspapers are printed at 75 dpi, glossy brochures at 175 dpi.)

Intensity resolution – the smallest discernible change in intensity level.

The number of intensity levels (L) is determined by hardware considerations: L = 2^k, most commonly with k = 8.

In practice, intensity resolution is given by k, the number of bits used to quantize intensity.


Fig. 1 Reducing spatial resolution: 1250 dpi (upper left), 300 dpi (upper right), 150 dpi (lower left), 72 dpi (lower right).

Reducing the number of gray levels: 256, 128, 64, 32.


Reducing the number of gray levels: 16, 8, 4, 2.


Image Interpolation – used in zooming, shrinking, rotating, and geometric corrections. Shrinking and zooming are image resizing, i.e. image resampling, methods.

Interpolation is the process of using known data to estimate values at unknown locations.

Suppose we have an image of size 500 × 500 pixels that has to be enlarged 1.5 times, to 750 × 750 pixels. One way to do this is to create an imaginary 750 × 750 grid with the same spacing as the original and then shrink it so that it fits exactly over the original image. The pixel spacing in the shrunken 750 × 750 grid will be less than in the original image.

Problem: assignment of intensity levels to the points of the new 750 × 750 grid.

Nearest neighbor interpolation: assign to every point in the new grid (750 × 750) the intensity of the closest pixel (nearest neighbor) from the original grid (500 × 500). This technique has the tendency to produce undesirable effects, like severe distortion of straight edges.

Bilinear interpolation – assign to the new (x, y) location the intensity

v(x, y) = a·x + b·y + c·x·y + d,

where the four coefficients are determined from the 4 equations in 4 unknowns that can be written using the 4 nearest neighbors of point (x, y). Bilinear interpolation gives much better results than nearest neighbor interpolation, with a modest increase in computational effort.

Bicubic interpolation – assign to the new (x, y) location an intensity that involves the 16 nearest neighbors of the point:

v(x, y) = Σ_{i=0}^{3} Σ_{j=0}^{3} c_ij · x^i · y^j

The 16 coefficients c_ij are obtained by solving a 16×16 linear system written using the intensity levels of the 16 nearest neighbors of (x, y).

Generally bicubic interpolation does a better job of preserving fine detail than the

bilinear technique Bicubic interpolation is the standard used in commercial image editing

programs such as Adobe Photoshop and Corel Photopaint
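To make the resampling schemes above concrete, here is a small NumPy sketch (a toy illustration under the assumptions stated in the comments, not code from the course; bicubic is omitted since it needs the 16-coefficient solve described above):

import numpy as np

def zoom_nearest(img, new_h, new_w):
    # Nearest neighbor: each output pixel takes the intensity of the
    # closest pixel in the original grid.
    h, w = img.shape
    rows = np.clip(np.round(np.arange(new_h) * h / new_h).astype(int), 0, h - 1)
    cols = np.clip(np.round(np.arange(new_w) * w / new_w).astype(int), 0, w - 1)
    return img[rows[:, None], cols]

def zoom_bilinear(img, new_h, new_w):
    # Bilinear: v(x, y) = a*x + b*y + c*x*y + d on each cell, computed
    # here as two linear blends of the 4 nearest neighbors.
    # Assumes new_h, new_w > 1.
    h, w = img.shape
    y = np.arange(new_h) * (h - 1) / (new_h - 1)
    x = np.arange(new_w) * (w - 1) / (new_w - 1)
    y0 = np.floor(y).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(x).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (y - y0)[:, None]; wx = (x - x0)[None, :]
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

img = np.arange(25, dtype=float).reshape(5, 5)   # toy 5x5 "image"
print(zoom_nearest(img, 8, 8).shape, zoom_bilinear(img, 8, 8).shape)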

Figure 2 (a) is the same as Fig 1 (d) which was obtained by reducing the resolution of

the 1250 dpi in Fig 1(a) to 72 dpi (the size shrank from 3692 times 2812 to 213 times 162) and

then zooming the reduced image back to its original size To generate Fig 1(d) nearest

neighbor interpolation was used (both for shrinking and zooming)

Figures 2(b) and (c) were generated using the same steps but using bilinear and bicubic

interpolation, respectively. Figures 2(d)-(f) were obtained by reducing the resolution

from 1250 dpi to 150 dpi (instead of 72 dpi)

Fig. 2 – Interpolation examples for zooming and shrinking (nearest neighbor, bilinear, bicubic).

Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors:

(x+1, y), (x−1, y), (x, y+1), (x, y−1).

This set of pixels, called the 4-neighbors of p, is denoted by N₄(p).

The 4 diagonal neighbors of p have coordinates

(x+1, y+1), (x+1, y−1), (x−1, y+1), (x−1, y−1)

and are denoted N_D(p).

The horizontal, vertical, and diagonal neighbors together are called the 8-neighbors of p, denoted N₈(p).

If (x, y) is on the border of the image, some of the neighbor locations in N_D(p) and N₈(p) fall outside the image.

Adjacency, Connectivity, Regions, Boundaries

Denote by V the set of intensity levels used to define adjacency:

- in a binary image, V ⊆ {0, 1} (V = {0} or V = {1});

- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, …, 255}.

We consider 3 types of adjacency:

(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if q ∈ N₄(p);

(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if q ∈ N₈(p);

(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if

q ∈ N₄(p), or

q ∈ N_D(p) and the set N₄(p) ∩ N₄(q) has no pixels whose values are from V.

Mixed adjacency is a modification of 8-adjacency. It is introduced to eliminate the ambiguities that often arise when 8-adjacency is used. Consider the example:

Binary image, V = {1}:

0 1 1
0 1 0
0 0 1

The pixels at the top (first line) of the arrangement show multiple (ambiguous) 8-adjacency, indicated in the original figure by dashed lines. This ambiguity is removed by using m-adjacency.

A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

(x₀, y₀), (x₁, y₁), …, (xₙ, yₙ),

where (x₀, y₀) = (x, y), (xₙ, yₙ) = (s, t), and pixels (xᵢ, yᵢ) and (xᵢ₋₁, yᵢ₋₁) are adjacent for i = 1, 2, …, n.

The length of the path is n. If (x₀, y₀) = (xₙ, yₙ), the path is closed.

Depending on the type of adjacency considered, the paths are 4-, 8-, or m-paths.

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image. R is a region of the image if R is a connected set.

Two regions R₁ and R₂ are said to be adjacent if R₁ ∪ R₂ forms a connected set. Regions that are not adjacent are said to be disjoint. When referring to regions, only 4- and 8-adjacency are considered.

Suppose that an image contains K disjoint regions R_k, k = 1, …, K, none of which touches the image border. Let R_u denote the union of all the K regions, R_u = ∪_{k=1}^{K} R_k, and let (R_u)^c denote its complement.

We call all the points in R_u the foreground of the image, and the points in (R_u)^c the background of the image.

The boundary (border or contour) of a region R is the set of points that are adjacent to

points in the complement of R (R)c The border of an image is the set of pixels in the

region that have at least one background neighbor This definition is referred to as the

inner border to distinguish it from the notion of outer border which is the corresponding

border in the background


Distance measures

For pixels p, q, and z, with coordinates (x, y), (s, t), and (v, w) respectively, D is a distance function, or metric, if:

(a) D(p, q) ≥ 0 (D(p, q) = 0 iff p = q);

(b) D(p, q) = D(q, p);

(c) D(p, z) ≤ D(p, q) + D(q, z).

The Euclidean distance between p and q is defined as

D_e(p, q) = [(x − s)² + (y − t)²]^{1/2}.

The pixels q for which D_e(p, q) ≤ r are the points contained in a disk of radius r centered at (x, y).

The D₄ distance (also called city-block distance) between p and q is defined as

D₄(p, q) = |x − s| + |y − t|.

The pixels q for which D₄(p, q) ≤ r form a diamond centered at (x, y); for example, the pixels with D₄ distance ≤ 2 from (x, y):

        2
      2 1 2
    2 1 0 1 2
      2 1 2
        2

The pixels with D₄ = 1 are the 4-neighbors of (x, y).

The D₈ distance (also called chessboard distance) between p and q is defined as

D₈(p, q) = max(|x − s|, |y − t|).

The pixels q for which D₈(p, q) ≤ r form a square centered at (x, y).

For example, the pixels with D₈ distance ≤ 2 from (x, y):

2 2 2 2 2
2 1 1 1 2
2 1 0 1 2
2 1 1 1 2
2 2 2 2 2

The pixels with D₈ = 1 are the 8-neighbors of (x, y).

D4 and D8 distances are independent of any paths that might exist between p and q

because these distances involve only the coordinates of the points.
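A minimal sketch of the three metrics in Python (the helper names are illustrative; pixel coordinates are taken as (x, y) tuples):

import numpy as np

def d_euclidean(p, q):
    # D_e(p, q) = [(x - s)^2 + (y - t)^2]^(1/2)
    return np.hypot(p[0] - q[0], p[1] - q[1])

def d4(p, q):
    # city-block distance: |x - s| + |y - t|
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):
    # chessboard distance: max(|x - s|, |y - t|)
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (2, 1)
print(d_euclidean(p, q), d4(p, q), d8(p, q))   # 2.236..., 3, 2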


Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis. Consider the 2×2 images

A = [ a₁₁ a₁₂ ; a₂₁ a₂₂ ]   and   B = [ b₁₁ b₁₂ ; b₂₁ b₂₂ ].

Array product:

A .* B = [ a₁₁b₁₁   a₁₂b₁₂ ; a₂₁b₂₁   a₂₂b₂₂ ]

Matrix product:

A B = [ a₁₁b₁₁ + a₁₂b₂₁   a₁₁b₁₂ + a₁₂b₂₂ ; a₂₁b₁₁ + a₂₂b₂₁   a₂₁b₁₂ + a₂₂b₂₂ ]

We assume array operations unless stated otherwise.

Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether the method is linear or nonlinear. Consider an operator H with

H[f(x, y)] = g(x, y).

H is said to be a linear operator if

H[a·f₁(x, y) + b·f₂(x, y)] = a·H[f₁(x, y)] + b·H[f₂(x, y)]

for any scalars a, b and any images f₁, f₂.

Example of a nonlinear operator: the maximum value of the pixels of an image, H[f] = max f(x, y). Take

f₁ = [ 0 2 ; 2 3 ],   f₂ = [ 6 5 ; 4 7 ],   a = 1,   b = −1.

Then

max(1·f₁ + (−1)·f₂) = max [ −6 −3 ; −2 −4 ] = −2,

while

1·max(f₁) + (−1)·max(f₂) = 3 − 7 = −4 ≠ −2,

so the max operator is not linear.
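The same computation, checked numerically (a sketch using the matrices above):

import numpy as np

f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 5], [4, 7]])
a, b = 1, -1

lhs = np.max(a * f1 + b * f2)           # max of the combined image: -2
rhs = a * np.max(f1) + b * np.max(f2)   # 1*3 + (-1)*7 = -4
print(lhs, rhs)                         # -2 != -4, so max is nonlinear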

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise to a noiseless image f(x, y):

g(x, y) = f(x, y) + η(x, y),

where the noise η(x, y) is uncorrelated and has zero average value.

For a random variable z with mean m, E[(z − m)²] is the variance (E(·) is the expected value). The covariance of two random variables z₁ and z₂ is defined as E[(z₁ − m₁)(z₂ − m₂)]. The two random variables are uncorrelated when their covariance is 0.

Objective: reduce noise by averaging a set of noisy images g_i(x, y) (a technique frequently used in image enhancement):

ḡ(x, y) = (1/K) Σ_{i=1}^{K} g_i(x, y)

If the noise satisfies the properties stated above, we have

E{ḡ(x, y)} = f(x, y)   and   σ²_ḡ(x, y) = (1/K) · σ²_η(x, y),

where E{ḡ(x, y)} is the expected value of ḡ, and σ²_ḡ(x, y) and σ²_η(x, y) are the variances of ḡ and η, respectively. The standard deviation (square root of the variance) at any point of the average image is

σ_ḡ(x, y) = (1/√K) · σ_η(x, y).

As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because E{ḡ(x, y)} = f(x, y), this means that ḡ(x, y) approaches f(x, y) as the number of noisy images used in the averaging process increases.
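A small simulation of this effect (a sketch; the stand-in image f, K, and sigma are arbitrary choices, not data from the course):

import numpy as np

rng = np.random.default_rng(0)
f = np.tile(np.linspace(0.0, 255.0, 256), (256, 1))   # stand-in clean image

K, sigma = 100, 64.0   # number of noisy images and noise std. deviation
g_bar = np.zeros_like(f)
for _ in range(K):
    g_bar += f + rng.normal(0.0, sigma, f.shape)      # g_i = f + eta_i
g_bar /= K

# Residual noise should be close to sigma / sqrt(K) = 6.4.
print(np.std(g_bar - f))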

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis. Figure 3(a) shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels. Figures 3(b)-(f) show the results of averaging 5, 10, 20, 50, and 100 noisy images, respectively.

Fig. 3 Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (top left), and the results of averaging 5, 10, 20, 50, and 100 noisy images.

A frequent application of image subtraction is in the enhancement of differences between

images

Fig. 4 (a) Infrared image of the Washington, D.C. area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images.

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between images (a) and (b); black (0) values in Figure 4(c) indicate locations where there is no difference between images (a) and (b).


Mask mode radiography: g(x, y) = f(x, y) − h(x, y).

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x, y) we can find the differences between h and f, as enhanced detail. Since images are captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries in the area being observed.

Fig. 5 – Angiography subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c), enhanced.

An important application of image multiplication (and division) is shading correction. Suppose that an imaging sensor produces images in the form

g(x, y) = f(x, y) · h(x, y),

where f(x, y) is the "perfect image" and h(x, y) is the shading function. When the shading function is known,

f(x, y) = g(x, y) / h(x, y).

If h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.
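A sketch of the correction when h(x, y) is known (all arrays below are synthetic stand-ins, chosen only for illustration):

import numpy as np

rng = np.random.default_rng(1)
f = rng.uniform(0.0, 255.0, (128, 128))   # stand-in "perfect image"
v = np.linspace(0.2, 1.0, 128)
h = np.outer(v, v)                        # assumed smooth shading pattern
g = f * h                                 # what the sensor produces

f_corrected = g / h                       # division undoes the shading
print(np.allclose(f_corrected, f))        # True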

Fig. 6 Shading correction: (a) shaded image of a tungsten filament, magnified 130 times; (b) shading pattern; (c) corrected image.

Another use of image multiplication is in masking, also called region-of-interest (ROI), operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of the ROI can be arbitrary, although usually it is rectangular.
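A minimal ROI-masking sketch (toy image and mask):

import numpy as np

img = np.arange(36, dtype=float).reshape(6, 6)   # toy image
mask = np.zeros_like(img)                        # 0's elsewhere
mask[1:4, 2:5] = 1.0                             # 1's in a rectangular ROI

roi = img * mask                                 # keeps only the ROI intensities
print(roi)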

Fig. 7 (a) digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b).

In practice most images are displayed using 8 bits, so image values are in the range [0, 255] (for TIFF and JPEG images, conversion to this range is automatic; the conversion depends on the system used). The difference of two images can produce values in the range [−255, 255], and the addition of two images values in the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure: compute

f_m = f − min(f),

which is ≥ 0, and then

f_s = K · f_m / max(f_m),

whose values are in the range [0, K] (K = 255 for 8-bit images).
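A sketch of that procedure (assumes f is not a constant image, so max(f_m) > 0):

import numpy as np

def rescale(f, K=255):
    fm = f - f.min()            # f_m = f - min(f), so fm >= 0
    return K * fm / fm.max()    # f_s takes values in [0, K]

diff = np.array([[-255.0, 0.0], [128.0, 255.0]])   # e.g. a difference image
print(rescale(diff))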


Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the intensity values of the individual pixels: s = T(z),

where z is the intensity of a pixel in the original image and s is the intensity of the corresponding pixel in the processed image.

Neighborhood operations

Let S_xy denote a set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y) based on the values of the intensities of the points in S_xy. For example, if S_xy is a rectangular neighborhood of size m × n centered at (x, y), we can assign the new value of intensity by computing the average value of the pixels in S_xy:

g(x, y) = (1 / (m·n)) Σ_{(r,c) ∈ S_xy} f(r, c)

The net effect is to perform local blurring in the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
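A direct sketch of this neighborhood average (assumes odd m and n; borders are handled here by edge replication, one of several possible choices):

import numpy as np

def local_mean(f, m, n):
    # Average over an m x n rectangular neighborhood S_xy around each pixel.
    pf = np.pad(f.astype(float), ((m // 2,), (n // 2,)), mode="edge")
    out = np.zeros(f.shape, dtype=float)
    for dr in range(m):
        for dc in range(n):
            out += pf[dr:dr + f.shape[0], dc:dc + f.shape[1]]
    return out / (m * n)

img = np.random.default_rng(4).uniform(0, 255, (16, 16))
print(local_mean(img, 3, 3).shape)   # same size, locally blurred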


Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules).

A geometric transformation consists of 2 basic operations

1. a spatial transformation of coordinates;

2. intensity interpolation that assigns intensity values to the spatially transformed pixels.

The coordinate system transformation: (x, y) = T[(v, w)], where (v, w) are pixel coordinates in the original image and (x, y) are pixel coordinates in the transformed image.

For example, T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform:

[x y 1] = [v w 1] · T = [v w 1] · [ t₁₁ t₁₂ 0 ; t₂₁ t₂₂ 0 ; t₃₁ t₃₂ 1 ]      (AT)

that is, x = t₁₁·v + t₂₁·w + t₃₁ and y = t₁₂·v + t₂₂·w + t₃₂.

This transform can scale, rotate, translate, or shear a set of coordinate points, depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1


Affine transformations


The preceding transformations relocate pixels of an image to new locations. To complete the process, we have to assign intensity values to those locations. This task is done by using intensity interpolation (nearest neighbor, bilinear, or bicubic interpolation).

In practice we can use equation (AT) in two basic ways

Forward mapping: scan the pixels of the input image (v, w) and compute the new spatial location (x, y) of the corresponding pixel in the new image using (AT) directly.

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)


Inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image: (v, w) = T⁻¹[(x, y)].

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)
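A sketch of inverse mapping with nearest neighbor interpolation, using the row-vector convention [x y 1] = [v w 1]·T from (AT) (the rotation matrix at the end is an arbitrary illustration):

import numpy as np

def affine_warp(img, T):
    # For each output (x, y), find [v w 1] = [x y 1] T^{-1} in the input
    # image and copy the nearest input pixel; outside pixels stay 0.
    H, W = img.shape
    Tinv = np.linalg.inv(T)
    ys, xs = np.mgrid[0:H, 0:W]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)], axis=1)
    vw = pts @ Tinv
    v = np.round(vw[:, 0]).astype(int)   # input column
    w = np.round(vw[:, 1]).astype(int)   # input row
    ok = (v >= 0) & (v < W) & (w >= 0) & (w < H)
    out = np.zeros(H * W, dtype=img.dtype)
    out[ok] = img[w[ok], v[ok]]
    return out.reshape(H, W)

th = np.deg2rad(30)                      # rotate by 30 degrees
T = np.array([[np.cos(th), np.sin(th), 0.0],
              [-np.sin(th), np.cos(th), 0.0],
              [0.0, 0.0, 1.0]])
img = np.random.default_rng(5).uniform(0, 255, (64, 64))
print(affine_warp(img, T).shape)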


Image registration – align two or more images of the same scene.

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more images taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image


How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors. These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points, both on the input image and on the reference image. A simple model, based on a bilinear approximation, is given by

x = c₁·v + c₂·w + c₃·v·w + c₄
y = c₅·v + c₆·w + c₇·v·w + c₈,

where (v, w) and (x, y) are the coordinates of corresponding tie points (we get an 8×8 linear system for the cᵢ).
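A sketch of the coefficient estimation (the tie-point coordinates below are made up for illustration):

import numpy as np

# 4 tie points: (v, w) in the input image, (x, y) in the reference image.
vw = np.array([[10, 10], [10, 200], [200, 10], [200, 200]], dtype=float)
xy = np.array([[12, 14], [11, 205], [204, 13], [206, 208]], dtype=float)

# Each tie point contributes a row [v, w, v*w, 1]; solving once for x and
# once for y is equivalent to the 8x8 system for c_1..c_8.
A = np.column_stack([vw[:, 0], vw[:, 1], vw[:, 0] * vw[:, 1], np.ones(4)])
cx = np.linalg.solve(A, xy[:, 0])   # c1, c2, c3, c4
cy = np.linalg.solve(A, xy[:, 1])   # c5, c6, c7, c8
print(cx, cy)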


When 4 tie points are insufficient to obtain satisfactory registration, an approach used frequently is to select a larger number of tie points and, using this new set of tie points, subdivide the image into rectangular regions marked by groups of 4 tie points. On the subregions marked by 4 tie points we apply the transformation model described above. The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometric distortion.

(a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c).

Probabilistic Methods

z_i = the values of all possible intensities in an M×N digital image, i = 0, 1, …, L−1.

p(z_k) = the probability that the intensity level z_k occurs in the given image:

p(z_k) = n_k / (M·N),

where n_k is the number of times that intensity z_k occurs in the image (M·N is the total number of pixels in the image). Note that

Σ_{k=0}^{L−1} p(z_k) = 1.

The mean (average) intensity of an image is given by

m = Σ_{k=0}^{L−1} z_k · p(z_k).

The variance of the intensities is

σ² = Σ_{k=0}^{L−1} (z_k − m)² · p(z_k).

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

μ_n(z) = Σ_{k=0}^{L−1} (z_k − m)ⁿ · p(z_k)

(so μ₀(z) = 1, μ₁(z) = 0, and μ₂(z) = σ²).

μ₃(z) > 0: the intensities are biased to values higher than the mean;

μ₃(z) < 0: the intensities are biased to values lower than the mean;

μ₃(z) = 0: the intensities are distributed approximately equally on both sides of the mean.
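These statistics follow directly from the image histogram; a short sketch (random test image, illustrative function name):

import numpy as np

def intensity_stats(img, L=256):
    n_k = np.bincount(img.ravel(), minlength=L)
    p = n_k / img.size                    # p(z_k) = n_k / MN, sums to 1
    z = np.arange(L)
    m = np.sum(z * p)                     # mean intensity
    mu2 = np.sum((z - m) ** 2 * p)        # variance (contrast measure)
    mu3 = np.sum((z - m) ** 3 * p)        # third moment (bias about the mean)
    return m, mu2, mu3

img = np.random.default_rng(2).integers(0, 256, (64, 64), dtype=np.uint8)
print(intensity_stats(img))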

Fig. 1 (a) Low contrast; (b) medium contrast; (c) high contrast.

Figure 1(a) – standard deviation 14.3 (variance = 204.5);

Figure 1(b) – standard deviation 31.6 (variance = 998.6);

Figure 1(c) – standard deviation 49.2 (variance = 2420.6).

Intensity Transformations and Spatial Filtering

g(x, y) = T[f(x, y)]

f(x, y) – input image; g(x, y) – output image; T – an operator on f, defined over a neighborhood of (x, y).

- the neighborhood of the point (x, y), S_xy, is usually rectangular, centered on (x, y), and much smaller in size than the image.


- spatial filtering: the operator T (the neighborhood and the operation applied on it) is called a spatial filter (spatial mask, kernel, template, or window).

When S_xy has size 1×1, T becomes an intensity (gray-level, or mapping) transformation function,

s = T(r),

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k – this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Figure 2 (right) – T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L−1] is obtained using the function s = T(r) = L − 1 − r.

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image
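A one-line sketch of the negative transform:

import numpy as np

def negative(img, L=256):
    # s = L - 1 - r
    return (L - 1) - img

img = np.array([[0, 64], [192, 255]], dtype=np.uint8)
print(negative(img))   # [[255 191] [ 63   0]]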

Original image (left); negative image (right).

Log transformations: s = T(r) = c·log(1 + r), with c a constant and r ≥ 0.

Some basic intensity transformation functions


This transformation maps a narrow range of low intensity values in the input into a wider range of output values. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values; the opposite is true for the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.
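A sketch of the log transform; the base-10 logarithm is assumed here because it reproduces the 0-6.2 output range quoted just below for a spectrum with values up to 1.5 × 10⁶:

import numpy as np

def log_transform(r, c=1.0):
    # s = c * log(1 + r), defined for r >= 0
    return c * np.log10(1.0 + r)

spectrum = np.array([0.0, 10.0, 1.5e6])
print(log_transform(spectrum))   # approx. [0.  1.04  6.18]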

Figure 4(a) – Fourier spectrum, with intensity values in the range 0 to 1.5 × 10⁶.

Figure 4(b) – log transformation of Figure 4(a), with c = 1; range 0 to 6.2.

Fig. 4 (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.

Power-Law (Gamma) Transformations

s = T(r) = c·r^γ   (c, γ – positive constants)

Plots of the gamma transformation for different values of γ (c = 1).

Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. Curves with γ > 1 have the opposite effect of those generated with γ < 1; c = γ = 1 gives the identity transformation.

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
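A sketch of the gamma transformation on a normalized 8-bit image (function name and test values are illustrative):

import numpy as np

def gamma_transform(img, gamma, c=1.0, L=256):
    # s = c * r**gamma, with r scaled to [0, 1] first
    r = img.astype(float) / (L - 1)
    s = c * np.power(r, gamma)
    return np.clip(s * (L - 1), 0, L - 1).astype(np.uint8)

img = np.arange(0, 256, 32, dtype=np.uint8).reshape(2, 4)
print(gamma_transform(img, 3.0))   # gamma > 1 darkens mid-tones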

(a) aerial image; (b)-(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively.

Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

Fig. 5 (a)-(d).

The contrast-stretching transformation through the points (r₁, s₁) and (r₂, s₂) is

T(r) = (s₁/r₁)·r,                                      r ∈ [0, r₁]
       s₁ + ((s₂ − s₁)/(r₂ − r₁))·(r − r₁),            r ∈ [r₁, r₂]
       s₂ + ((L − 1 − s₂)/(L − 1 − r₂))·(r − r₂),      r ∈ [r₂, L − 1]

r₁ = s₁ and r₂ = s₂: identity transformation (no change);

r₁ = r₂, s₁ = 0 and s₂ = L − 1: thresholding function.

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) – contrast stretching obtained by setting (r₁, s₁) = (r_min, 0) and (r₂, s₂) = (r_max, L − 1), where r_min and r_max denote the minimum and maximum gray levels in the image, respectively. Thus, the transformation function stretched the levels linearly from their original range to the full range [0, L − 1].

Figure 5(d) – the thresholding function was used, with (r₁, s₁) = (m, 0) and (r₂, s₂) = (m, L − 1), where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times
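A sketch of the piecewise-linear stretch (assumes 0 < r₁ < r₂ < L − 1 in general; the usage line reproduces the full-range stretch of Figure 5(c)):

import numpy as np

def contrast_stretch(img, r1, s1, r2, s2, L=256):
    r = img.astype(float)
    lo = (s1 / r1) * r if r1 > 0 else np.zeros_like(r)
    mid = s1 + (s2 - s1) / (r2 - r1) * (r - r1)
    hi = s2 + (L - 1 - s2) / (L - 1 - r2) * (r - r2)
    out = np.where(r < r1, lo, np.where(r <= r2, mid, hi))
    return out.astype(np.uint8)

img = np.random.default_rng(3).integers(80, 160, (4, 4), dtype=np.uint8)
# Full-range stretch: (r1, s1) = (min, 0), (r2, s2) = (max, L-1).
print(contrast_stretch(img, img.min(), 0, img.max(), 255))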


Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing (see the sketch after this list):

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities (Figure 3.11(a));

2. brighten (or darken) the desired range of intensities, but leave unchanged all other intensities in the image (Figure 3.11(b)).
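Both approaches in a short sketch (A and B delimit the range of interest; names are illustrative):

import numpy as np

def slice_binary(img, A, B, L=256):
    # approach 1: white inside [A, B], black elsewhere
    return np.where((img >= A) & (img <= B), L - 1, 0).astype(np.uint8)

def slice_preserve(img, A, B, L=256):
    # approach 2: brighten [A, B], leave the rest unchanged
    out = img.copy()
    out[(img >= A) & (img <= B)] = L - 1
    return out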


Figure 6 (left) – aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the scale of intensities. This type of enhancement produces a binary image,

(Transformation functions: one highlights intensity range [A, B] and reduces all other intensities to a lower level; the other highlights range [A, B] and preserves all other intensities.)

which is useful for studying the shape of the flow of the contrast substance (to detect

blockages…).

In Figure 6 (right) the second technique was used: a band of intensities in the mid-gray range around the mean intensity was set to black; the other intensities remain unchanged.

Fig. 6 – Aortic angiogram and intensity-sliced versions.

Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2.

This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit; plane 8 – the highest-order bit).


The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression
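A sketch of bit-plane extraction for an 8-bit image:

import numpy as np

def bit_planes(img):
    # plane 0 = lowest-order bit; plane 7 = highest-order bit (the plane
    # produced by thresholding 0..127 -> 0 and 128..255 -> 1).
    return [(img >> k) & 1 for k in range(8)]

img = np.array([[0, 127], [128, 255]], dtype=np.uint8)
print(bit_planes(img)[7])   # [[0 0] [1 1]]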

  • DIP 1 2017
  • DIP 02 (2017)
Page 6: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Meet LenaThe First Lady of the Internet

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Lenna Soderberg (Sjoumloumlblom) and Jeff Seideman taken in May 1997Imaging Science amp Technology Conference

Digital Image ProcessingDigital Image Processing

Week 1Week 1

What is Digital Image Processing

f(xy) = intensity gray level of the image at spatial point (xy)

x y f(xy) ndash finite discrete quantities -gt digital image

Digital Image Processing = processing digital images by means of a digital computer

A digital image is composed of a finite number of elements (location value of intensity)

These elements are called picture elements image elements pels pixels

( )i j ijx y f

3 f D

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image processing is not limited to the visual band of the electromagnetic (EM) spectrum

Image processing gamma to radio waves ultrasound electron microscopy computer-generated images

image processing image analysis computer vision

Image processing = discipline in which both the input and the output of a process are images

Computer Vision = use computer to emulate human vision (AI) ndashlearning making inferences and take actions based on visual inputs

Image analysis (image understanding) = segmentation partitioning images into regions or objects

(link between image processing and image analysis)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Distinction between image processing image analysis computer vision

low-level mid-level high-level processes

Low-level processes image preprocessing to reduce noise contrast enhancement image sharpening both input and output are images

Mid-level processes segmentation partitioning images into regions or objects description of the objects for computer processing classificationrecognition of individual objects inputs are generally images outputs are attributes extracted from the input image (eg edges contours identity of individual objects)

High-level processes ldquomaking senserdquo of a set of recognized objects performing the cognitive functions associated with vision

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image Processing (Gonzalez + Woods) =

processes whose inputs and outputs are images +

processes that extract attributes from images recognition of individual objects

(low- and mid-level processes)

Example

automated analysis of text = acquiring an image containing text preprocessing the image (enhancement sharpening) extracting (segmenting) the individual characaters describing the characters in a form suitable for computer processing recognition of individual characters

Digital Image ProcessingDigital Image Processing

Week 1Week 1

The Origins of DIP

Newspaper industry pictures were sent by submarine cable between London and New York

Before Bartlane cable picture transmission system (early 1920s) ndash 1 week

With Bartlane system less than 3 hours

Specialized printing equipment coded pictures for cable transmission and reconstructed them at the receiving end (1920s -5 distict levels of gray 1929 ndash 15 levels)

This example is not DIP the computer is not involved

DIP is linked and devolps in the same rhythm as digital computers (data storage display and transmission)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

A digital pictureproduced in 1921from a coded tapeby a telegraph printer withspecial type faces(McFarlane)

A digital picturemade in 1922from a tapepunched after the signals hadcrossed the Atlantic twice(McFarlane)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

1964 Jet Propulsion Laboratory (Pasadena California) processed pictures of the moon transmitted by Ranger 7 (corrected image distortions)

The first picture of the moon by a US spacecraft Ranger 7 took this image July 31 1964 about 17 minutes before impacting the lunar surface(Courtesy of NASA)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

1960-1970 ndash image processing techniques were used in medical image remote Earth resources observations astronomy

1970s ndash invention of CAT (computerized axial tomography)

CAT is a process in which a ring of detectors encircles an object (patient) and a X-ray source concentric with the detector ring rotates about the object The X-ray passes through the patient and the resulted waves are collected at the opposite end by the detectors As the source rotates the procedure is repeated Tomography consists of algorithms that use the sense data to construct an image that represent a ldquoslicerdquo through the object Motion of the object in a direction perpendicular to the ring of detectors produces a set of ldquoslicesrdquo which can be assembled in a 3D information of the inside of the object

Digital Image ProcessingDigital Image Processing

Week 1Week 1

loz geographers use DIP to study pollution patterns from aerial and satellite imagery

loz archeology ndash DIP allowed restoring blurred pictures that recorded rare artifacts lost or damaged after being photographed

loz physics ndash enhance images of experiments (high-energy plasmas electron microscopy)

loz astronomy biology nuclear medicine law enforcement industry

DIP ndash used in solving problems dealing with machine perception ndashextracting from an image information suitable for computer processing (statistical moments Fourier transform coefficients hellip)

loz automatic character recognition industrial machine vision for product assembly and inspection military recognizance automatic processing of fingerprints machine processing of aerial and satellite imagery for weather prediction Internet

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Examples of Fields that Use DIP

Images can be classified according to their sources (visual X-ray hellip)

Energy sources for images electromagnetic energy spectrum acoustic ultrasonic electronic computer- generated

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Electromagnetic waves can be thought as propagating sinusoidal

waves of different wavelength or as a stream of massless particles

each moving in a wavelike pattern with the speed of light Each

massless particle contains a certain amount (bundle) of energy Each

bundle of energy is called a photon If spectral bands are grouped

according to energy per photon we obtain the spectrum shown in the

image above ranging from gamma-rays (highest energy) to radio

waves (lowest energy)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Gamma-Ray Imaging

Nuclear medicine astronomical observations

Nuclear medicine

the approach is to inject a patient with a radioactive isotope that emits gamma rays as it decays

Images are produced from the emissions collected by gamma-ray detectors

Images of this sort are used to locate sites of bone pathology (infections tumors)

PET (positron emision tomography) ndash the patient is given a radioactive isotope that emits positrons as it decays

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Examples of gamma-ray imaging

Bone scan PET image

Digital Image ProcessingDigital Image Processing

Week 1Week 1

X-ray imaging

Medical diagnosticindustry astronomy

A X-ray tube is a vacuum tube with a cathode and an anode The cathode is heated causing free electrons to be released The electrons flows at high speed to the positively charged anode When the electrons strike a nucleus energy is released in the form of a X-ray radiation The energy (penetreting power) of the X-rays is controlled by a voltage applied across the anode and by a curent applied to the filament in the cathode

The intensity of the X-rays is modified by absorbtion as they pass through the patient and the resulting energy falling develops it much in the same way that light develops photographic film

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Angiography = contrast-enhancement radiography

Angiograms = images of blood vessels

A catheter is inserted into an artery or vein in the groin The catheter is threaded into the blood vessel and guided to the area to be studied When it reaches the area to be studied a X-ray contrast medium is injected through the catheter This enhances contrast of the blood vessels and enables radiologist to see anyirregularities or blockages

X-rays are used in CAT (computerized axial tomography)

X-rays used in industrial processes (examine circuit boards for flows in manifacturing)

Industrial CAT scans are useful when the parts can be penetreted by X-rays

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Examples of X-ray imaging

Chest X-rayAortic angiogram

Head CT Cygnus LoopCircuit boards

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Ultraviolet Band

Litography industrial inspection microscopy biological imaging astronomical observations

Ultraviolet light is used in fluorescence microscopy

Ultraviolet light is not visible to human eye but when a photon of ultraviolet radiation collides with an electron in an atom of a fluorescent material it elevates the electron to a higher energy level After that the electron relaxes to a lower level and emits light in the form of a lower-energy photon in the visible (red) light regionFluorescence = emission of light by a substance that has absorbed light or

other electromagentic radiation of a different wavelengthFlourescence microscope = uses an excitation light to irradiate a prepared specimen

and then it separates the much weaker radiating fluorescent light from the brighter excitation light

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Visible and Infrared Bands

Light microscopy astronomy remote sensing industry law enforcement

LANDSAT satellite ndash obtained and transmitted images of the Earth from space for purpose of monitoring environmental conditions on the planet

Weather observations and prediction produce major applications of multispectral image from satellites

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Table 1 ndash Thematic bands in NASArsquos LANDSAT satellite

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Satellite images of Washington DC area in spectral bands of the Table 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Examples of light microscopy

Taxol (anticancer agent)magnified 250X

Cholesterol(40X)

Microprocessor(60X)

Nickel oxidethin film(600X)

Surface of audio CD(1750X)

Organicsuperconductor(450X)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Automated visual inspection of manufactured goods

a bc de f

a ndash a circuit board controllerb ndash packaged pillesc ndash bottlesd ndash air bubbles in clear-plastic producte ndash cerealf ndash image of intraocular implant

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Microwave Band

The dominant aplication of imaging in the microwave band ndash radar

bull radar has the ability to collect data over virtually any region at any time regardless of weather or ambient light conditions

bull some radar waves can penetrate clouds under certain conditions canpenetrate vegetation ice dry sand

bull sometimes radar is the only way to explore inaccessible regions of theEarthrsquos surface

An imaging radar works like a flash camera it provides its own illumination(microwave pulses) to light an area on the ground and take a snapshot image Instead of a camera lens a radar uses an antenna and a digital device torecord the images In a radar image one can see only the microwave energythat was reflected back toward the radar antenna

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Spaceborne radar image of mountains in southeast Tibet

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Radio Band

medicine astronomy

MRI = Magnetic Resonance Imaging

This technique places the patient in a powerful magnet and passes short pulses of radio waves through his or her body

Each pulse causes a responding pulse of radio waves to be emited by the patinet tissues

The location from which these signals originate and their strength are determined by a computer which produces a 2D picture of a section of the patient


MRI images of a human knee (left) and spine (right)


Images of the Crab Pulsar covering the electromagnetic spectrum

Gamma X-ray Optical Infrared Radio


Other Imaging Modalities

Acoustic imaging, electron microscopy, synthetic (computer-generated) imaging

Imaging using sound: geological explorations, industry, medicine

Mineral and oil exploration

For image acquisition over land, one of the main approaches is to use a large truck and a large flat steel plate. The plate is pressed on the ground by the truck, and the truck is vibrated through a frequency spectrum up to 100 Hz. The strength and the speed of the returning sound waves are determined by the composition of the Earth below the surface. These are analyzed by a computer, and images are generated from the resulting analysis.


Biometry - iris


Biometry - fingerprint


Face detection and recognition


Gender identification


Image morphing


Fundamental Steps in DIP

methods whose input and output are images

methods whose inputs are images but whose outputs are attributes extracted from those images


Outputs are images

• image acquisition

• image filtering and enhancement

• image restoration

• color image processing

• wavelets and multiresolution processing

• compression

• morphological processing


Outputs are attributes

• morphological processing

• segmentation

• representation and description

• object recognition


Image acquisition - may involve preprocessing such as scaling

Image enhancement

• manipulating an image so that the result is more suitable than the original for a specific operation

• enhancement is problem oriented

• there is no general 'theory' of image enhancement

• enhancement uses subjective methods for image improvement

• enhancement is based on human subjective preferences regarding what is a "good" enhancement result


Image restoration

• improving the appearance of an image

• restoration is objective – the techniques for restoration are based on mathematical or probabilistic models of image degradation

Color image processing

• fundamental concepts in color models

• basic color processing in a digital domain

Wavelets and multiresolution processing

representing images in various degrees of resolution


Compression

reducing the storage required to save an image or the bandwidth required to transmit it

Morphological processing

• tools for extracting image components that are useful in the representation and description of shape

• a transition from processes that output images to processes that output image attributes


Segmentation

• partitioning an image into its constituent parts or objects

• autonomous segmentation is one of the most difficult tasks of DIP

• the more accurate the segmentation, the more likely recognition is to succeed

Representation and description (almost always follows segmentation)

• segmentation produces either the boundary of a region or all the points in the region itself

• converting the data produced by segmentation to a form suitable for computer processing


• boundary representation: the focus is on external shape characteristics, such as corners or inflections

• complete region: the focus is on internal properties, such as texture or skeletal shape

• description is also called feature extraction – extracting attributes that result in some quantitative information of interest or that are basic for differentiating one class of objects from another

Object recognition

the process of assigning a label (e.g. "vehicle") to an object based on its descriptors

Knowledge database


Simplified diagram of a cross section of the human eye


Three membranes enclose the eye: the cornea and sclera (outer cover), the choroid, and the retina.

The cornea is a tough, transparent tissue that covers the anterior surface of the eye.

Continuous with the cornea, the sclera is an opaque membrane that encloses the remainder of the optic globe.

The choroid lies directly below the sclera. This membrane contains a network of blood vessels (the major source of nutrition to the eye). The choroid is pigmented and helps reduce the amount of light entering the eye. The choroid is divided (at its anterior extreme) into the ciliary body and the iris. The iris contracts and expands to control the amount of light.


The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to the ciliary body (60-70% water, about 6% fat, and more protein than any other tissue in the eye). The lens is colored slightly yellow. The lens absorbs approximately 8% of the visible light (infrared and ultraviolet light are absorbed by proteins in the lens).

The innermost membrane is the retina. When the eye is properly focused, light from an object outside the eye is imaged on the retina. Vision is possible because of the distribution of discrete light receptors on the surface of the retina: cones and rods (6-7 million cones, 75-150 million rods).

Cones: located in the central part of the retina (the fovea); sensitive to colors; vision of detail; each cone is linked to its own nerve. Cone vision = photopic or bright-light vision.

Fovea = the place where the image of the object of interest falls on


Rods: distributed over all the retina surface; several rods are connected to a single nerve; not specialized in detail vision; serve to give a general, overall picture of the field of view; not involved in color vision; sensitive to low levels of illumination.

Blind spot: region without receptors.


Image formation in the eye

Ordinary photographic camera: the lens has a fixed focal length; focusing at various distances is done by modifying the distance between the lens and the image plane (where the film or imaging chip is located).

Human eye: the distance between the lens and the retina (the imaging region) is fixed; the focal length needed to achieve proper focus is obtained by varying the shape of the lens (the fibers in the ciliary body accomplish this, flattening or thickening the lens for distant or near objects, respectively).

distance between lens and retina along the visual axis = 17 mm

range of focal lengths = 14 mm to 17 mm


Illustration of the Mach band effect. Perceived intensity is not a simple function of actual intensity.


All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter


Optical illusions


(Color-word figure: ALBASTRU = blue, VERDE = green, GALBEN = yellow, ROSU = red, PORTOCALIU = orange, VISINIU = burgundy, ALB = white)


Light

• achromatic or monochromatic light – light that is void of color; the only attribute of such light is its intensity, or amount. Gray level is used to describe monochromatic intensity because it ranges from black, through grays, to white.

• chromatic light spans the electromagnetic energy spectrum from approximately 0.43 to 0.79 μm.

Quantities that describe the quality of a chromatic light source:

radiance – the total amount of energy that flows from the light source; it is usually measured in watts (W)

luminance – measured in lumens (lm); gives a measure of the amount of energy an observer perceives from a light source


For example, light emitted from a source operating in the far infrared region of the spectrum could have significant energy (radiance), but an observer would hardly perceive it; its luminance would be almost zero.

brightness – a subjective descriptor of light perception that is practically impossible to measure. It embodies the achromatic notion of intensity and is one of the key factors in describing color sensation.


the physical meaning of f(x, y) is determined by the source of the image; f(x, y) is finite and positive:

0 < f(x, y) < ∞

Image generated from a physical process: f(x, y) is proportional to the energy radiated by the physical source.

f(x, y) is characterized by two components:

i(x, y) = illumination component – the amount of source illumination incident on the scene being viewed

r(x, y) = reflectance component – the amount of illumination reflected by the objects in the scene

f(x, y) = i(x, y) · r(x, y)

0 < i(x, y) < ∞,   0 < r(x, y) < 1


r(x, y) = 0 – total absorption; r(x, y) = 1 – total reflectance

i(x, y) – determined by the illumination source

r(x, y) – determined by the characteristics of the imaged objects

In practice, l = f(x0, y0) satisfies

Lmin ≤ l ≤ Lmax,  where Lmin = imin · rmin and Lmax = imax · rmax

Typical indoor values without additional illumination: Lmin ≈ 10, Lmax ≈ 1000.

The interval [Lmin, Lmax] is called the gray (or intensity) scale. In practice the interval is shifted to [0, L−1], where l = 0 is considered black and l = L−1 is considered white; intermediate values are shades of gray.


Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed scene

Converting a continuous image f to digital form:

- digitizing the coordinate values (x, y) is called sampling

- digitizing the amplitude values f(x, y) is called quantization
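To make the quantization step concrete, here is a minimal Python/NumPy sketch (illustrative, not part of the course material; it assumes sampled intensities already normalized to [0, 1]):

import numpy as np

def quantize(f, k):
    # Quantize intensities in [0, 1] to L = 2**k discrete levels (k <= 8 here).
    L = 2 ** k
    return np.clip(np.round(f * (L - 1)), 0, L - 1).astype(np.uint8)

f = np.random.rand(4, 4)   # stand-in for a sampled image
g = quantize(f, k=3)       # 8 intensity levels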


Continuous image projected onto a sensor array Result of image sampling and quantization


Representing Digital Images

(x, y), x = 0, 1, …, M−1, y = 0, 1, …, N−1 – spatial variables or spatial coordinates

The digital image can be represented as an M × N matrix:

f(x, y) = | f(0,0)      f(0,1)      …   f(0,N−1)   |
          | f(1,0)      f(1,1)      …   f(1,N−1)   |
          | …                                      |
          | f(M−1,0)    f(M−1,1)    …   f(M−1,N−1) |

Each element of this matrix is an image element (pixel). In traditional matrix notation,

A = | a(0,0)      a(0,1)      …   a(0,N−1)   |      with  a(i,j) = f(x = i, y = j) = f(i, j)
    | a(1,0)      a(1,1)      …   a(1,N−1)   |
    | …                                      |
    | a(M−1,0)    a(M−1,1)    …   a(M−1,N−1) |

f(0,0) – the upper left corner of the image


M and N must be positive integers; the number of intensity levels is typically an integer power of 2: L = 2^k

a(i,j) ∈ [0, L−1]

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system. Upper limit – determined by saturation; lower limit – noise.


Number of bits required to store a digitized image

b = M × N × k;  for a square image (M = N), b = N² · k

When an image can have 2^k intensity levels, the image is referred to as a k-bit image (256 discrete intensity values – an 8-bit image).


Spatial and Intensity Resolution

Spatial resolution – the smallest discernible detail in an image

Measures: line pairs per unit distance; dots (pixels) per unit distance

Image resolution = the largest number of discernible line pairs per unit distance

(eg 100 line pairs per mm)

Dots per unit distance are commonly used in printing and publishing

In the US, the measure is expressed in dots per inch (dpi).

(Newspapers are printed with 75 dpi, glossy brochures at 175 dpi.)

Intensity resolution – the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations

L = 2^k – most commonly k = 8

Intensity resolution in practice is given by k (number of bits used to quantize intensity)


Fig. 1 – Reducing spatial resolution: 1250 dpi (upper left), 300 dpi (upper right), 150 dpi (lower left), 72 dpi (lower right)


Reducing the number of gray levels: 256, 128, 64, 32


Reducing the number of gray levels: 16, 8, 4, 2


Image Interpolation – used in zooming, shrinking, rotating, and geometric corrections.

Shrinking and zooming – image resizing – are image resampling methods.

Suppose we have an image of size 500 × 500 pixels that has to be enlarged 1.5 times, to 750 × 750 pixels. One way to do this is to create an imaginary 750 × 750 grid with the same spacing as the original, and then shrink it so that it fits exactly over the original image. The pixel spacing in the 750 × 750 grid will be less than in the original image.

Problem: the assignment of intensity levels in the new 750 × 750 grid.

Nearest neighbor interpolation: assign to every point in the new grid (750 × 750) the intensity of the closest pixel (nearest neighbor) from the old/original grid (500 × 500).

This technique has the tendency to produce undesirable effects, like severe distortion of straight edges.


Bilinear interpolation – assign to the new (x, y) location the intensity v(x, y) = a·x + b·y + c·x·y + d,

where the four coefficients are determined from the 4 equations in 4 unknowns that can

be written using the 4 nearest neighbors of point (x y)

Bilinear interpolation gives much better results than nearest neighbor interpolation with a

modest increase in computational effort
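As an illustration, a minimal Python/NumPy sketch of bilinear zooming by inverse mapping (function names and the row/column convention are assumptions for the example); evaluating the weighted average of the 4 nearest neighbors is equivalent to fitting v(x, y) = a·x + b·y + c·x·y + d locally:

import numpy as np

def bilinear(img, x, y):
    # Interpolate the intensity at the real-valued location (x, y).
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, img.shape[0] - 1)
    y1 = min(y0 + 1, img.shape[1] - 1)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[x0, y0] + dx * (1 - dy) * img[x1, y0] +
            (1 - dx) * dy * img[x0, y1] + dx * dy * img[x1, y1])

def zoom(img, new_shape):
    # For each pixel of the new grid, sample the original grid at the
    # corresponding fractional position (e.g. 500x500 -> 750x750).
    out = np.zeros(new_shape)
    sx = (img.shape[0] - 1) / (new_shape[0] - 1)
    sy = (img.shape[1] - 1) / (new_shape[1] - 1)
    for i in range(new_shape[0]):
        for j in range(new_shape[1]):
            out[i, j] = bilinear(img, i * sx, j * sy)
    return out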

Bicubic interpolation – assign to the new (x, y) location an intensity that involves the 16 nearest neighbors of the point:

v(x, y) = Σ_{i=0}^{3} Σ_{j=0}^{3} c(i,j) · x^i · y^j

The 16 coefficients c(i,j) are obtained by solving the 16×16 linear system written using the intensity levels of the 16 nearest neighbors of (x, y).


Generally bicubic interpolation does a better job of preserving fine detail than the

bilinear technique Bicubic interpolation is the standard used in commercial image editing

programs such as Adobe Photoshop and Corel Photopaint

Figure 2(a) is the same as Fig. 1(d), which was obtained by reducing the resolution of the 1250 dpi image in Fig. 1(a) to 72 dpi (the size shrank from 3692 × 2812 to 213 × 162 pixels) and then zooming the reduced image back to its original size. To generate Fig. 1(d), nearest neighbor interpolation was used (both for shrinking and zooming).

Figures 2(b) and (c) were generated using the same steps, but using bilinear and bicubic interpolation, respectively. Figures 2(d), (e) and (f) were obtained by reducing the resolution from 1250 dpi to 150 dpi (instead of 72 dpi).


Fig. 2 – Interpolation examples for zooming and shrinking (nearest neighbor, bilinear, bicubic)


Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors:

(x+1, y), (x−1, y), (x, y+1), (x, y−1)

This set of pixels, called the 4-neighbors of p, is denoted by N4(p).

The 4 diagonal neighbors of p have coordinates

(x+1, y+1), (x+1, y−1), (x−1, y+1), (x−1, y−1)

and are denoted ND(p).

The horizontal, vertical, and diagonal neighbors together are called the 8-neighbors of p, denoted N8(p).

If (x, y) is on the border of the image, some of the neighbor locations in ND(p) and N8(p) fall outside the image.
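In code, the neighbor sets translate directly; a small illustrative Python sketch (clipping of out-of-image locations is left to the caller, as noted above):

def N4(p):
    x, y = p
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def ND(p):
    x, y = p
    return [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]

def N8(p):
    return N4(p) + ND(p)   # horizontal, vertical and diagonal neighbors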


Adjacency Connectivity Regions Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image, V ⊆ {0, 1} (typically V = {1})

- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, …, 255}

We consider 3 types of adjacency

(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if q ∈ N4(p)

(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if q ∈ N8(p)

(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if

q ∈ N4(p), or

q ∈ ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency. It is introduced to eliminate the ambiguities that often arise when 8-adjacency is used. Consider the example:


binary image (V = {1}):

0 1 1
0 1 0
0 0 1

The three pixels at the top (first line) in the above example show multiple (ambiguous) 8-adjacency, as indicated by the dashed lines in the original figure. This ambiguity is removed by using m-adjacency.


A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

(x0, y0), (x1, y1), …, (xn, yn)

where (x0, y0) = (x, y), (xn, yn) = (s, t), and pixels (xi, yi) and (xi−1, yi−1) are adjacent for i = 1, 2, …, n.

The length of the path is n. If (x0, y0) = (xn, yn), the path is closed.

Depending on the type of adjacency considered, the paths are 4-, 8-, or m-paths.

Let S denote a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting only of pixels from S.

S is a connected set if there is a path in S between any 2 pixels in S.

Let R be a subset of pixels in an image. R is a region of the image if R is a connected set.

Two regions R1 and R2 are said to be adjacent if R1 ∪ R2 forms a connected set. Regions that are not adjacent are said to be disjoint. When referring to regions, only 4- and 8-adjacency are considered.


Suppose that an image contains K disjoint regions R_k, k = 1, …, K, none of which touches the image border.

Let R_u denote the union of all K regions, R_u = R_1 ∪ … ∪ R_K, and let (R_u)^c denote its complement.

We call all the points in R_u the foreground of the image, and the points in (R_u)^c the background of the image.

The boundary (border or contour) of a region R is the set of points that are adjacent to

points in the complement of R, (R)^c. The border of a region is the set of pixels in the region that have at least one background neighbor. This definition is referred to as the

inner border to distinguish it from the notion of outer border which is the corresponding

border in the background


Distance measures

For pixels p, q, and z, with coordinates (x, y), (s, t), and (v, w) respectively, D is a distance function (or metric) if

(a) D(p, q) ≥ 0, with D(p, q) = 0 iff p = q

(b) D(p, q) = D(q, p)

(c) D(p, z) ≤ D(p, q) + D(q, z)

The Euclidean distance between p and q is defined as

De(p, q) = [(x − s)² + (y − t)²]^(1/2)

The pixels q for which De(p, q) ≤ r are the points contained in a disk of radius r centered at (x, y).


The D4 distance (also called city-block distance) between p and q is defined as

D4(p, q) = |x − s| + |y − t|

The pixels q for which D4(p, q) ≤ r form a diamond centered at (x, y). For example, the pixels with D4 ≤ 2:

        2
      2 1 2
    2 1 0 1 2
      2 1 2
        2

The pixels with D4 = 1 are the 4-neighbors of (x y)

The D8 distance (called the chessboard distance) between p and q is defined as

D8(p, q) = max(|x − s|, |y − t|)

The pixels q for which 8( )D p q r form a square centered at (x y)


For example, the pixels with D8 ≤ 2:

2 2 2 2 2
2 1 1 1 2
2 1 0 1 2
2 1 1 1 2
2 2 2 2 2

The pixels with D8 = 1 are the 8-neighbors of (x y)

D4 and D8 distances are independent of any paths that might exist between p and q, because these distances involve only the coordinates of the points.
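The three metrics transcribe directly into code (illustrative Python sketch):

import math

def D4(p, q):   # city-block distance
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def D8(p, q):   # chessboard distance
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def De(p, q):   # Euclidean distance
    return math.hypot(p[0] - q[0], p[1] - q[1])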


Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

Let

A = | a11  a12 |        B = | b11  b12 |
    | a21  a22 |            | b21  b22 |

Array product:

A .* B = | a11·b11   a12·b12 |
         | a21·b21   a22·b22 |

Matrix product:

A · B = | a11·b11 + a12·b21    a11·b12 + a12·b22 |
        | a21·b11 + a22·b21    a21·b12 + a22·b22 |

We assume array operations unless stated otherwise
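In NumPy, for example, the two products are written differently, which makes the distinction easy to check (illustrative snippet):

import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

elementwise = A * B   # array product:  [[ 5, 12], [21, 32]]
matrix      = A @ B   # matrix product: [[19, 22], [43, 50]]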


Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether a method is linear or nonlinear. Consider a general operator H that produces an output image g(x, y) from an input image f(x, y):

H[f(x, y)] = g(x, y)

H is said to be a linear operator if

H[a·f1(x, y) + b·f2(x, y)] = a·H[f1(x, y)] + b·H[f2(x, y)]

for any two images f1, f2 and any two scalars a, b.

Example of a nonlinear operator: the maximum value of the pixels of an image, H[f] = max f(x, y).

Take

f1 = | 0  2 |      f2 = | 6  5 |      a = 1,  b = −1
     | 2  3 |           | 4  7 |


max(a·f1 + b·f2) = max( | 0−6  2−5 | ) = max( | −6  −3 | ) = −2
                        | 2−4  3−7 |          | −2  −4 |

while

a·max(f1) + b·max(f2) = 1·3 + (−1)·7 = −4 ≠ −2

so the max operator is not linear.
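The computation above can be reproduced numerically (a Python sketch of the same check):

import numpy as np

H = lambda f: f.max()   # the max operator

f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 5], [4, 7]])
a, b = 1, -1

lhs = H(a * f1 + b * f2)      # max([[-6, -3], [-2, -4]]) = -2
rhs = a * H(f1) + b * H(f2)   # 1*3 + (-1)*7 = -4
print(lhs, rhs, lhs == rhs)   # -2 -4 False -> H is not linear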

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise:

g(x, y) = f(x, y) + η(x, y)

f(x, y) – the noiseless image; η(x, y) – the noise, uncorrelated and with zero average value.

For a random variable z with mean m E[(z-m)2] is the variance (E( ∙ ) is the expected

value) The covariance of two random variables z1 and z2 is defined as E[(z1-m1) (z2-m2)]

The two random variables are uncorrelated when their covariance is 0


Objective: reduce noise by adding (averaging) a set of noisy images g_i(x, y) (a technique frequently used in image enhancement):

ḡ(x, y) = (1/K) · Σ_{i=1}^{K} g_i(x, y)

If the noise satisfies the properties stated above, we have

E{ḡ(x, y)} = f(x, y)   and   σ²_ḡ(x, y) = (1/K) · σ²_η(x, y)

where E{ḡ(x, y)} is the expected value of ḡ, and σ²_ḡ(x, y) and σ²_η(x, y) are the variances of ḡ and η, respectively. The standard deviation (square root of the variance) at any point in the average image is

σ_ḡ(x, y) = (1/√K) · σ_η(x, y)


As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because E{ḡ(x, y)} = f(x, y), this means that ḡ(x, y) approaches f(x, y) as the number of noisy images used in the averaging process increases.
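A small simulation of this behavior (a sketch with an arbitrary flat test image; the residual standard deviation should shrink roughly like σ/√K):

import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 100.0)                    # noiseless test image

def noisy(f, sigma=64.0):
    return f + rng.normal(0.0, sigma, f.shape)  # zero-mean Gaussian noise

for K in (1, 5, 10, 50, 100):
    g_bar = np.mean([noisy(f) for _ in range(K)], axis=0)
    print(K, np.std(g_bar - f))                 # decreases as K grows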

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis. Figure 3(a) shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels. Figures 3(b)–(f) show the result of averaging 5, 10, 20, 50 and 100 images, respectively.


Fig. 3 – (a) Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise; (b)–(f) results of averaging 5, 10, 20, 50, 100 noisy images.


A frequent application of image subtraction is in the enhancement of differences between

images

Fig. 4 – (a) Infrared image of the Washington DC area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images.

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a) The two images seem almost the same Figure 4(c) is the difference between

images (a) and (b) Black (0) values in Figure (c) indicate locations where there is no

difference between images (a) and (b)


Mask mode radiography:  g(x, y) = f(x, y) − h(x, y)

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images called live images (denoted f(x, y)) of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium.

In g(x, y) we can find the differences between h and f, as enhanced detail.

With images being captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries in the area being observed.


Fig. 5 – Angiography – subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c), enhanced.


An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form

g(x, y) = f(x, y) · h(x, y)

where f(x, y) is the "perfect image" and h(x, y) is the shading function.

When the shading function is known:

f(x, y) = g(x, y) / h(x, y)

When h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.
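A minimal sketch of the division-based correction, with a synthetic shading function (the epsilon guard against division by zero is an implementation detail, not from the slides):

import numpy as np

def correct_shading(g, h, eps=1e-6):
    # Recover f(x, y) = g(x, y) / h(x, y).
    return g / np.maximum(h, eps)

f = np.full((4, 4), 200.0)                               # "perfect" image
h = np.linspace(1.0, 0.5, 4)[None, :] * np.ones((4, 1))  # darkens to the right
g = f * h                                                # sensor output
print(np.allclose(correct_shading(g, h), f))             # True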


Fig. 6 – Shading correction: (a) shaded image of a tungsten filament, magnified about 130×; (b) shading pattern; (c) corrected image.


Another use of image multiplication is in masking, also called region of interest (ROI), operations. The process consists of multiplying a given image by a mask image that has 1s (white) in the ROI and 0s elsewhere. There can be more than one ROI in the mask image, and the shape of the ROI can be arbitrary, although usually it is rectangular.

Fig. 7 – (a) Digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b).


In practice, most images are displayed using 8 bits, so the image values are in the range [0, 255]; for TIFF or JPEG images, conversion to this range is automatic, and the conversion depends on the system used. The difference of two images can produce an image with values in the range [−255, 255], and the addition of two images, values in the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure: compute

f_m = f − min(f)

f_s = K · f_m / max(f_m)

whose values span the full range [0, K] (K = 255 for 8-bit display).
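The procedure translates directly (Python sketch; the guard for constant images is an added detail):

import numpy as np

def rescale(f, K=255):
    # Shift to zero minimum, then scale so the maximum becomes K.
    fm = f - f.min()
    m = fm.max()
    return K * fm / m if m > 0 else fm

d = np.array([[-255.0, 0.0], [100.0, 255.0]])   # e.g. a difference image
print(rescale(d))                               # values now span [0, 255]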


Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels: s = T(z)

where z is the intensity of a pixel in the original image and s is the intensity of the corresponding pixel in the processed image.


Neighborhood operations

Let S_xy denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y) based on the values of the intensities of the points in S_xy. For example, if S_xy is a rectangular neighborhood of size m × n centered at (x, y), we can assign the new value of intensity by computing the average value of the pixels in S_xy:

g(x, y) = (1/(m·n)) · Σ_{(r,c) ∈ S_xy} f(r, c)

The net effect is to perform local blurring in the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
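A direct (unoptimized) Python sketch of this neighborhood average, assuming odd m and n and zero padding at the borders:

import numpy as np

def local_average(f, m, n):
    # Replace each pixel by the mean of its m x n neighborhood.
    pm, pn = m // 2, n // 2
    padded = np.pad(f.astype(np.float64), ((pm, pm), (pn, pn)))  # zero padding
    out = np.zeros(f.shape)
    for i in range(f.shape[0]):
        for j in range(f.shape[1]):
            out[i, j] = padded[i:i + m, j:j + n].mean()
    return out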


Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules)

A geometric transformation consists of 2 basic operations

1. a spatial transformation of coordinates

2. intensity interpolation that assigns intensity values to the spatially transformed pixels

The coordinate system transformation: (x, y) = T[(v, w)]

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image


T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform:

[x  y  1] = [v  w  1] · T = [v  w  1] · | t11  t12  0 |
                                        | t21  t22  0 |      (AT)
                                        | t31  t32  1 |

that is, x = t11·v + t21·w + t31 and y = t12·v + t22·w + t32.

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1


Affine transformations


The preceding transformations relocate pixels on an image to new locations. To complete the process, we have to assign intensity values to those locations. This task is done by using intensity interpolation (nearest neighbor, bilinear, or bicubic interpolation).

In practice, we can use equation (AT) in two basic ways.

Forward mapping: scan the pixels of the input image (v, w) and compute the new spatial location (x, y) of the corresponding pixel in the new image, using (AT) directly.

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)


Inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image:

(v, w) = T^(−1)[(x, y)]

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in numerous commercial implementations of spatial transformations (MATLAB, for example).
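A Python sketch of inverse mapping for the affine transform (AT), with nearest-neighbor intensity assignment (the row-vector convention of (AT) is kept; names are illustrative):

import numpy as np

def affine_warp(img, T, out_shape):
    # Scan output pixels; pull each intensity from the input via T^-1.
    Tinv = np.linalg.inv(T)
    out = np.zeros(out_shape, dtype=img.dtype)
    for x in range(out_shape[0]):
        for y in range(out_shape[1]):
            v, w, _ = np.array([x, y, 1.0]) @ Tinv   # [v w 1] = [x y 1] T^-1
            v, w = int(round(v)), int(round(w))      # nearest neighbor
            if 0 <= v < img.shape[0] and 0 <= w < img.shape[1]:
                out[x, y] = img[v, w]
    return out

T = np.diag([2.0, 2.0, 1.0])            # scale by 2 in both directions
img = np.arange(16.0).reshape(4, 4)
big = affine_warp(img, T, (8, 8))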


Image registration – align two or more images of the same scene.

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more images taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image


How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors. These objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points.

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points, both on the input image and on the reference image. A simple model based on a bilinear approximation is given by

x = c1·v + c2·w + c3·v·w + c4
y = c5·v + c6·w + c7·v·w + c8

where (v, w) and (x, y) are the coordinates of the tie points (we get an 8×8 linear system for the c_i).


When 4 tie points are insufficient to obtain satisfactory registration, an approach used frequently is to select a larger number of tie points and, using this new set, subdivide the image into rectangular regions marked by groups of 4 tie points. On the subregions marked by 4 tie points we apply the transformation model described above. The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometrical distortion.
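Because x and y share the same four basis functions (v, w, vw, 1), the 8×8 system decouples into two 4×4 systems; a Python sketch with hypothetical tie-point values:

import numpy as np

def fit_bilinear_model(src, dst):
    # src holds the 4 (v, w) tie points, dst the corresponding (x, y) points.
    v, w = src[:, 0], src[:, 1]
    A = np.column_stack([v, w, v * w, np.ones(4)])
    cx = np.linalg.solve(A, dst[:, 0])   # c1..c4
    cy = np.linalg.solve(A, dst[:, 1])   # c5..c8
    return cx, cy

def apply_model(cx, cy, v, w):
    basis = np.array([v, w, v * w, 1.0])
    return basis @ cx, basis @ cy

src = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
dst = np.array([[0.0, 0.0], [0.0, 2.0], [2.0, 0.0], [2.0, 2.5]])
cx, cy = fit_bilinear_model(src, dst)
print(apply_model(cx, cy, 0.5, 0.5))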


(a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c)


Probabilistic Methods

z_i = the values of all possible intensities in an M×N digital image, i = 0, 1, …, L−1

p(zk) = the probability that the intensity level zk occurs in the given image

p(z_k) = n_k / (M·N)

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image). Clearly,

Σ_{k=0}^{L−1} p(z_k) = 1

The mean (average) intensity of an image is given by

m = Σ_{k=0}^{L−1} z_k · p(z_k)


The variance of the intensities is

σ² = Σ_{k=0}^{L−1} (z_k − m)² · p(z_k)

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

μ_n(z) = Σ_{k=0}^{L−1} (z_k − m)^n · p(z_k)

(note that μ_0(z) = 1, μ_1(z) = 0, μ_2(z) = σ²)

μ_3(z) > 0: the intensities are biased to values higher than the mean

μ_3(z) < 0: the intensities are biased to values lower than the mean


μ_3(z) = 0: the intensities are distributed approximately equally on both sides of the mean

Fig. 1 – (a) Low contrast; (b) medium contrast; (c) high contrast.

Figure 1(a) – standard deviation 14.3 (variance = 204.5)

Figure 1(b) – standard deviation 31.6 (variance = 998.6)

Figure 1(c) – standard deviation 49.2 (variance = 2420.6)
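These definitions map directly onto a histogram computation (Python/NumPy sketch for an 8-bit image):

import numpy as np

def intensity_stats(img, L=256):
    n = np.bincount(img.ravel(), minlength=L)   # n_k
    p = n / img.size                            # p(z_k)
    z = np.arange(L)
    m = np.sum(z * p)                           # mean intensity
    var = np.sum((z - m) ** 2 * p)              # variance (contrast measure)
    mu3 = np.sum((z - m) ** 3 * p)              # third moment about the mean
    return m, var, mu3

img = np.random.randint(0, 256, (100, 100), dtype=np.uint8)
print(intensity_stats(img))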


Intensity Transformations and Spatial Filtering

g(x, y) = T[f(x, y)]

f(x, y) – input image; g(x, y) – output image; T – an operator on f, defined over a neighborhood of (x, y)

- the neighborhood of the point (x, y), S_xy, usually is rectangular, centered on (x, y), and much smaller in size than the image


- spatial filtering: the operator T (the neighborhood and the operation applied on it) is called a spatial filter (spatial mask, kernel, template, or window)

When the neighborhood S_xy shrinks to the single point (x, y), T becomes an intensity (gray-level or mapping) transformation function

s = T(r)

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2, left: T produces an output image of higher contrast than the original, by darkening the intensity levels below k and brightening the levels above k – this technique is called contrast stretching.

Fig. 2 – Intensity transformation functions: left – contrast stretching; right – thresholding function.


Figure 2, right: T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L−1] is obtained using the function

s = T(r) = L − 1 − r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image
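A one-line Python sketch (illustrative):

import numpy as np

def negative(img, L=256):
    return (L - 1) - img    # s = (L - 1) - r

img = np.array([[0, 64], [128, 255]], dtype=np.uint8)
print(negative(img))        # [[255 191] [127   0]]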


Original image; negative image


Log Transformations:  s = T(r) = c · log(1 + r), where c is a constant and r ≥ 0

Some basic intensity transformation functions


This transformation maps a narrow range of low intensity values in the input into a wider range of output values. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values. The opposite is true for the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a) – intensity values in the range 0 to 1.5 × 10^6.

Figure 4(b) – log transformation of Figure 4(a) with c = 1 – range 0 to 6.2.

Fig. 4 – (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.


Power-Law (Gamma) Transformations

s = T(r) = c · r^γ, where c and γ are positive constants (sometimes written s = c · (r + ε)^γ)

Plots of gamma transformation for different values of γ (c=1)


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher values of input levels. Curves with γ > 1 have the opposite effect of those generated with values of γ < 1.

c = γ = 1: identity transformation.

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
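A sketch of the transformation, computed on intensities normalized to [0, 1] (a common convention, assumed here rather than stated in the slides):

import numpy as np

def gamma_transform(img, c=1.0, gamma=1.0, L=256):
    r = img.astype(np.float64) / (L - 1)        # normalize to [0, 1]
    s = c * np.power(r, gamma)                  # s = c * r**gamma
    return np.clip(s * (L - 1), 0, L - 1).astype(np.uint8)

img = np.linspace(0, 255, 256).astype(np.uint8).reshape(16, 16)
darker = gamma_transform(img, gamma=3.0)        # gamma > 1 darkens
brighter = gamma_transform(img, gamma=0.4)      # gamma < 1 brightens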


(a) aerial image; (b)–(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0 and 5.0, respectively


Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

Fig. 5


T(r) =
  (s1/r1) · r                                      for r ∈ [0, r1]
  s1 + ((s2 − s1)/(r2 − r1)) · (r − r1)            for r ∈ [r1, r2]
  s2 + ((L − 1 − s2)/(L − 1 − r2)) · (r − r2)      for r ∈ [r2, L − 1]


r1 = s1 and r2 = s2: identity transformation (no change)

r1 = r2, s1 = 0, s2 = L − 1: thresholding function

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c) – contrast stretching, obtained by setting the parameters (r1, s1) = (r_min, 0) and (r2, s2) = (r_max, L − 1), where r_min and r_max denote the minimum and maximum gray levels in the image, respectively. Thus, the transformation function stretched the levels linearly from their original range to the full range [0, L − 1].

Figure 5(d) – the thresholding function was used, with (r1, s1) = (m, 0) and (r2, s2) = (m, L − 1), where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times
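A sketch of the piecewise-linear transformation above, assuming 0 < r1 ≤ r2 < L − 1 (degenerate parameter choices, such as the pure thresholding case, would need separate handling):

import numpy as np

def contrast_stretch(img, r1, s1, r2, s2, L=256):
    r = img.astype(np.float64)
    out = np.empty_like(r)
    lo, hi = r <= r1, r >= r2
    mid = ~lo & ~hi
    out[lo] = (s1 / r1) * r[lo]
    out[mid] = s1 + (s2 - s1) / (r2 - r1) * (r[mid] - r1)
    out[hi] = s2 + (L - 1 - s2) / (L - 1 - r2) * (r[hi] - r2)
    return out.astype(np.uint8)

img = np.random.randint(80, 160, (8, 8), dtype=np.uint8)   # low-contrast image
stretched = contrast_stretch(img, img.min(), 0, img.max(), 255)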


Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities (the first transformation shown below)

2. brighten (or darken) the desired range of intensities but leave unchanged all other intensities in the image (the second transformation shown below)


Figure 6 (left) – aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the scale of intensities. This type of enhancement produces a binary image.

First transformation: highlights intensity range [A, B] and reduces all other intensities to a lower level.

Second transformation: highlights range [A, B] and preserves all other intensities.


The binary image is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example).

In Figure 6 (right), the second technique was used: a band of intensities in the mid-gray range around the mean intensity was set to black, while the other intensities remain unchanged.

Fig. 6 – Aortic angiogram and intensity-sliced versions.


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2.

This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit; plane 8 – the highest-order bit).


The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression
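A sketch of extracting a bit plane, with a check of the thresholding equivalence described above (note that the code indexes planes 0–7 while the text numbers them 1–8):

import numpy as np

def bit_plane(img, k):
    # Plane k of an 8-bit image (k = 0 lowest-order bit, k = 7 highest).
    return (img >> k) & 1

img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
plane8 = bit_plane(img, 7)                     # the 8th (highest) bit plane
print(np.array_equal(plane8, (img >= 128).astype(img.dtype)))   # True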

  • DIP 1 2017
  • DIP 02 (2017)
Page 7: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Lenna Soderberg (Sjoumloumlblom) and Jeff Seideman taken in May 1997Imaging Science amp Technology Conference

Digital Image ProcessingDigital Image Processing

Week 1Week 1

What is Digital Image Processing

f(xy) = intensity gray level of the image at spatial point (xy)

x y f(xy) ndash finite discrete quantities -gt digital image

Digital Image Processing = processing digital images by means of a digital computer

A digital image is composed of a finite number of elements (location value of intensity)

These elements are called picture elements image elements pels pixels

( )i j ijx y f

3 f D

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image processing is not limited to the visual band of the electromagnetic (EM) spectrum

Image processing gamma to radio waves ultrasound electron microscopy computer-generated images

image processing image analysis computer vision

Image processing = discipline in which both the input and the output of a process are images

Computer Vision = use computer to emulate human vision (AI) ndashlearning making inferences and take actions based on visual inputs

Image analysis (image understanding) = segmentation partitioning images into regions or objects

(link between image processing and image analysis)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Distinction between image processing image analysis computer vision

low-level mid-level high-level processes

Low-level processes image preprocessing to reduce noise contrast enhancement image sharpening both input and output are images

Mid-level processes segmentation partitioning images into regions or objects description of the objects for computer processing classificationrecognition of individual objects inputs are generally images outputs are attributes extracted from the input image (eg edges contours identity of individual objects)

High-level processes ldquomaking senserdquo of a set of recognized objects performing the cognitive functions associated with vision

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image Processing (Gonzalez + Woods) =

processes whose inputs and outputs are images +

processes that extract attributes from images recognition of individual objects

(low- and mid-level processes)

Example

automated analysis of text = acquiring an image containing text preprocessing the image (enhancement sharpening) extracting (segmenting) the individual characaters describing the characters in a form suitable for computer processing recognition of individual characters

Digital Image ProcessingDigital Image Processing

Week 1Week 1

The Origins of DIP

Newspaper industry pictures were sent by submarine cable between London and New York

Before Bartlane cable picture transmission system (early 1920s) ndash 1 week

With Bartlane system less than 3 hours

Specialized printing equipment coded pictures for cable transmission and reconstructed them at the receiving end (1920s -5 distict levels of gray 1929 ndash 15 levels)

This example is not DIP the computer is not involved

DIP is linked and devolps in the same rhythm as digital computers (data storage display and transmission)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

A digital pictureproduced in 1921from a coded tapeby a telegraph printer withspecial type faces(McFarlane)

A digital picturemade in 1922from a tapepunched after the signals hadcrossed the Atlantic twice(McFarlane)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

1964 Jet Propulsion Laboratory (Pasadena California) processed pictures of the moon transmitted by Ranger 7 (corrected image distortions)

The first picture of the moon by a US spacecraft Ranger 7 took this image July 31 1964 about 17 minutes before impacting the lunar surface(Courtesy of NASA)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

1960-1970 ndash image processing techniques were used in medical image remote Earth resources observations astronomy

1970s ndash invention of CAT (computerized axial tomography)

CAT is a process in which a ring of detectors encircles an object (patient) and a X-ray source concentric with the detector ring rotates about the object The X-ray passes through the patient and the resulted waves are collected at the opposite end by the detectors As the source rotates the procedure is repeated Tomography consists of algorithms that use the sense data to construct an image that represent a ldquoslicerdquo through the object Motion of the object in a direction perpendicular to the ring of detectors produces a set of ldquoslicesrdquo which can be assembled in a 3D information of the inside of the object

Digital Image ProcessingDigital Image Processing

Week 1Week 1

loz geographers use DIP to study pollution patterns from aerial and satellite imagery

loz archeology ndash DIP allowed restoring blurred pictures that recorded rare artifacts lost or damaged after being photographed

loz physics ndash enhance images of experiments (high-energy plasmas electron microscopy)

loz astronomy biology nuclear medicine law enforcement industry

DIP ndash used in solving problems dealing with machine perception ndashextracting from an image information suitable for computer processing (statistical moments Fourier transform coefficients hellip)

loz automatic character recognition industrial machine vision for product assembly and inspection military recognizance automatic processing of fingerprints machine processing of aerial and satellite imagery for weather prediction Internet

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Examples of Fields that Use DIP

Images can be classified according to their sources (visual X-ray hellip)

Energy sources for images electromagnetic energy spectrum acoustic ultrasonic electronic computer- generated

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Electromagnetic waves can be thought as propagating sinusoidal

waves of different wavelength or as a stream of massless particles

each moving in a wavelike pattern with the speed of light Each

massless particle contains a certain amount (bundle) of energy Each

bundle of energy is called a photon If spectral bands are grouped

according to energy per photon we obtain the spectrum shown in the

image above ranging from gamma-rays (highest energy) to radio

waves (lowest energy)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Gamma-Ray Imaging

Nuclear medicine astronomical observations

Nuclear medicine

the approach is to inject a patient with a radioactive isotope that emits gamma rays as it decays

Images are produced from the emissions collected by gamma-ray detectors

Images of this sort are used to locate sites of bone pathology (infections tumors)

PET (positron emision tomography) ndash the patient is given a radioactive isotope that emits positrons as it decays

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Examples of gamma-ray imaging

Bone scan PET image

Digital Image ProcessingDigital Image Processing

Week 1Week 1

X-ray imaging

Medical diagnosticindustry astronomy

A X-ray tube is a vacuum tube with a cathode and an anode The cathode is heated causing free electrons to be released The electrons flows at high speed to the positively charged anode When the electrons strike a nucleus energy is released in the form of a X-ray radiation The energy (penetreting power) of the X-rays is controlled by a voltage applied across the anode and by a curent applied to the filament in the cathode

The intensity of the X-rays is modified by absorbtion as they pass through the patient and the resulting energy falling develops it much in the same way that light develops photographic film

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Angiography = contrast-enhancement radiography

Angiograms = images of blood vessels

A catheter is inserted into an artery or vein in the groin The catheter is threaded into the blood vessel and guided to the area to be studied When it reaches the area to be studied a X-ray contrast medium is injected through the catheter This enhances contrast of the blood vessels and enables radiologist to see anyirregularities or blockages

X-rays are used in CAT (computerized axial tomography)

X-rays used in industrial processes (examine circuit boards for flows in manifacturing)

Industrial CAT scans are useful when the parts can be penetreted by X-rays

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Examples of X-ray imaging

Chest X-rayAortic angiogram

Head CT Cygnus LoopCircuit boards

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Ultraviolet Band

Litography industrial inspection microscopy biological imaging astronomical observations

Ultraviolet light is used in fluorescence microscopy

Ultraviolet light is not visible to human eye but when a photon of ultraviolet radiation collides with an electron in an atom of a fluorescent material it elevates the electron to a higher energy level After that the electron relaxes to a lower level and emits light in the form of a lower-energy photon in the visible (red) light regionFluorescence = emission of light by a substance that has absorbed light or

other electromagentic radiation of a different wavelengthFlourescence microscope = uses an excitation light to irradiate a prepared specimen

and then it separates the much weaker radiating fluorescent light from the brighter excitation light

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Visible and Infrared Bands

Light microscopy astronomy remote sensing industry law enforcement

LANDSAT satellite ndash obtained and transmitted images of the Earth from space for purpose of monitoring environmental conditions on the planet

Weather observations and prediction produce major applications of multispectral image from satellites

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Table 1 ndash Thematic bands in NASArsquos LANDSAT satellite

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Satellite images of Washington DC area in spectral bands of the Table 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Examples of light microscopy

Taxol (anticancer agent)magnified 250X

Cholesterol(40X)

Microprocessor(60X)

Nickel oxidethin film(600X)

Surface of audio CD(1750X)

Organicsuperconductor(450X)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Automated visual inspection of manufactured goods

a bc de f

a ndash a circuit board controllerb ndash packaged pillesc ndash bottlesd ndash air bubbles in clear-plastic producte ndash cerealf ndash image of intraocular implant

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Microwave Band

The dominant aplication of imaging in the microwave band ndash radar

bull radar has the ability to collect data over virtually any region at any time regardless of weather or ambient light conditions

bull some radar waves can penetrate clouds under certain conditions canpenetrate vegetation ice dry sand

bull sometimes radar is the only way to explore inaccessible regions of theEarthrsquos surface

An imaging radar works like a flash camera it provides its own illumination(microwave pulses) to light an area on the ground and take a snapshot image Instead of a camera lens a radar uses an antenna and a digital device torecord the images In a radar image one can see only the microwave energythat was reflected back toward the radar antenna

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Spaceborne radar image of mountains in southeast Tibet

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Radio Band

medicine astronomy

MRI = Magnetic Resonance Imaging

This technique places the patient in a powerful magnet and passes short pulses of radio waves through his or her body

Each pulse causes a responding pulse of radio waves to be emited by the patinet tissues

The location from which these signals originate and their strength are determined by a computer which produces a 2D picture of a section of the patient

Digital Image ProcessingDigital Image Processing

Week 1Week 1

MRI images of a human knee (left) and spine (right)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Images of the Crab Pulsar covering the electromagnetic spectrum

Gamma X-ray Optical Infrared Radio

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Other Imaging Modalities

acoustic imaging electron microscopy synthetic (computer-generated) imaging

Imaging using sound geological explorations industry medicine

Mineral and oil exploration

For image acquisition over land one of the main approaches is to use a large truck an a large flat steel plate The plate is pressed on the ground by the truck and the truck is vibratedthrough a frequency spectrum up to 100 Hz The strength and the speed of the returning sound waves are determined by the composition of the Earth below the surface These are analysed by a computer and images are generated from the resulting analysis

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Biometry - iris

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Biometry - fingerprint

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Face detection and recognition

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Gender identification

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image morphing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Fundamental Steps in DIP

methods whose input and output are images

methods whose inputs are images but whose outputs are attributes extracted from those images

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Outputs are images

bull image acquisition

bull image filtering and enhancement

bull image restoration

bull color image processing

bull wavelets and multiresolution processing

bull compression

bull morphological processing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Outputs are attributes

bull morphological processing

bull segmentation

bull representation and description

bull object recognition

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image acquisition - may involve preprocessing such as scaling

Image enhancement

bull manipulating an image so that the result is more suitable than the original for a specific operation

bull enhancement is problem oriented

bull there is no general sbquotheoryrsquo of image enhancement

bull enhancement use subjective methods for image emprovement

bull enhancement is based on human subjective preferences regarding what is a bdquogoodrdquo enhancement result

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image restoration

bull improving the appearance of an image

bull restoration is objective - the techniques for restoration are based on mathematical or probabilistic models of image degradation

Color image processing

bull fundamental concept in color models

bull basic color processing in a digital domain

Wavelets and multiresolution processing

representing images in various degree of resolution


Compression

reducing the storage required to save an image or the bandwidth required to transmit it

Morphological processing

• tools for extracting image components that are useful in the representation and description of shape

• a transition from processes that output images to processes that output image attributes


Segmentation

• partitioning an image into its constituent parts or objects

• autonomous segmentation is one of the most difficult tasks of DIP

• the more accurate the segmentation, the more likely recognition is to succeed

Representation and description (almost always follows segmentation)

• segmentation produces either the boundary of a region or all the points in the region itself

• converting the data produced by segmentation to a form suitable for computer processing


• boundary representation: the focus is on external shape characteristics, such as corners or inflections

• complete region: the focus is on internal properties, such as texture or skeletal shape

• description is also called feature extraction - extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another

Object recognition

the process of assigning a label (e.g. "vehicle") to an object based on its descriptors

Knowledge database


Simplified diagram of a cross section of the human eye


Three membranes enclose the eye: the cornea and sclera (outer cover), the choroid, and the retina.

The cornea is a tough transparent tissue that covers the anterior surface of the eye

Continuous with the cornea, the sclera is an opaque membrane that encloses the remainder of the optic globe.

The choroid lies directly below the sclera. This membrane contains a network of blood vessels (major nutrition of the eye). The choroid is pigmented and helps reduce the amount of light entering the eye. The choroid is divided (at its anterior extreme) into the ciliary body and the iris. The iris contracts and expands to control the amount of light.


The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to the ciliary body (60-70% water, 6% fat, the rest mostly protein). The lens is colored slightly yellow. The lens absorbs approximately 8% of the visible light (infrared and ultraviolet light are absorbed by proteins in the lens).

The innermost membrane is the retina. When the eye is properly focused, light from an object outside the eye is imaged on the retina. Vision is possible because of the distribution of discrete light receptors on the surface of the retina: cones and rods (6-7 million cones, 75-150 million rods).

Cones: located in the central part of the retina (fovea); sensitive to colors; vision of detail; each cone is linked to its own nerve; cone vision = photopic or bright-light vision.

Fovea = the place where the image of the object of interest falls.


Rods: distributed over all the retina surface; several rods are connected to a single nerve; not specialized in detail vision; serve to give a general, overall picture of the field of view; not involved in color vision; sensitive to low levels of illumination.

Blind spot region without receptors


Image formation in the eye

Ordinary photographic camera: the lens has fixed focal length; focusing at various distances is done by modifying the distance between the lens and the image plane (where the film or imaging chip is located).

Human eye: the distance between the lens and the retina (the imaging region) is fixed; the focal length needed to achieve proper focus is obtained by varying the shape of the lens (the fibers in the ciliary body accomplish this, flattening or thickening the lens for distant or near objects, respectively).

distance between lens and retina along the visual axis = 17 mm

range of focal lengths = 14 mm to 17 mm


Illustration of the Mach band effect. Perceived intensity is not a simple function of actual intensity.


All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter


Optical illusions


(Slide: color names printed in non-matching colors - ALBASTRU/BLUE, VERDE/GREEN, GALBEN/YELLOW, ROSU/RED, PORTOCALIU/ORANGE, VISINIU/BURGUNDY, ALB/WHITE)


Light

• achromatic or monochromatic light - light that is void of color; the attribute of such light is its intensity, or amount; the term gray level is used to describe monochromatic intensity because it ranges from black, to grays, and finally to white

• chromatic light spans the electromagnetic energy spectrum from approximately 0.43 to 0.79 μm

Quantities that describe the quality of a chromatic light source:

• radiance: the total amount of energy that flows from the light source; it is usually measured in watts (W)

• luminance: measured in lumens (lm); gives a measure of the amount of energy an observer perceives from a light source


For example light emitted from a source operating in the far infraredregion of the spectrum could have significant energy (radiance) butan observer would hardly perceive it its luminance would be almostzero

brightness: a subjective descriptor of light perception that is practically impossible to measure. It embodies the achromatic notion of intensity and is one of the key factors in describing color sensation.


The physical meaning of f(x, y) is determined by the source of the image.

For an image generated from a physical process, f(x, y) is proportional to the energy radiated by the physical source, and thus positive and finite: 0 < f(x, y) < ∞.

f(x, y) - characterized by two components:

i(xy) = illumination component the amount of source illumination incident on the scene being viewed

r(xy) = reflectance component the amount of illumination reflected by the objects in the scene

f(x, y) = i(x, y) · r(x, y)

0 < i(x, y) < ∞,  0 < r(x, y) < 1


r(x, y) = 0 - total absorption; r(x, y) = 1 - total reflectance

i(x, y) - determined by the illumination source

r(x, y) - determined by the characteristics of the imaged objects

The interval [L_min, L_max] is called the gray (or intensity) scale.

In practice:

L_min = i_min · r_min ≤ l = f(x, y) ≤ i_max · r_max = L_max

indoor values without additional illumination: L_min ≈ 10, L_max ≈ 1000

Common practice: shift the interval [L_min, L_max] to [0, L-1], where l = 0 is considered black and l = L-1 is considered white; all intermediate values are shades of gray.


Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed scene

converting a continuous image f to digital form

- digitizing the coordinate values (x, y) is called sampling

- digitizing the amplitude values f(x, y) is called quantization


Continuous image projected onto a sensor array; result of image sampling and quantization


Representing Digital Images

(x, y), x = 0, 1, ..., M-1, y = 0, 1, ..., N-1 - spatial variables or spatial coordinates

f(x, y) =
[ f(0,0)      f(0,1)      ...  f(0,N-1)
  f(1,0)      f(1,1)      ...  f(1,N-1)
  ...
  f(M-1,0)    f(M-1,1)    ...  f(M-1,N-1) ]

Each element of this matrix is called an image element or pixel. In matrix notation:

A = [ a(0,0)      a(0,1)      ...  a(0,N-1)
      a(1,0)      a(1,1)      ...  a(1,N-1)
      ...
      a(M-1,0)    a(M-1,1)    ...  a(M-1,N-1) ],   a(i,j) = f(x=i, y=j) = f(i,j)

f(0,0) - the upper left corner of the image


M, N positive integers; number of intensity levels L = 2^k

a(i, j) ∈ [0, L-1]

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system Upper limit ndash determined by saturation lower limit - noise


Number of bits required to store a digitized image:

b = M × N × k; for M = N, b = N² · k

When an image can have 2^k intensity levels, the image is referred to as a k-bit image; an image with 256 discrete intensity values is an 8-bit image.


Spatial and Intensity Resolution Spatial resolution ndash the smallest discernible detail in an image

Measures: line pairs per unit distance; dots (pixels) per unit distance

Image resolution = the largest number of discernible line pairs per unit distance

(eg 100 line pairs per mm)

Dots per unit distance are commonly used in printing and publishing

In the US this measure is expressed in dots per inch (dpi)

(newspapers are printed at 75 dpi, glossy brochures at 175 dpi)

Intensity resolution ndash the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations

L = 2^k; most common: k = 8

Intensity resolution in practice is given by k (number of bits used to quantize intensity)


Fig 1. Reducing spatial resolution: 1250 dpi (upper left), 300 dpi (upper right), 150 dpi (lower left), 72 dpi (lower right)


Reducing the number of gray levels 256 128 64 32


Reducing the number of gray levels 16 8 4 2


Image Interpolation - used in zooming, shrinking, rotating, and geometric corrections

Shrinking, zooming - image resizing - image resampling methods

Interpolation is the process of using known data to estimate values at unknown locations

Suppose we have an image of size 500 × 500 pixels that has to be enlarged 1.5 times, to 750 × 750 pixels. One way to do this is to create an imaginary 750 × 750 grid with the same pixel spacing as the original and then shrink it so that it fits exactly over the original image. The pixel spacing in the 750 × 750 grid will be less than in the original image.

Problem: assignment of intensity levels in the new 750 × 750 grid.

Nearest neighbor interpolation: assign to every point in the new grid (750 × 750) the intensity of the closest pixel (nearest neighbor) from the old/original grid (500 × 500). This technique has the tendency to produce undesirable effects, like severe distortion of straight edges.


Bilinear interpolation - assign to the new (x, y) location the intensity

v(x, y) = a·x + b·y + c·x·y + d

where the four coefficients are determined from the 4 equations in 4 unknowns that can be written using the 4 nearest neighbors of point (x, y). Bilinear interpolation gives much better results than nearest neighbor interpolation, with a modest increase in computational effort.

Bicubic interpolation - assign to the new (x, y) location an intensity that involves the 16 nearest neighbors of the point:

v(x, y) = Σ_{i=0..3} Σ_{j=0..3} c_ij · x^i · y^j

The 16 coefficients c_ij are obtained by solving a 16×16 linear system, whose equations match v(x, y) to the intensity levels of the 16 nearest neighbors of (x, y).


Generally, bicubic interpolation does a better job of preserving fine detail than the bilinear technique. Bicubic interpolation is the standard used in commercial image editing programs, such as Adobe Photoshop and Corel Photopaint.
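To make the two simplest schemes concrete, here is a minimal Python/numpy sketch of nearest neighbor and bilinear resizing via inverse mapping (the function names and the random stand-in image are illustrative assumptions, not from the course):

import numpy as np

def resize_nearest(img, new_h, new_w):
    # nearest neighbor: each output pixel copies the closest input pixel
    h, w = img.shape
    rows = np.arange(new_h) * h // new_h          # map output row -> input row
    cols = np.arange(new_w) * w // new_w
    return img[rows[:, None], cols[None, :]]

def resize_bilinear(img, new_h, new_w):
    # bilinear: v(x, y) = a*x + b*y + c*x*y + d, fixed by the 4 nearest neighbors
    h, w = img.shape
    y = np.arange(new_h) * (h - 1) / (new_h - 1)  # fractional input coordinates
    x = np.arange(new_w) * (w - 1) / (new_w - 1)
    y0 = np.floor(y).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(x).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (y - y0)[:, None]
    wx = (x - x0)[None, :]
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

img = np.random.randint(0, 256, (500, 500)).astype(float)  # stand-in image
big_nn = resize_nearest(img, 750, 750)    # blocky; may distort straight edges
big_bl = resize_bilinear(img, 750, 750)   # smoother result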

Figure 2(a) is the same as Fig 1(d), which was obtained by reducing the resolution of the 1250 dpi image in Fig 1(a) to 72 dpi (the size shrank from 3692 × 2812 to 213 × 162) and then zooming the reduced image back to its original size. To generate Fig 1(d), nearest neighbor interpolation was used (both for shrinking and zooming). Figures 2(b) and (c) were generated using the same steps but with bilinear and bicubic interpolation, respectively. Figures 2(d), (e), (f) were obtained by reducing the resolution from 1250 dpi to 150 dpi (instead of 72 dpi).


Fig 2 - Interpolation examples for zooming and shrinking (nearest neighbor, bilinear, bicubic)


Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors: (x+1, y), (x-1, y), (x, y+1), (x, y-1). This set of pixels, called the 4-neighbors of p, is denoted by N4(p).

The 4 diagonal neighbors of p have coordinates (x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1) and are denoted ND(p).

The horizontal, vertical, and diagonal neighbors together are called the 8-neighbors of p, denoted N8(p).

If (x, y) is on the border of the image, some of the neighbor locations in ND(p) and N8(p) fall outside the image.


Adjacency, Connectivity, Regions, Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image, V ⊆ {0, 1} (V = {0} or V = {1})

- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, 1, ..., 255}

We consider 3 types of adjacency

(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if q ∈ N4(p)

(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if q ∈ N8(p)

(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if

q ∈ N4(p), or

q ∈ ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency It is introduced to eliminate the

ambiguities that often arise when 8-adjacency is used Consider the example


binary image, V = {1}:

0 1 1
0 1 0
0 0 1

The three pixels at the top (first line) in the above example show multiple (ambiguous)

8-adjacency as indicated by the dashed lines This ambiguity is removed by using

m-adjacency


A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

(x_0, y_0), (x_1, y_1), ..., (x_n, y_n)

where (x_0, y_0) = (x, y), (x_n, y_n) = (s, t), and pixels (x_i, y_i) and (x_{i-1}, y_{i-1}) are adjacent for i = 1, 2, ..., n.

The length of the path is n. If (x_0, y_0) = (x_n, y_n), the path is closed.

Depending on the type of adjacency considered, the paths are 4-, 8-, or m-paths.

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image R is a region of the image if R is a connected set

Two regions R1 and R2 are said to be adjacent if R1 ∪ R2 forms a connected set. Regions that are not adjacent are said to be disjoint. When referring to regions, only 4- and 8-adjacency are considered.


Suppose that an image contains K disjoint regions R_k, k = 1, ..., K, none of which touches the image border. Let R_u denote the union of all the K regions, R_u = ∪_{k=1..K} R_k, and let (R_u)^c denote its complement.

We call all the points in R_u the foreground of the image, and the points in (R_u)^c the background of the image.

The boundary (border or contour) of a region R is the set of points that are adjacent to points in the complement of R, (R)^c. Equivalently, the border of a region is the set of pixels in the region that have at least one background neighbor. This definition is referred to as the inner border, to distinguish it from the notion of outer border, which is the corresponding border in the background.


Distance measures

For pixels p, q and z, with coordinates (x, y), (s, t) and (v, w) respectively, D is a distance function or metric if

(a) D(p, q) ≥ 0, and D(p, q) = 0 iff p = q

(b) D(p, q) = D(q, p)

(c) D(p, z) ≤ D(p, q) + D(q, z)

The Euclidean distance between p and q is defined as

D_e(p, q) = [(x - s)² + (y - t)²]^(1/2)

The pixels q for which D_e(p, q) ≤ r are the points contained in a disk of radius r centered at (x, y).


The D4 distance (also called city-block distance) between p and q is defined as

D4(p, q) = |x - s| + |y - t|

The pixels q for which D4(p, q) ≤ r form a diamond centered at (x, y). For example, for r = 2:

        2
      2 1 2
    2 1 0 1 2
      2 1 2
        2

The pixels with D4 = 1 are the 4-neighbors of (x y)

The D8 distance (also called the chessboard distance) between p and q is defined as

D8(p, q) = max(|x - s|, |y - t|)

The pixels q for which D8(p, q) ≤ r form a square centered at (x, y).


For example, for r = 2:

2 2 2 2 2
2 1 1 1 2
2 1 0 1 2
2 1 1 1 2
2 2 2 2 2

The pixels with D8 = 1 are the 8-neighbors of (x, y).

D4 and D8 distances are independent of any paths that might exist between p and q, because these distances involve only the coordinates of the points.
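A short Python sketch of the three metrics (the sample points are made up; the D4 ≤ 2 test reproduces the diamond shown above):

import numpy as np

def d_e(p, q):
    # Euclidean: D_e(p, q) = [(x - s)^2 + (y - t)^2]^(1/2)
    return np.hypot(p[0] - q[0], p[1] - q[1])

def d4(p, q):
    # city-block: D4(p, q) = |x - s| + |y - t|
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):
    # chessboard: D8(p, q) = max(|x - s|, |y - t|)
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (2, 1)
print(d_e(p, q), d4(p, q), d8(p, q))      # 2.236..., 3, 2
# the points with D4 <= 2 around the origin form the diamond shown above
diamond = [(x, y) for x in range(-2, 3) for y in range(-2, 3)
           if d4((0, 0), (x, y)) <= 2]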


Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

Consider two 2×2 images

A = [ a11  a12          B = [ b11  b12
      a21  a22 ]              b21  b22 ]

Array product:

A .* B = [ a11·b11  a12·b12
           a21·b21  a22·b22 ]

Matrix product:

A · B = [ a11·b11 + a12·b21    a11·b12 + a12·b22
          a21·b11 + a22·b21    a21·b12 + a22·b22 ]

We assume array operations unless stated otherwise


Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether it is

linear or nonlinear

H[f(x, y)] = g(x, y)

H is said to be a linear operator if

H[a·f1(x, y) + b·f2(x, y)] = a·H[f1(x, y)] + b·H[f2(x, y)]

for all scalars a, b and all images f1, f2.

Example of a nonlinear operator: H[f] = max{f(x, y)}, the maximum value of the pixels of image f. Take

f1 = [ 0  2        f2 = [ 6  5        a = 1,  b = -1
       2  3 ]             4  7 ]


max{ 1·f1 + (-1)·f2 } = max{ [ -6  -3      } = -2
                               -2  -4 ]

1·max{f1} + (-1)·max{f2} = 3 + (-1)·7 = -4

Since -2 ≠ -4, the max operator is nonlinear.
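The counterexample can be verified numerically; a small numpy sketch of the same computation:

import numpy as np

f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 5], [4, 7]])
a, b = 1, -1

lhs = np.max(a * f1 + b * f2)             # H[a*f1 + b*f2] -> -2
rhs = a * np.max(f1) + b * np.max(f2)     # a*H[f1] + b*H[f2] -> -4
print(lhs, rhs, lhs == rhs)               # -2 -4 False: max is nonlinear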

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise:

g(x, y) = f(x, y) + η(x, y)

f(x, y) - the noiseless image; η(x, y) - the noise, uncorrelated and with 0 average value.

For a random variable z with mean m, E[(z - m)²] is the variance (E(·) is the expected value). The covariance of two random variables z1 and z2 is defined as E[(z1 - m1)(z2 - m2)]. The two random variables are uncorrelated when their covariance is 0.


Objective: reduce noise by adding (averaging) a set of noisy images g_i(x, y) (technique frequently used in image enhancement):

ḡ(x, y) = (1/K) Σ_{i=1..K} g_i(x, y)

If the noise satisfies the properties stated above, we have

E{ḡ(x, y)} = f(x, y),   σ²_ḡ(x, y) = (1/K) · σ²_η(x, y)

where E{ḡ(x, y)} is the expected value of ḡ, and σ²_ḡ(x, y) and σ²_η(x, y) are the variances of ḡ and η, respectively. The standard deviation (square root of the variance) at any point in the average image is

σ_ḡ(x, y) = (1/√K) · σ_η(x, y)


As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because E{ḡ(x, y)} = f(x, y), this means that ḡ(x, y) approaches f(x, y) as the number of noisy images used in the averaging process increases.
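A quick numerical check of this behavior (the constant stand-in image and the noise level are assumptions for illustration; the observed standard deviation should fall roughly as 1/√K):

import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 100.0)              # noiseless stand-in image
sigma = 64.0                              # std. dev. of the additive Gaussian noise

for K in (1, 5, 10, 20, 50, 100):
    g = [f + rng.normal(0.0, sigma, f.shape) for _ in range(K)]
    g_bar = np.mean(g, axis=0)            # g_bar = (1/K) * sum of g_i
    print(K, round(g_bar.std(), 2))       # decreases roughly as sigma / sqrt(K)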

An important application of image averaging is in the field of astronomy, where imaging under very low light levels frequently causes sensor noise to render single images virtually useless for analysis. Figure 3(a) shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels. Figures 3(b)-(f) show the result of averaging 5, 10, 20, 50 and 100 images, respectively.


Fig 3 - (a) Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (upper left); (b)-(f) results of averaging 5, 10, 20, 50, 100 noisy images.


A frequent application of image subtraction is in the enhancement of differences between images.

Fig 4 - (a) Infrared image of the Washington DC area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images.

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between images (a) and (b). Black (0) values in Figure 4(c) indicate locations where there is no difference between images (a) and (b).


Mask mode radiography: g(x, y) = f(x, y) - h(x, y)

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images, called live images (denoted f(x, y)), of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium.

In g(x, y) we can find the differences between h and f as enhanced detail. With images being captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries in the area being observed.


Fig 5 - Angiography subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c) enhanced.


An important application of image multiplication (and division) is shading correction. Suppose that an imaging sensor produces images in the form

g(x, y) = f(x, y) · h(x, y)

f(x, y) - the "perfect image", h(x, y) - the shading function.

When the shading function is known:

f(x, y) = g(x, y) / h(x, y)

When h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.
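A minimal sketch of correction by division with a synthetic, made-up shading function (illustration only, not a real sensor model):

import numpy as np

rng = np.random.default_rng(1)
f = rng.integers(50, 200, (128, 128)).astype(float)  # stand-in "perfect image"
yy, xx = np.mgrid[0:128, 0:128]
h = 0.4 + 0.6 * xx / 127.0                # synthetic shading: darker on the left

g = f * h                                 # what the sensor would produce
f_hat = g / h                             # correction by division (h known/estimated)
print(np.allclose(f_hat, f))              # True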


Fig 6 - Shading correction: (a) shaded image of a tungsten filament, magnified 130 times; (b) shading pattern; (c) corrected image.


Another use of image multiplication is in masking, also called region of interest (ROI), operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of the ROI can be arbitrary, although it is usually rectangular.

Fig 7 - (a) digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b).


In practice, most images are displayed using 8 bits: the image values are in the range [0, 255]. For TIFF and JPEG images, conversion to this range is automatic; the conversion depends on the system used. The difference of two images can produce an image with values in the range [-255, 255]; the addition of two images, values in the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure: compute

f_m = f - min(f)

f_s = K · f_m / max(f_m)

which yields 0 ≤ f_s ≤ K (K = 255 for 8-bit displays).
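A sketch of this scaling procedure applied to an image difference (random stand-in images; K = 255 assumed for an 8-bit display):

import numpy as np

def scale_to_range(f, K=255):
    # f_m = f - min(f); f_s = K * f_m / max(f_m), so that 0 <= f_s <= K
    fm = f - f.min()
    return (K * fm / fm.max()).astype(np.uint8)

a = np.random.randint(0, 256, (64, 64)).astype(int)
b = np.random.randint(0, 256, (64, 64)).astype(int)
diff = a - b                              # values in [-255, 255]
scaled = scale_to_range(diff)             # displayable 8-bit image
print(scaled.min(), scaled.max())         # 0 255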


Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity of the individual pixels: s = T(z)

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image


Neighborhood operations

Let S_xy denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y) based on the values of the intensities of the points in S_xy. For example, if S_xy is a rectangular neighborhood of size m × n centered at (x, y), we can assign the new value of intensity by computing the average value of the pixels in S_xy:

g(x, y) = (1/(m·n)) Σ_{(r,c) ∈ S_xy} f(r, c)

The net effect is to perform local blurring in the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
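A direct, unoptimized sketch of this local averaging; border handling by edge replication is an added assumption, since the slide does not specify it:

import numpy as np

def local_average(f, m, n):
    # g(x, y) = (1/(m*n)) * sum of f(r, c) over the m x n neighborhood S_xy;
    # borders are handled by edge replication (an assumption of this sketch)
    pm, pn = m // 2, n // 2
    fp = np.pad(f, ((pm, pm), (pn, pn)), mode="edge")
    g = np.zeros(f.shape, dtype=float)
    for i in range(m):
        for j in range(n):
            g += fp[i:i + f.shape[0], j:j + f.shape[1]]
    return g / (m * n)

f = np.random.randint(0, 256, (100, 100)).astype(float)
g = local_average(f, 5, 5)                # 5x5 local blur: small details vanish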


Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules)

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation: (x, y) = T[(v, w)]

(v, w) - pixel coordinates in the original image

(x, y) - pixel coordinates in the transformed image


T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform:

[x y 1] = [v w 1] · T,   T = [ t11  t12  0
                               t21  t22  0
                               t31  t32  1 ]        (AT)

that is, x = t11·v + t21·w + t31 and y = t12·v + t22·w + t32.

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1


Affine transformations


The preceding transformations relocate pixels on an image to new locations. To complete the process, we have to assign intensity values to those locations. This task is done by using intensity interpolation (nearest neighbor, bilinear, or bicubic interpolation).

In practice, we can use equation (AT) in two basic ways:

forward mapping: scan the pixels of the input image (v, w) and compute the new spatial location (x, y) of the corresponding pixel in the new image using (AT) directly.

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to the same location in the output image

- some output locations have no correspondent in the original image (no intensity assignment)


inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image, (v, w) = T⁻¹[(x, y)]. Then interpolate among the nearest input pixels to determine the intensity of the output pixel value.

Inverse mappings are more efficient to implement than forward mappings and are used in numerous commercial implementations of spatial transformations (MATLAB, for example).
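A sketch of inverse mapping with nearest neighbor intensity assignment, following the [v w 1] = [x y 1]·T⁻¹ convention from (AT); the rotation-plus-translation matrix is a made-up example:

import numpy as np

def warp_affine_nn(img, T):
    # inverse mapping: for every output pixel (x, y) compute
    # [v w 1] = [x y 1] @ inv(T) and copy the nearest input pixel
    Tinv = np.linalg.inv(T)
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xy1 = np.stack([ys.ravel(), xs.ravel(), np.ones(h * w)], axis=1)
    vw1 = xy1 @ Tinv
    v = np.rint(vw1[:, 0]).astype(int)
    u = np.rint(vw1[:, 1]).astype(int)
    ok = (v >= 0) & (v < h) & (u >= 0) & (u < w)
    out = np.zeros(h * w, dtype=img.dtype)
    out[ok] = img[v[ok], u[ok]]           # locations mapping outside stay 0
    return out.reshape(h, w)

theta = np.deg2rad(30)                    # made-up example: rotation + translation
T = np.array([[ np.cos(theta), np.sin(theta), 0.0],
              [-np.sin(theta), np.cos(theta), 0.0],
              [ 20.0,          10.0,          1.0]])
img = np.random.randint(0, 256, (64, 64))
out = warp_affine_nn(img, T)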


Image registration - align two or more images of the same scene. In image registration, we have available the input and output images, but the specific transformation that produced the output image from the input is generally unknown. The problem is to estimate the transformation function and then use it to register the two images.

- it may be of interest to align (register) two or more images taken at approximately the same time but using different imaging systems (e.g., an MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments of time (satellite images)

Solving the problem using tie points (also called control points): corresponding points whose locations are known precisely in the input and reference images.


How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors; these objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points, both on the input image and on the reference image. A simple model based on a bilinear approximation is given by

x = c1·v + c2·w + c3·v·w + c4
y = c5·v + c6·w + c7·v·w + c8

where (v, w) and (x, y) are the coordinates of the tie points (we get an 8×8 linear system for the c_i).
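A sketch of fitting this model. Note that the 8×8 system decouples into two 4×4 systems, one for c1..c4 and one for c5..c8; the tie-point coordinates below are invented for illustration:

import numpy as np

# 4 tie points in the input image (v, w) and the reference image (x, y);
# the coordinates below are invented for illustration
vw = np.array([[10., 10.], [10., 200.], [200., 10.], [200., 200.]])
xy = np.array([[12., 15.], [8., 210.], [205., 12.], [198., 195.]])

# model: x = c1*v + c2*w + c3*v*w + c4, y = c5*v + c6*w + c7*v*w + c8
A = np.column_stack([vw[:, 0], vw[:, 1], vw[:, 0] * vw[:, 1], np.ones(4)])
c14 = np.linalg.solve(A, xy[:, 0])        # c1..c4
c58 = np.linalg.solve(A, xy[:, 1])        # c5..c8
print(np.allclose(A @ c14, xy[:, 0]))     # True: model reproduces the tie points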


When 4 tie points are insufficient to obtain satisfactory registration, an approach used frequently is to select a larger number of tie points and, using this new set of tie points, subdivide the image into rectangular regions marked by groups of 4 tie points. On the subregions marked by 4 tie points we apply the transformation model described above. The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometrical distortion.


(a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c).


Probabilistic Methods

z_i = the values of all possible intensities in an M×N digital image, i = 0, 1, ..., L-1

p(z_k) = the probability that the intensity level z_k occurs in the given image:

p(z_k) = n_k / (M·N)

n_k = the number of times that intensity z_k occurs in the image (M·N is the total number of pixels in the image)

Σ_{k=0..L-1} p(z_k) = 1

The mean (average) intensity of an image is given by

m = Σ_{k=0..L-1} z_k · p(z_k)


The variance of the intensities is

σ² = Σ_{k=0..L-1} (z_k - m)² · p(z_k)

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

μ_n(z) = Σ_{k=0..L-1} (z_k - m)^n · p(z_k)

(μ_0(z) = 1, μ_1(z) = 0, μ_2(z) = σ²)

μ_3(z) > 0: the intensities are biased to values higher than the mean

μ_3(z) < 0: the intensities are biased to values lower than the mean


μ_3(z) = 0: the intensities are distributed approximately equally on both sides of the mean

Fig 1. (a) Low contrast; (b) medium contrast; (c) high contrast.

Figure 1(a) - standard deviation 14.3 (variance = 204.5)

Figure 1(b) - standard deviation 31.6 (variance = 998.6)

Figure 1(c) - standard deviation 49.2 (variance = 2420.6)
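These statistics follow directly from the normalized histogram; a short numpy sketch (random stand-in image assumed):

import numpy as np

img = np.random.randint(0, 256, (256, 256))   # stand-in 8-bit image
L = 256
n = np.bincount(img.ravel(), minlength=L)     # n_k: histogram counts
p = n / img.size                              # p(z_k) = n_k / (M*N), sums to 1

z = np.arange(L)
m = np.sum(z * p)                             # mean intensity
var = np.sum((z - m) ** 2 * p)                # variance mu_2 (contrast measure)
mu3 = np.sum((z - m) ** 3 * p)                # third moment: bias around the mean
print(m, np.sqrt(var), mu3)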


Intensity Transformations and Spatial Filtering

g(x, y) = T[f(x, y)]

f(x, y) - input image; g(x, y) - output image; T - an operator on f, defined over a neighborhood of (x, y)

- the neighborhood of the point (x, y), S_xy, usually is rectangular, centered on (x, y), and much smaller in size than the image


- spatial filtering: the operator T (the neighborhood and the operation applied on it) is called a spatial filter (spatial mask, kernel, template, or window)

- when S_xy has size 1×1 (a single pixel), T becomes an intensity (gray-level or mapping) transformation function

s = T(r)

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2, left: T produces an output image of higher contrast than the original, by darkening the intensity levels below k and brightening the levels above k - this technique is called contrast stretching.

Fig 2. Intensity transformation functions. Left: contrast stretching; right: thresholding function.


Figure 2, right: T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L-1] is obtained using the function s = T(r) = L - 1 - r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an image


Original Negative image


Log Transformations: s = T(r) = c · log(1 + r), c - constant, r ≥ 0

Some basic intensity transformation functions


This transformation maps a narrow range of low intensity values in the input into a wider range of output values. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values. The opposite is true for the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a) - intensity values in the range 0 to 1.5 × 10⁶

Figure 4(b) - log transformation of Figure 4(a) with c = 1 - range 0 to 6.2

Fig 4. (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.


Power-Law (Gamma) Transformations

s = T(r) = c · r^γ, with c and γ positive constants (sometimes written s = c · (r + ε)^γ to account for an offset when the input is 0)

Plots of the gamma transformation for different values of γ (c = 1)


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. The curves with γ > 1 have the opposite effect of those generated with γ < 1.

c = γ = 1: identity transformation

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
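A sketch of the three basic point transformations discussed so far (negative, log, gamma), applied to the full 8-bit intensity range; the scaling constants are illustrative choices, not prescribed by the slides:

import numpy as np

L = 256
r = np.arange(L, dtype=float)             # all possible 8-bit intensities

neg = (L - 1) - r                         # image negative: s = L - 1 - r

c = (L - 1) / np.log(L)                   # scale log output back to [0, 255]
s_log = c * np.log(1 + r)                 # log transformation: s = c*log(1 + r)

def gamma_tf(r, gamma):
    # s = c * r^gamma with c = 1, applied to intensities normalized to [0, 1]
    return (L - 1) * (r / (L - 1)) ** gamma

dark_expanded = gamma_tf(r, 0.4)          # gamma < 1: expands dark values
dark_compressed = gamma_tf(r, 3.0)        # gamma > 1: compresses dark values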


(a) aerial image; (b)-(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0 and 5.0, respectively.


Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image, so that it spans the full intensity range of the recording tool or display device

Fig 5


T(r) =
  (s1/r1) · r,                                    r ∈ [0, r1]
  s1 + (s2 - s1)/(r2 - r1) · (r - r1),            r ∈ [r1, r2]
  s2 + (L - 1 - s2)/(L - 1 - r2) · (r - r2),      r ∈ [r2, L-1]


r1 = s1 and r2 = s2: identity transformation (no change)

r1 = r2, s1 = 0, s2 = L - 1: thresholding function

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c): contrast stretching, obtained by setting the parameters (r1, s1) = (r_min, 0) and (r2, s2) = (r_max, L-1), where r_min and r_max denote the minimum and maximum gray levels in the image, respectively. The transformation function stretched the levels linearly from their original range to the full range [0, L-1].

Figure 5(d): the thresholding function was used, with (r1, s1) = (m, 0) and (r2, s2) = (m, L-1), where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.
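A sketch of the piecewise-linear function above (it assumes 0 < r1 ≤ r2 < L-1 so all three slopes are defined; the low-contrast stand-in image is made up):

import numpy as np

def contrast_stretch(img, r1, s1, r2, s2, L=256):
    # piecewise-linear T(r) through (0, 0), (r1, s1), (r2, s2), (L-1, L-1)
    r = img.astype(float)
    out = np.where(r <= r1, s1 / r1 * r,
          np.where(r <= r2, s1 + (s2 - s1) / (r2 - r1) * (r - r1),
                   s2 + (L - 1 - s2) / (L - 1 - r2) * (r - r2)))
    return out.astype(np.uint8)

img = np.random.randint(80, 160, (64, 64))                   # low-contrast stand-in
full = contrast_stretch(img, img.min(), 0, img.max(), 255)   # stretch to [0, 255]
binary = (img >= img.mean()).astype(np.uint8) * 255          # limiting case: thresholding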


Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities (Figure 3.11(a))

2. brighten (or darken) the desired range of intensities, but leave unchanged all other intensities in the image (Figure 3.11(b))


Figure 6 (left) - aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the scale of intensities. This type of enhancement produces a binary image,

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities


which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example).

In Figure 6 (right), the second technique was used: a band of intensities in the mid-gray range around the mean intensity was set to black; the other intensities remain unchanged.

Fig 6 - Aortic angiogram and intensity sliced versions


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2.

This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 - the lowest order bit, plane 8 - the highest order bit).


The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1.

The bit-slicing technique is useful for analyzing the relative importance of each bit in the image - it helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
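A sketch of bit-plane extraction with shifts and masks; the equivalence between plane 8 and the 0-127 / 128-255 thresholding described above is checked in code:

import numpy as np

img = np.random.randint(0, 256, (64, 64)).astype(np.uint8)  # stand-in 8-bit image

planes = [(img >> k) & 1 for k in range(8)]   # index 0 = lowest-order bit plane

# plane 8 (index 7) equals thresholding: [0, 127] -> 0, [128, 255] -> 1
print(np.array_equal(planes[7], (img >= 128).astype(np.uint8)))  # True

approx = (planes[7] << 7) | (planes[6] << 6)  # coarse image from the 2 top planes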

  • DIP 1 2017
  • DIP 02 (2017)
Page 8: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image ProcessingDigital Image Processing

Week 1Week 1

What is Digital Image Processing

f(xy) = intensity gray level of the image at spatial point (xy)

x y f(xy) ndash finite discrete quantities -gt digital image

Digital Image Processing = processing digital images by means of a digital computer

A digital image is composed of a finite number of elements (location value of intensity)

These elements are called picture elements image elements pels pixels

( )i j ijx y f

3 f D

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image processing is not limited to the visual band of the electromagnetic (EM) spectrum

Image processing gamma to radio waves ultrasound electron microscopy computer-generated images

image processing image analysis computer vision

Image processing = discipline in which both the input and the output of a process are images

Computer Vision = use computer to emulate human vision (AI) ndashlearning making inferences and take actions based on visual inputs

Image analysis (image understanding) = segmentation partitioning images into regions or objects

(link between image processing and image analysis)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Distinction between image processing image analysis computer vision

low-level mid-level high-level processes

Low-level processes image preprocessing to reduce noise contrast enhancement image sharpening both input and output are images

Mid-level processes segmentation partitioning images into regions or objects description of the objects for computer processing classificationrecognition of individual objects inputs are generally images outputs are attributes extracted from the input image (eg edges contours identity of individual objects)

High-level processes ldquomaking senserdquo of a set of recognized objects performing the cognitive functions associated with vision

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image Processing (Gonzalez + Woods) =

processes whose inputs and outputs are images +

processes that extract attributes from images recognition of individual objects

(low- and mid-level processes)

Example

automated analysis of text = acquiring an image containing text preprocessing the image (enhancement sharpening) extracting (segmenting) the individual characaters describing the characters in a form suitable for computer processing recognition of individual characters

Digital Image ProcessingDigital Image Processing

Week 1Week 1

The Origins of DIP

Newspaper industry pictures were sent by submarine cable between London and New York

Before Bartlane cable picture transmission system (early 1920s) ndash 1 week

With Bartlane system less than 3 hours

Specialized printing equipment coded pictures for cable transmission and reconstructed them at the receiving end (1920s -5 distict levels of gray 1929 ndash 15 levels)

This example is not DIP the computer is not involved

DIP is linked and devolps in the same rhythm as digital computers (data storage display and transmission)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

A digital pictureproduced in 1921from a coded tapeby a telegraph printer withspecial type faces(McFarlane)

A digital picturemade in 1922from a tapepunched after the signals hadcrossed the Atlantic twice(McFarlane)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

1964 Jet Propulsion Laboratory (Pasadena California) processed pictures of the moon transmitted by Ranger 7 (corrected image distortions)

The first picture of the moon by a US spacecraft Ranger 7 took this image July 31 1964 about 17 minutes before impacting the lunar surface(Courtesy of NASA)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

1960-1970 ndash image processing techniques were used in medical image remote Earth resources observations astronomy

1970s ndash invention of CAT (computerized axial tomography)

CAT is a process in which a ring of detectors encircles an object (patient) and a X-ray source concentric with the detector ring rotates about the object The X-ray passes through the patient and the resulted waves are collected at the opposite end by the detectors As the source rotates the procedure is repeated Tomography consists of algorithms that use the sense data to construct an image that represent a ldquoslicerdquo through the object Motion of the object in a direction perpendicular to the ring of detectors produces a set of ldquoslicesrdquo which can be assembled in a 3D information of the inside of the object

Digital Image ProcessingDigital Image Processing

Week 1Week 1

loz geographers use DIP to study pollution patterns from aerial and satellite imagery

loz archeology ndash DIP allowed restoring blurred pictures that recorded rare artifacts lost or damaged after being photographed

loz physics ndash enhance images of experiments (high-energy plasmas electron microscopy)

loz astronomy biology nuclear medicine law enforcement industry

DIP ndash used in solving problems dealing with machine perception ndashextracting from an image information suitable for computer processing (statistical moments Fourier transform coefficients hellip)

loz automatic character recognition industrial machine vision for product assembly and inspection military recognizance automatic processing of fingerprints machine processing of aerial and satellite imagery for weather prediction Internet

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Examples of Fields that Use DIP

Images can be classified according to their sources (visual X-ray hellip)

Energy sources for images electromagnetic energy spectrum acoustic ultrasonic electronic computer- generated

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Electromagnetic waves can be thought as propagating sinusoidal

waves of different wavelength or as a stream of massless particles

each moving in a wavelike pattern with the speed of light Each

massless particle contains a certain amount (bundle) of energy Each

bundle of energy is called a photon If spectral bands are grouped

according to energy per photon we obtain the spectrum shown in the

image above ranging from gamma-rays (highest energy) to radio

waves (lowest energy)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Gamma-Ray Imaging

Nuclear medicine astronomical observations

Nuclear medicine

the approach is to inject a patient with a radioactive isotope that emits gamma rays as it decays

Images are produced from the emissions collected by gamma-ray detectors

Images of this sort are used to locate sites of bone pathology (infections tumors)

PET (positron emision tomography) ndash the patient is given a radioactive isotope that emits positrons as it decays

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Examples of gamma-ray imaging

Bone scan PET image

Digital Image ProcessingDigital Image Processing

Week 1Week 1

X-ray imaging

Medical diagnosticindustry astronomy

A X-ray tube is a vacuum tube with a cathode and an anode The cathode is heated causing free electrons to be released The electrons flows at high speed to the positively charged anode When the electrons strike a nucleus energy is released in the form of a X-ray radiation The energy (penetreting power) of the X-rays is controlled by a voltage applied across the anode and by a curent applied to the filament in the cathode

The intensity of the X-rays is modified by absorbtion as they pass through the patient and the resulting energy falling develops it much in the same way that light develops photographic film

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Angiography = contrast-enhancement radiography

Angiograms = images of blood vessels

A catheter is inserted into an artery or vein in the groin The catheter is threaded into the blood vessel and guided to the area to be studied When it reaches the area to be studied a X-ray contrast medium is injected through the catheter This enhances contrast of the blood vessels and enables radiologist to see anyirregularities or blockages

X-rays are used in CAT (computerized axial tomography)

X-rays used in industrial processes (examine circuit boards for flows in manifacturing)

Industrial CAT scans are useful when the parts can be penetreted by X-rays

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Examples of X-ray imaging

Chest X-rayAortic angiogram

Head CT Cygnus LoopCircuit boards

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Ultraviolet Band

Litography industrial inspection microscopy biological imaging astronomical observations

Ultraviolet light is used in fluorescence microscopy

Ultraviolet light is not visible to human eye but when a photon of ultraviolet radiation collides with an electron in an atom of a fluorescent material it elevates the electron to a higher energy level After that the electron relaxes to a lower level and emits light in the form of a lower-energy photon in the visible (red) light regionFluorescence = emission of light by a substance that has absorbed light or

other electromagentic radiation of a different wavelengthFlourescence microscope = uses an excitation light to irradiate a prepared specimen

and then it separates the much weaker radiating fluorescent light from the brighter excitation light

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Visible and Infrared Bands

Light microscopy astronomy remote sensing industry law enforcement

LANDSAT satellite ndash obtained and transmitted images of the Earth from space for purpose of monitoring environmental conditions on the planet

Weather observations and prediction produce major applications of multispectral image from satellites

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Table 1 ndash Thematic bands in NASArsquos LANDSAT satellite

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Satellite images of Washington DC area in spectral bands of the Table 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Examples of light microscopy

Taxol (anticancer agent)magnified 250X

Cholesterol(40X)

Microprocessor(60X)

Nickel oxidethin film(600X)

Surface of audio CD(1750X)

Organicsuperconductor(450X)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Automated visual inspection of manufactured goods

a bc de f

a ndash a circuit board controllerb ndash packaged pillesc ndash bottlesd ndash air bubbles in clear-plastic producte ndash cerealf ndash image of intraocular implant

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Microwave Band

The dominant aplication of imaging in the microwave band ndash radar

bull radar has the ability to collect data over virtually any region at any time regardless of weather or ambient light conditions

bull some radar waves can penetrate clouds under certain conditions canpenetrate vegetation ice dry sand

bull sometimes radar is the only way to explore inaccessible regions of theEarthrsquos surface

An imaging radar works like a flash camera it provides its own illumination(microwave pulses) to light an area on the ground and take a snapshot image Instead of a camera lens a radar uses an antenna and a digital device torecord the images In a radar image one can see only the microwave energythat was reflected back toward the radar antenna

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Spaceborne radar image of mountains in southeast Tibet

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Radio Band

medicine astronomy

MRI = Magnetic Resonance Imaging

This technique places the patient in a powerful magnet and passes short pulses of radio waves through his or her body

Each pulse causes a responding pulse of radio waves to be emited by the patinet tissues

The location from which these signals originate and their strength are determined by a computer which produces a 2D picture of a section of the patient

Digital Image ProcessingDigital Image Processing

Week 1Week 1

MRI images of a human knee (left) and spine (right)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Images of the Crab Pulsar covering the electromagnetic spectrum

Gamma X-ray Optical Infrared Radio

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Other Imaging Modalities

acoustic imaging electron microscopy synthetic (computer-generated) imaging

Imaging using sound geological explorations industry medicine

Mineral and oil exploration

For image acquisition over land one of the main approaches is to use a large truck an a large flat steel plate The plate is pressed on the ground by the truck and the truck is vibratedthrough a frequency spectrum up to 100 Hz The strength and the speed of the returning sound waves are determined by the composition of the Earth below the surface These are analysed by a computer and images are generated from the resulting analysis

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Biometry - iris

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Biometry - fingerprint

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Face detection and recognition

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Gender identification

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image morphing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Fundamental Steps in DIP

methods whose input and output are images

methods whose inputs are images but whose outputs are attributes extracted from those images

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Outputs are images

bull image acquisition

bull image filtering and enhancement

bull image restoration

bull color image processing

bull wavelets and multiresolution processing

bull compression

bull morphological processing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Outputs are attributes

bull morphological processing

bull segmentation

bull representation and description

bull object recognition

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image acquisition - may involve preprocessing such as scaling

Image enhancement

bull manipulating an image so that the result is more suitable than the original for a specific operation

bull enhancement is problem oriented

bull there is no general sbquotheoryrsquo of image enhancement

bull enhancement use subjective methods for image emprovement

bull enhancement is based on human subjective preferences regarding what is a bdquogoodrdquo enhancement result

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image restoration

bull improving the appearance of an image

bull restoration is objective - the techniques for restoration are based on mathematical or probabilistic models of image degradation

Color image processing

bull fundamental concept in color models

bull basic color processing in a digital domain

Wavelets and multiresolution processing

representing images in various degree of resolution

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Compression

reducing the storage required to save an image or the bandwidth required to transmit it

Morphological processing

bull tools for extracting image components that are useful in the representation and description of shape

bull a transition from processes that output images to processes that outputimage attributes

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Segmentation

bull partitioning an image into its constituents parts or objects

bull autonomous segmentation is one of the most difficult tasks of DIP

bull the more accurate the segmentation the more likley recognition is to succeed

Representation and description (almost always follows segmentation)

bull segmentation produces either the boundary of a region or all the poits in the region itself

bull converting the data produced by segmentation to a form suitable for computer processing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

bull boundary representation the focus is on external shape characteristics such as corners or inflections

bull complete region the focus is on internal properties such as texture or skeletal shape

bull description is also called feature extraction ndash extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another

Object recognition

the process of assigning a label (eg bdquovehiclerdquo) to an object based on itsdescriptors

Knowledge database

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Simplified diagramof a cross sectionof the human eye

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Three membranes enclose the eye the cornea and sclera outer cover the choroid the retina

The cornea is a tough transparent tissue that covers the anterior surface of the eye

Continuous with the cornea the sclera is an opaque membrane that enclosesthe remainder of the optic globe

The choroid lies directly below the sclera This membrane contains a network of blood vessels (major nutrition of the eye) The choroid is pigmented and help reduce the amount of light entering the eyeThe choroid is divided (at its anterior extreme) into the ciliary body and the irisThe iris contracts and expands to control the amount of light

Digital Image ProcessingDigital Image Processing

Week 1Week 1

The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to ciliary body (60-70 water 6 fat protein) The lens is colored in slightly yellow The lens absorbs approximatively 8 of the visiblelight (infrared and ultraviolet light are absorbed by proteins in lens)

The innermost membrane is the retina When the eye is proper focusedlight from an object outside the eye is imaged on the retinaVision is possible because of the distribution of discrete light receptors on the surface of the retina cones and rods (6-7 milion cones 75-150 milion rods)

Cones located in the central part of the retina (fovea) they are sensitive to colors vision of detail each cone is link to its own nervecone vision = photopic or bright-light vision

Fovea = the place where the image of the object of interest falls on

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Rods distributed over al the retina surface several rods are contected toa single nerve not specialized in detail vision serve to give a general overall picture of the filed of viewnot involved in color visionsensitive to low level of illumination

Blind spot: region without receptors


Image formation in the eye

Ordinary photographic camera: the lens has a fixed focal length; focusing at various distances is done by modifying the distance between the lens and the image plane (where the film or imaging chip is located).

Human eye: the distance between the lens and the retina (the imaging region) is fixed; the focal length needed to achieve proper focus is obtained by varying the shape of the lens (the fibers in the ciliary body accomplish this, flattening or thickening the lens for distant or near objects, respectively).

distance between lens and retina along the visual axis = 17 mm

range of focal length = 14 mm to 17 mm
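A quick worked illustration of this geometry (simple similar-triangles optics; the 15 m object at 100 m is an illustrative choice of numbers): with the retina 17 mm behind the lens, the height h of the retinal image of a 15 m high object seen from 100 m satisfies

$\dfrac{15}{100} = \dfrac{h}{17} \quad\Rightarrow\quad h \approx 2.55 \text{ mm}$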


Illustration of Mach band effect. Perceived intensity is not a simple function of actual intensity.


All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter


Optical illusions


ALBASTRU (BLUE), VERDE (GREEN), GALBEN (YELLOW), ROSU (RED), PORTOCALIU (ORANGE), GALBEN (YELLOW), VISINIU (BURGUNDY), ALB (WHITE)


Light
• achromatic or monochromatic light – light that is void of color; the attribute of such light is its intensity, or amount; the term gray level is used to describe monochromatic intensity, because it ranges from black, to grays, and finally to white
• chromatic light spans the electromagnetic energy spectrum from approximately 0.43 to 0.79 μm

Quantities that describe the quality of a chromatic light source:

radiance – the total amount of energy that flows from the light source; usually measured in watts (W)

luminance – measured in lumens (lm); gives a measure of the amount of energy an observer perceives from a light source


For example, light emitted from a source operating in the far infrared region of the spectrum could have significant energy (radiance), but an observer would hardly perceive it; its luminance would be almost zero.

brightness – a subjective descriptor of light perception that is practically impossible to measure. It embodies the achromatic notion of intensity and is one of the key factors in describing color sensation.


the physical meaning is determined by the source of the image

$0 < f(x, y) < \infty$, for $(x, y) \in D$

Image generated from a physical process: f(x, y) – proportional to the energy radiated by the physical source

f(x, y) – characterized by two components:

i(x, y) = illumination component: the amount of source illumination incident on the scene being viewed

r(x, y) = reflectance component: the amount of illumination reflected by the objects in the scene

$f(x, y) = i(x, y)\, r(x, y)$, with $0 < i(x, y) < \infty$ and $0 < r(x, y) < 1$


r(x, y) = 0 – total absorption; r(x, y) = 1 – total reflectance

i(x, y) – determined by the illumination source

r(x, y) – determined by the characteristics of the imaged objects

In practice:

$L_{\min} \le l = f(x_0, y_0) \le L_{\max}$, where $L_{\min} = i_{\min} r_{\min}$ and $L_{\max} = i_{\max} r_{\max}$

Indoor values without additional illumination: $L_{\min} \approx 10$, $L_{\max} \approx 1000$

The interval $[L_{\min}, L_{\max}]$ is called the gray (or intensity) scale. Common practice: shift the interval to $[0, L-1]$, where $l = 0$ is black and $l = L-1$ is white.


Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed scene

Converting a continuous image f to digital form:

- digitizing the coordinate values (x, y) is called sampling

- digitizing the amplitude values f(x, y) is called quantization


Continuous image projected onto a sensor array; result of image sampling and quantization


Representing Digital Images

f(x, y), x = 0, 1, …, M-1, y = 0, 1, …, N-1 – spatial variables or spatial coordinates

The digital image can be written as an M × N matrix:

$$f(x, y) = \begin{bmatrix} f(0,0) & f(0,1) & \cdots & f(0,N-1) \\ f(1,0) & f(1,1) & \cdots & f(1,N-1) \\ \vdots & \vdots & & \vdots \\ f(M-1,0) & f(M-1,1) & \cdots & f(M-1,N-1) \end{bmatrix}$$

Each element of this matrix is called an image element or pixel. In matrix notation:

$$A = [a_{i,j}]_{M \times N}, \qquad a_{i,j} = f(x = i, y = j) = f(i, j)$$

f(0,0) – the upper left corner of the image


M, N – positive integers; the number of intensity levels is typically $L = 2^k$; $a_{i,j} \in [0, L-1]$

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system. Upper limit – determined by saturation; lower limit – noise.


Number of bits required to store a digitized image

$b = M \times N \times k$; for $M = N$: $b = N^2 k$

When an image can have $2^k$ intensity levels, the image is referred to as a k-bit image (256 discrete intensity values – an 8-bit image).
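A minimal arithmetic check of this formula (plain Python; the 1024 × 1024 size is just an example):

def image_bits(M: int, N: int, k: int) -> int:
    # b = M * N * k bits for an M x N image with 2**k intensity levels
    return M * N * k

b = image_bits(1024, 1024, 8)
print(b)         # 8388608 bits
print(b // 8)    # 1048576 bytes = 1 MB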


Spatial and Intensity Resolution

Spatial resolution – the smallest discernible detail in an image

Measures: line pairs per unit distance; dots (pixels) per unit distance

Image resolution = the largest number of discernible line pairs per unit distance

(e.g. 100 line pairs per mm)

Dots per unit distance are commonly used in printing and publishing

In the US the measure is expressed in dots per inch (dpi)

(newspapers are printed at 75 dpi, glossy brochures at 175 dpi)

Intensity resolution – the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations

$L = 2^k$; most common: k = 8

Intensity resolution in practice is given by k (number of bits used to quantize intensity)


Fig 1. Reducing spatial resolution: 1250 dpi (upper left), 300 dpi (upper right), 150 dpi (lower left), 72 dpi (lower right)


Reducing the number of gray levels 256 128 64 32


Reducing the number of gray levels 16 8 4 2


Image Interpolation – used in zooming, shrinking, rotating, and geometric corrections

Shrinking, zooming – image resizing – image resampling methods

Interpolation is the process of using known data to estimate values at unknown locations

Suppose we have an image of size 500 × 500 pixels that has to be enlarged 1.5 times, to 750 × 750 pixels. One way to do this is to create an imaginary 750 × 750 grid with the same spacing as the original and then shrink it so that it fits exactly over the original image. The pixel spacing in the 750 × 750 grid will be less than in the original image.

Problem: assignment of intensity levels in the new 750 × 750 grid.

Nearest neighbor interpolation: assign to every point in the new grid (750 × 750) the intensity of the closest pixel (nearest neighbor) from the old/original grid (500 × 500).

This technique has the tendency to produce undesirable effects, like severe distortion of straight edges.


Bilinear interpolation – assign to the new (x, y) location the intensity

$v(x, y) = a x + b y + c x y + d$

where the four coefficients are determined from the 4 equations in 4 unknowns that can be written using the 4 nearest neighbors of point (x, y).

Bilinear interpolation gives much better results than nearest neighbor interpolation, with a modest increase in computational effort.
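A minimal NumPy sketch of both resampling schemes (illustrative, not the slides' code; assumes a 2-D grayscale array and output sizes larger than 1):

import numpy as np

def resize_nearest(img, out_h, out_w):
    # each output pixel copies the closest pixel of the input grid
    in_h, in_w = img.shape
    ys = np.minimum(np.round(np.arange(out_h) * in_h / out_h).astype(int), in_h - 1)
    xs = np.minimum(np.round(np.arange(out_w) * in_w / out_w).astype(int), in_w - 1)
    return img[ys[:, None], xs[None, :]]

def resize_bilinear(img, out_h, out_w):
    # v(x, y) = a*x + b*y + c*x*y + d, fitted on the 4 nearest input neighbors
    in_h, in_w = img.shape
    y = np.arange(out_h) * (in_h - 1) / (out_h - 1)
    x = np.arange(out_w) * (in_w - 1) / (out_w - 1)
    y0, x0 = np.floor(y).astype(int), np.floor(x).astype(int)
    y1, x1 = np.minimum(y0 + 1, in_h - 1), np.minimum(x0 + 1, in_w - 1)
    wy, wx = (y - y0)[:, None], (x - x0)[None, :]
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bottom = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bottom * wy

For the 500 × 500 → 750 × 750 example above: resize_bilinear(img, 750, 750).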

Bicubic interpolation – assign to the new (x, y) location an intensity that involves the 16 nearest neighbors of the point:

$$v(x, y) = \sum_{i=0}^{3} \sum_{j=0}^{3} c_{ij}\, x^i y^j$$

The coefficients $c_{ij}$ are obtained by solving a 16 × 16 linear system written using the intensity levels of the 16 nearest neighbors of (x, y).


Generally, bicubic interpolation does a better job of preserving fine detail than the bilinear technique. Bicubic interpolation is the standard used in commercial image editing programs, such as Adobe Photoshop and Corel Photopaint.

Figure 2(a) is the same as Fig 1(d), which was obtained by reducing the resolution of the 1250 dpi image in Fig 1(a) to 72 dpi (the size shrank from 3692 × 2812 to 213 × 162) and then zooming the reduced image back to its original size. To generate Fig 1(d), nearest neighbor interpolation was used (both for shrinking and zooming).

Figures 2(b) and (c) were generated using the same steps, but using bilinear and bicubic interpolation, respectively. Figures 2(d)-(f) were obtained by reducing the resolution from 1250 dpi to 150 dpi (instead of 72 dpi).


Fig 2 – Interpolation examples for zooming and shrinking (nearest neighbor, bilinear, bicubic)


Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors:

$(x+1, y),\ (x-1, y),\ (x, y+1),\ (x, y-1)$

This set of pixels, called the 4-neighbors of p, is denoted by $N_4(p)$.

The 4 diagonal neighbors of p have coordinates

$(x+1, y+1),\ (x+1, y-1),\ (x-1, y+1),\ (x-1, y-1)$

and are denoted $N_D(p)$.

The horizontal, vertical, and diagonal neighbors together are called the 8-neighbors of p, denoted $N_8(p)$.

If (x, y) is on the border of the image, some of the neighbor locations in $N_D(p)$ and $N_8(p)$ fall outside the image.
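A small plain-Python sketch of these neighborhoods (illustrative; border clipping is left to the caller):

def n4(x, y):
    # horizontal and vertical neighbors
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def nd(x, y):
    # diagonal neighbors
    return [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]

def n8(x, y):
    # union of N4(p) and ND(p)
    return n4(x, y) + nd(x, y)

print(n4(2, 3))   # [(3, 3), (1, 3), (2, 4), (2, 2)]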


Adjacency, Connectivity, Regions, Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image, V ⊆ {0, 1} (V = {0} or V = {1})

- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, 1, …, 255}

We consider 3 types of adjacency

(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if $q \in N_4(p)$

(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if $q \in N_8(p)$

(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if

$q \in N_4(p)$, or

$q \in N_D(p)$ and the set $N_4(p) \cap N_4(q)$ has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency. It is introduced to eliminate the ambiguities that often arise when 8-adjacency is used. Consider the example:


Binary image example, with V = {1} (the same 3 × 3 image is shown three times: pixel values, the ambiguous 8-adjacency paths, and the m-adjacency path):

0 1 1
0 1 0
0 0 1

The three pixels at the top (first line) in the above example show multiple (ambiguous) 8-adjacency, as indicated by the dashed lines. This ambiguity is removed by using m-adjacency.


A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

$(x_0, y_0), (x_1, y_1), \ldots, (x_n, y_n)$

where $(x_0, y_0) = (x, y)$, $(x_n, y_n) = (s, t)$, and pixels $(x_i, y_i)$ and $(x_{i-1}, y_{i-1})$ are adjacent for $i = 1, 2, \ldots, n$.

The length of the path is n. If $(x_0, y_0) = (x_n, y_n)$, the path is closed.

Depending on the type of adjacency considered, the paths are 4-, 8-, or m-paths.

Let S denote a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting only of pixels from S.

S is a connected set if there is a path in S between any 2 pixels of S.

Let R be a subset of pixels in an image. R is a region of the image if R is a connected set.

Two regions $R_1$ and $R_2$ are said to be adjacent if $R_1 \cup R_2$ forms a connected set. Regions that are not adjacent are said to be disjoint. When referring to regions, only 4- and 8-adjacency are considered.


Suppose that an image contains K disjoint regions $R_k$, $k = 1, \ldots, K$, none of which touches the image border. Let $R_u = \bigcup_{k=1}^{K} R_k$ and let $(R_u)^c$ denote its complement.

We call all the points in $R_u$ the foreground of the image, and the points in $(R_u)^c$ the background of the image.

The boundary (border or contour) of a region R is the set of points that are adjacent to points in the complement of R, $(R)^c$. Equivalently, the border of a region is the set of pixels in the region that have at least one background neighbor. This definition is referred to as the inner border, to distinguish it from the notion of outer border, which is the corresponding border in the background.


Distance measures

For pixels p, q, and z, with coordinates (x, y), (s, t), and (v, w) respectively, D is a distance function or metric if:

(a) $D(p, q) \ge 0$, and $D(p, q) = 0$ iff $p = q$

(b) $D(p, q) = D(q, p)$

(c) $D(p, z) \le D(p, q) + D(q, z)$

The Euclidean distance between p and q is defined as

$D_e(p, q) = \left[ (x - s)^2 + (y - t)^2 \right]^{1/2}$

The pixels q for which $D_e(p, q) \le r$ are the points contained in a disk of radius r centered at (x, y).


The D4 distance (also called city-block distance) between p and q is defined as

$D_4(p, q) = |x - s| + |y - t|$

The pixels q for which $D_4(p, q) \le r$ form a diamond centered at (x, y). Example: the pixels with $D_4 \le 2$ form the diamond

        2
      2 1 2
    2 1 0 1 2
      2 1 2
        2

The pixels with $D_4 = 1$ are the 4-neighbors of (x, y).

The D8 distance (called the chessboard distance) between p and q is defined as

$D_8(p, q) = \max(|x - s|, |y - t|)$

The pixels q for which $D_8(p, q) \le r$ form a square centered at (x, y).


Example: the pixels with $D_8 \le 2$ form the square

2 2 2 2 2
2 1 1 1 2
2 1 0 1 2
2 1 1 1 2
2 2 2 2 2

The pixels with $D_8 = 1$ are the 8-neighbors of (x, y).

D4 and D8 distances are independent of any paths that might exist between p and q, because these distances involve only the coordinates of the points.
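The three metrics in plain Python (a quick illustration; points are (x, y) tuples):

def d_e(p, q):   # Euclidean distance
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def d_4(p, q):   # city-block distance
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d_8(p, q):   # chessboard distance
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

print(d_e((0, 0), (3, 4)), d_4((0, 0), (3, 4)), d_8((0, 0), (3, 4)))   # 5.0 7 4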


Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

Consider two 2 × 2 images:

$$\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \quad\text{and}\quad \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix}$$

Array product:

$$\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} = \begin{bmatrix} a_{11} b_{11} & a_{12} b_{12} \\ a_{21} b_{21} & a_{22} b_{22} \end{bmatrix}$$

Matrix product:

$$\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} = \begin{bmatrix} a_{11} b_{11} + a_{12} b_{21} & a_{11} b_{12} + a_{12} b_{22} \\ a_{21} b_{11} + a_{22} b_{21} & a_{21} b_{12} + a_{22} b_{22} \end{bmatrix}$$

We assume array operations unless stated otherwise


Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether the method is linear or nonlinear. Consider an operator H:

$H[f(x, y)] = g(x, y)$

H is said to be a linear operator if

$H[a f_1(x, y) + b f_2(x, y)] = a\, H[f_1(x, y)] + b\, H[f_2(x, y)]$

for any scalars a, b and any images $f_1$, $f_2$.

Example of a nonlinear operator: the maximum value of the pixels of image f,

$H[f] = \max_{(x, y)} f(x, y)$

Consider

$$f_1 = \begin{bmatrix} 0 & 2 \\ 2 & 3 \end{bmatrix}, \quad f_2 = \begin{bmatrix} 6 & 5 \\ 4 & 7 \end{bmatrix}, \quad a = 1, \; b = -1$$


$$\max(a f_1 + b f_2) = \max\left( \begin{bmatrix} 0 & 2 \\ 2 & 3 \end{bmatrix} - \begin{bmatrix} 6 & 5 \\ 4 & 7 \end{bmatrix} \right) = \max \begin{bmatrix} -6 & -3 \\ -2 & -4 \end{bmatrix} = -2$$

$$a \max(f_1) + b \max(f_2) = 1 \cdot \max \begin{bmatrix} 0 & 2 \\ 2 & 3 \end{bmatrix} + (-1) \cdot \max \begin{bmatrix} 6 & 5 \\ 4 & 7 \end{bmatrix} = 3 - 7 = -4$$

Since $-2 \ne -4$, the max operator is nonlinear.
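Checking the same numbers in NumPy (illustrative):

import numpy as np

f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 5], [4, 7]])
a, b = 1, -1

print((a * f1 + b * f2).max())       # -2
print(a * f1.max() + b * f2.max())   # -4, so max is not a linear operator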

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise:

$g(x, y) = f(x, y) + \eta(x, y)$

f(x, y) – the noiseless image; η(x, y) – the noise, uncorrelated and with 0 average value.

For a random variable z with mean m, $E[(z - m)^2]$ is the variance ($E(\cdot)$ is the expected value). The covariance of two random variables $z_1$ and $z_2$ is defined as $E[(z_1 - m_1)(z_2 - m_2)]$. The two random variables are uncorrelated when their covariance is 0.


Objective: reduce noise by adding (averaging) a set of noisy images $g_i(x, y)$ – a technique frequently used in image enhancement:

$$\bar{g}(x, y) = \frac{1}{K} \sum_{i=1}^{K} g_i(x, y)$$

If the noise satisfies the properties stated above, we have

$$E[\bar{g}(x, y)] = f(x, y), \qquad \sigma^2_{\bar{g}(x, y)} = \frac{1}{K}\, \sigma^2_{\eta(x, y)}$$

$E[\bar{g}(x, y)]$ is the expected value of $\bar{g}$, and $\sigma^2_{\bar{g}(x, y)}$ and $\sigma^2_{\eta(x, y)}$ are the variances of $\bar{g}$ and η, respectively. The standard deviation (square root of the variance) at any point in the average image is

$$\sigma_{\bar{g}(x, y)} = \frac{1}{\sqrt{K}}\, \sigma_{\eta(x, y)}$$


As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because $E[\bar{g}(x, y)] = f(x, y)$, this means that $\bar{g}(x, y)$ approaches f(x, y) as the number of noisy images used in the averaging process increases.
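A quick NumPy simulation of this behavior (the constant image and σ = 64 are arbitrary choices mirroring the example below):

import numpy as np

rng = np.random.default_rng(0)
f = np.full((256, 256), 100.0)                       # noiseless image f(x, y)

for K in (1, 5, 10, 20, 50, 100):
    g_bar = sum(f + rng.normal(0, 64, f.shape) for _ in range(K)) / K
    print(K, g_bar.std().round(1))                   # decreases like 64 / sqrt(K)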

An important application of image averaging is in the field of astronomy, where imaging under very low light levels frequently causes sensor noise to render single images virtually useless for analysis. Fig 3(a) shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels. Figs 3(b)-(f) show the result of averaging 5, 10, 20, 50, and 100 noisy images, respectively.


Fig 3 (panels a-f). Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (panel a, upper left); results of averaging 5, 10, 20, 50, and 100 noisy images.


A frequent application of image subtraction is in the enhancement of differences between

images

Fig 4. (a) Infrared image of Washington DC area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images.

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between images (a) and (b). Black (0) values in Figure 4(c) indicate locations where there is no difference between images (a) and (b).
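A sketch of the same experiment (NumPy; the ramp image is a stand-in for any 8-bit grayscale array):

import numpy as np

img = np.arange(256, dtype=np.uint8).reshape(16, 16)   # stand-in 8-bit image

lsb_zeroed = img & 0xFE        # set the least significant bit of every pixel to 0
diff = img - lsb_zeroed        # the difference is exactly the LSB plane

print(np.unique(diff))         # [0 1] -> visually the two images look identical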


Mask mode radiography: $g(x, y) = f(x, y) - h(x, y)$

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images called live images (denoted f(x, y)) of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium.

In g(x, y) we can find the differences between h and f, as enhanced detail.

Images being captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries in the area being observed.


Fig 5 (panels a-d). Angiography subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c), enhanced.


An important application of image multiplication (and division) is shading correction.

Suppose that an imaging sensor produces images in the form

$g(x, y) = f(x, y)\, h(x, y)$

f(x, y) – the "perfect image"; h(x, y) – the shading function

When the shading function is known:

$$f(x, y) = \frac{g(x, y)}{h(x, y)}$$

If h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.
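A minimal sketch (NumPy; the synthetic shading ramp is an assumption made for illustration):

import numpy as np

rng = np.random.default_rng(1)
f_true = rng.uniform(50.0, 200.0, (64, 64))   # hypothetical "perfect image"
yy, xx = np.mgrid[0:64, 0:64]
h = 0.5 + 0.5 * xx / 63.0                     # synthetic shading: darker on the left
g = f_true * h                                # what the sensor delivers

f_corr = g / h                                # divide out the (known) shading pattern
print(np.allclose(f_corr, f_true))            # True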


Fig 6. Shading correction: (a) shaded image of a tungsten filament, magnified 130 times; (b) shading pattern; (c) corrected image.


Another use of image multiplication is in masking, also called region of interest (ROI), operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of the ROI can be arbitrary, but usually it is rectangular.

Fig 7. (a) Digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b).


In practice most images are displayed using 8 bits: the image values are in the range [0, 255]. For TIFF or JPEG images, conversion to this range is automatic, and depends on the system used. The difference of two images can produce an image with values in the range [-255, 255]; the addition of two images, values in the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure: compute

$f_m = f - \min(f)$

$f_s = K \dfrac{f_m}{\max(f_m)}$

which gives $0 \le f_s \le K$ (K = 255 for 8-bit images).
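The same procedure as a NumPy sketch (illustrative):

import numpy as np

def rescale_intensity(f, K=255):
    # shift to a minimum of 0, then scale the maximum to K
    fm = f - f.min()
    return K * fm / fm.max()

diff = np.array([[-255.0, 0.0], [128.0, 255.0]])   # e.g. a difference image
print(rescale_intensity(diff))   # [[0. 127.5] [191.5 255.]]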


Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels: $s = T(z)$

where z is the intensity of a pixel in the original image and s is the intensity of the corresponding pixel in the processed image


Neighborhood operations

Let $S_{xy}$ denote a set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y) based on the values of the intensities of the points in $S_{xy}$. For example, if $S_{xy}$ is a rectangular neighborhood of size m × n centered at (x, y), we can assign the new value of intensity by computing the average value of the pixels in $S_{xy}$:

$$g(x, y) = \frac{1}{mn} \sum_{(r,c) \in S_{xy}} f(r, c)$$

The net effect is to perform local blurring in the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
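A small sketch of this local averaging (NumPy + SciPy's uniform_filter; the border handling is SciPy's default and an implementation detail here):

import numpy as np
from scipy.ndimage import uniform_filter

img = np.zeros((9, 9))
img[4, 4] = 81.0                        # one bright "small detail"

blurred = uniform_filter(img, size=3)   # 3 x 3 neighborhood average at every pixel
print(blurred[3:6, 3:6])                # the spike is smeared into a 3 x 3 blob of 9s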


Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules)

A geometric transformation consists of 2 basic operations

1. a spatial transformation of coordinates

2. intensity interpolation that assigns intensity values to the spatially transformed pixels

The coordinate system transformation: $(x, y) = T[(v, w)]$

(v, w) – pixel coordinates in the original image

(x, y) – pixel coordinates in the transformed image


$T[(v, w)] = (v/2, w/2)$ shrinks the original image to half its size in both spatial directions

Affine transform:

$$[x \;\; y \;\; 1] = [v \;\; w \;\; 1]\, T = [v \;\; w \;\; 1] \begin{bmatrix} t_{11} & t_{12} & 0 \\ t_{21} & t_{22} & 0 \\ t_{31} & t_{32} & 1 \end{bmatrix} \tag{AT}$$

that is, $x = t_{11} v + t_{21} w + t_{31}$ and $y = t_{12} v + t_{22} w + t_{32}$.

This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3 × 3 matrix equal to the matrix product of the scaling, rotation, and translation matrices from Table 1.


Table 1 – Affine transformations


The preceding transformations relocate pixels on an image to new locations. To complete the process, we have to assign intensity values to those locations. This task is done by using intensity interpolation (nearest neighbor, bilinear, or bicubic interpolation).

In practice we can use equation (AT) in two basic ways

forward mapping: scan the pixels of the input image (v, w) and compute the new spatial location (x, y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)


inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image:

$(v, w) = T^{-1}[(x, y)]$

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in numerous commercial implementations of spatial transformations (MATLAB, for example).
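A compact sketch of inverse mapping for a pure rotation, with nearest-neighbor intensity assignment (NumPy; rotating about the (0, 0) corner and the row/column conventions are assumptions of this sketch):

import numpy as np

def rotate_inverse_map(img, theta):
    # for each output pixel (x, y), find (v, w) = T^{-1}[(x, y)] in the input
    H, W = img.shape
    out = np.zeros_like(img)
    c, s = np.cos(theta), np.sin(theta)
    T_inv = np.array([[c, s], [-s, c]])          # inverse of a rotation by theta
    for x in range(H):
        for y in range(W):
            v, w = T_inv @ (x, y)
            v, w = int(round(v)), int(round(w))
            if 0 <= v < H and 0 <= w < W:        # skip locations with no source pixel
                out[x, y] = img[v, w]
    return out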

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration – align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more images taken at approximately the same time, but using different imaging systems (an MRI scanner and a PET scanner, for example)

- align images of a given location taken by the same instrument at different moments of time (satellite images)

One approach to solving the problem: use tie points (also called control points) – corresponding points whose locations are known precisely in the input and reference images.


How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors. These objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points.

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points, both on the input image and on the reference image. A simple model based on a bilinear approximation is given by

$x = c_1 v + c_2 w + c_3 v w + c_4$

$y = c_5 v + c_6 w + c_7 v w + c_8$

(v, w) and (x, y) are the coordinates of the tie points (we get an 8 × 8 linear system for the $c_i$).
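A sketch of fitting this model from 4 tie-point pairs (NumPy; the coordinates below are made up):

import numpy as np

vw = np.array([[0, 0], [0, 10], [10, 0], [10, 10]], dtype=float)   # input tie points
xy = np.array([[1, 2], [1, 13], [12, 2], [13, 14]], dtype=float)   # reference tie points

# one row [v, w, v*w, 1] per tie point
A = np.column_stack([vw[:, 0], vw[:, 1], vw[:, 0] * vw[:, 1], np.ones(4)])
c14 = np.linalg.solve(A, xy[:, 0])   # c1..c4 (x equation)
c58 = np.linalg.solve(A, xy[:, 1])   # c5..c8 (y equation)

v, w = 5.0, 5.0                      # map any input point to the reference frame
print(c14 @ [v, w, v * w, 1.0], c58 @ [v, w, v * w, 1.0])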


When 4 tie points are insufficient to obtain satisfactory registration, an approach used frequently is to select a larger number of tie points and, using this new set of tie points, subdivide the image into rectangular regions marked by groups of 4 tie points. On the subregions marked by 4 tie points we apply the transformation model described above.

The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometrical distortion.


(a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c)


Probabilistic Methods

$z_i$ = the values of all possible intensities in an M × N digital image, $i = 0, 1, \ldots, L-1$

$p(z_k)$ = the probability that the intensity level $z_k$ occurs in the given image:

$$p(z_k) = \frac{n_k}{MN}$$

$n_k$ = the number of times that intensity $z_k$ occurs in the image (MN is the total number of pixels in the image);

$$\sum_{k=0}^{L-1} p(z_k) = 1$$

The mean (average) intensity of an image is given by

$$m = \sum_{k=0}^{L-1} z_k\, p(z_k)$$


The variance of the intensities is

$$\sigma^2 = \sum_{k=0}^{L-1} (z_k - m)^2\, p(z_k)$$

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation ($\sigma$) is used.

The n-th moment of a random variable z about the mean is defined as

$$\mu_n(z) = \sum_{k=0}^{L-1} (z_k - m)^n\, p(z_k)$$

($\mu_0(z) = 1$, $\mu_1(z) = 0$, $\mu_2(z) = \sigma^2$)

$\mu_3(z) > 0$: the intensities are biased to values higher than the mean

$\mu_3(z) < 0$: the intensities are biased to values lower than the mean


$\mu_3(z) = 0$: the intensities are distributed approximately equally on both sides of the mean

Fig 1. (a) Low contrast; (b) medium contrast; (c) high contrast

Figure 1(a) – standard deviation 14.3 (variance = 204.5)

Figure 1(b) – standard deviation 31.6 (variance = 998.6)

Figure 1(c) – standard deviation 49.2 (variance = 2420.6)
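These quantities computed from an image histogram, as a NumPy sketch (illustrative):

import numpy as np

img = np.random.default_rng(2).integers(0, 256, (100, 100))   # stand-in 8-bit image
L = 256

nk = np.bincount(img.ravel(), minlength=L)   # n_k: histogram counts
p = nk / img.size                            # p(z_k) = n_k / (M*N); p.sum() == 1
z = np.arange(L)

m = (z * p).sum()                            # mean intensity
var = ((z - m) ** 2 * p).sum()               # variance = 2nd central moment
mu3 = ((z - m) ** 3 * p).sum()               # 3rd moment: sign shows the intensity bias
print(m, np.sqrt(var), mu3)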


Intensity Transformations and Spatial Filtering

$g(x, y) = T[f(x, y)]$

f(x, y) – input image; g(x, y) – output image; T – an operator on f, defined over a neighborhood of (x, y)

- the neighborhood of the point (x, y), $S_{xy}$, usually is rectangular, centered on (x, y), and much smaller in size than the image


- spatial filtering: the operator T (the neighborhood and the operation applied on it) is called a spatial filter (spatial mask, kernel, template, or window)

When $S_{xy}$ has size 1 × 1 (a single pixel), T becomes an intensity (gray-level or mapping) transformation function:

$s = T(r)$

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2, left: T produces an output image of higher contrast than the original by darkening the intensity levels below k and brightening the levels above k – this technique is called contrast stretching.

Fig 2. Intensity transformation functions: left – contrast stretching; right – thresholding function


Figure 2, right: T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L-1] is obtained using the function $s = T(r) = L - 1 - r$

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image


Left: original image; right: negative image


Log Transformations: $s = T(r) = c \log(1 + r)$, where c is a constant and $r \ge 0$

Some basic intensity transformation functions


This transformation maps a narrow range of low intensity values in the input into a wider range of output values. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values; the opposite is true for the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a) – intensity values in the range 0 to $1.5 \times 10^6$

Figure 4(b) – log transformation of Figure 4(a) with c = 1 – range 0 to 6.2

Fig 4. (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.


Power-Law (Gamma) Transformations

$s = T(r) = c\, r^{\gamma}$, with c and γ positive constants (sometimes written $s = c\,(r + \varepsilon)^{\gamma}$ to account for a measurable offset)

Plots of gamma transformation for different values of γ (c=1)


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher values of input values. The curves with γ > 1 have the opposite effect of those generated with values of γ < 1.

c = γ = 1: identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction
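The three point transforms seen so far, as one NumPy sketch for 8-bit images (the normalization to [0, 1] and the rescaling back are assumptions of this sketch):

import numpy as np

def negative(img):                        # s = L - 1 - r, L = 256
    return 255 - img

def log_transform(img, c=1.0):            # s = c * log(1 + r), on r scaled to [0, 1]
    s = c * np.log1p(img / 255.0)
    return np.uint8(255 * s / s.max())

def gamma_transform(img, gamma, c=1.0):   # s = c * r**gamma, on r scaled to [0, 1]
    s = np.clip(c * (img / 255.0) ** gamma, 0.0, 1.0)
    return np.uint8(255 * s)

img = np.arange(256, dtype=np.uint8).reshape(16, 16)
brightened = gamma_transform(img, 0.4)    # gamma < 1 expands dark values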


Panels a-d: (a) aerial image; (b)-(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively.


Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

Fig 5 (panels a-d)


$$T(r) = \begin{cases} \dfrac{s_1}{r_1}\, r, & r \in [0, r_1] \\[4pt] s_1 + \dfrac{s_2 - s_1}{r_2 - r_1}\,(r - r_1), & r \in (r_1, r_2] \\[4pt] s_2 + \dfrac{L - 1 - s_2}{L - 1 - r_2}\,(r - r_2), & r \in (r_2, L-1] \end{cases}$$


$r_1 = s_1$ and $r_2 = s_2$: identity transformation (no change)

$r_1 = r_2$, $s_1 = 0$, $s_2 = L - 1$: thresholding function

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c): contrast stretching, obtained by setting the parameters $(r_1, s_1) = (r_{\min}, 0)$ and $(r_2, s_2) = (r_{\max}, L-1)$, where $r_{\min}$ and $r_{\max}$ denote the minimum and maximum gray levels in the image, respectively. Thus, the transformation function stretched the levels linearly from their original range to the full range [0, L-1].

Figure 5(d): the thresholding function was used, with $(r_1, s_1) = (m, 0)$ and $(r_2, s_2) = (m, L-1)$, where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.
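A sketch of this stretch using np.interp, which implements exactly the three linear segments through (0, 0), (r1, s1), (r2, s2), (L-1, L-1) (illustrative; assumes r1 < r2):

import numpy as np

def contrast_stretch(img, r1, s1, r2, s2, L=256):
    s = np.interp(img.astype(float), [0, r1, r2, L - 1], [0, s1, s2, L - 1])
    return np.uint8(s)

img = np.random.default_rng(3).integers(90, 160, (64, 64))       # low-contrast stand-in
stretched = contrast_stretch(img, img.min(), 0, img.max(), 255)  # full range [0, 255]
binary = np.uint8(np.where(img > img.mean(), 255, 0))            # thresholding limit case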


Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities (Figure 3.11(a))

2. brighten (or darken) the desired range of intensities, but leave unchanged all other intensities in the image (Figure 3.11(b))


The two slicing transformations: one highlights intensity range [A, B] and reduces all other intensities to a lower level; the other highlights range [A, B] and preserves all other intensities.

Figure 6 (left) – aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the scale of intensities. This type of enhancement produces a binary image,


which is useful for studying the shape of the flow of the contrast substance (to detect

blockages…)

In Figure 6 (right), the second technique was used: a band of intensities in the mid-gray range around the mean intensity was set to black; the other intensities remain unchanged.

Fig 6. Aortic angiogram and intensity-sliced versions.


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2.

This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit; plane 8 – the highest-order bit).


The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0, and maps all levels between 128 and 255 to 1. The bit-plane slicing technique is useful for analyzing the relative importance of each bit in the image – it helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
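A short NumPy sketch extracting the bit planes (plane indices 0-7 here correspond to planes 1-8 in the text):

import numpy as np

img = np.arange(256, dtype=np.uint8).reshape(16, 16)      # stand-in 8-bit image

planes = [(img >> b) & 1 for b in range(8)]               # plane 0 = LSB, plane 7 = MSB
msb = planes[7]                                           # highest-order bit plane

# the MSB plane equals thresholding at 128, as described above
print(np.array_equal(msb, (img >= 128).astype(np.uint8)))   # True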



reducing the storage required to save an image or the bandwidth required to transmit it

Morphological processing

bull tools for extracting image components that are useful in the representation and description of shape

bull a transition from processes that output images to processes that outputimage attributes

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Segmentation

bull partitioning an image into its constituents parts or objects

bull autonomous segmentation is one of the most difficult tasks of DIP

bull the more accurate the segmentation the more likley recognition is to succeed

Representation and description (almost always follows segmentation)

bull segmentation produces either the boundary of a region or all the poits in the region itself

bull converting the data produced by segmentation to a form suitable for computer processing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

bull boundary representation the focus is on external shape characteristics such as corners or inflections

bull complete region the focus is on internal properties such as texture or skeletal shape

bull description is also called feature extraction ndash extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another

Object recognition

the process of assigning a label (eg bdquovehiclerdquo) to an object based on itsdescriptors

Knowledge database

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Simplified diagramof a cross sectionof the human eye

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Three membranes enclose the eye the cornea and sclera outer cover the choroid the retina

The cornea is a tough transparent tissue that covers the anterior surface of the eye

Continuous with the cornea the sclera is an opaque membrane that enclosesthe remainder of the optic globe

The choroid lies directly below the sclera This membrane contains a network of blood vessels (major nutrition of the eye) The choroid is pigmented and help reduce the amount of light entering the eyeThe choroid is divided (at its anterior extreme) into the ciliary body and the irisThe iris contracts and expands to control the amount of light

Digital Image ProcessingDigital Image Processing

Week 1Week 1

The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to ciliary body (60-70 water 6 fat protein) The lens is colored in slightly yellow The lens absorbs approximatively 8 of the visiblelight (infrared and ultraviolet light are absorbed by proteins in lens)

The innermost membrane is the retina When the eye is proper focusedlight from an object outside the eye is imaged on the retinaVision is possible because of the distribution of discrete light receptors on the surface of the retina cones and rods (6-7 milion cones 75-150 milion rods)

Cones located in the central part of the retina (fovea) they are sensitive to colors vision of detail each cone is link to its own nervecone vision = photopic or bright-light vision

Fovea = the place where the image of the object of interest falls on

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Rods distributed over al the retina surface several rods are contected toa single nerve not specialized in detail vision serve to give a general overall picture of the filed of viewnot involved in color visionsensitive to low level of illumination

Blind spot region without receptors

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image formation in the eye

Ordinary photographic camera the lens has fixed focal length focusing atvarious distances is done by modifying the distance between the lens and the image plane (were the film or imaging chip are located)

Human eye the distance between the lens and the retina (the imaging region)is fixed the focal length needed to achieve proper focus is obtained by varyingthe shape of the lens (the fibers in the ciliary body accomplish this flattening or thickening the lens for distant or near objects respectively

distance between lens and retina along visual axix = 17 mm

range of focal length = 14 mm to 17 mm

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Illustration of Mach band effectPercieved intensityis not a simple function of actual intensity

Digital Image ProcessingDigital Image Processing

Week 1Week 1

All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Optical illusions

Digital Image ProcessingDigital Image Processing

Week 1Week 1

ALBASTRU BLUEVERDE GREENGALBEN YELLOW ROSU REDPORTOCALIU ORANGEGALBEN YELLOWVISINIU BURGUNDYALB WHITE

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Lightbullachromatic or monochromatic light - light that is void of color the attribute of such light is its intensity or amount

gray level is used to describe monochromatic intensitybecause it ranges from black to grays and to white bull chromatic light spans the electromagnetic energy spectrumfrom approximately 043 to 079 μm

quantities that describe the quality of a chromatic light source radiance

the total amount of energy that flows from the light source and it isusually measured in watts (W) luminance

measured in lumens (lm) gives a measure of the amount of energy an observer perceives from a light source

Digital Image ProcessingDigital Image Processing

Week 1Week 1

For example light emitted from a source operating in the far infraredregion of the spectrum could have significant energy (radiance) butan observer would hardly perceive it its luminance would be almostzero

brightnessa subjective descriptor of light perception that is practicallyimpossible to measure It embodies the achromatic notion ofintensity and is one of the key factors in describing color sensation

Digital Image ProcessingDigital Image Processing

Week 1Week 1

the physical meaning is determined by the source of the image

( )f D f x y

Image generated from a physical process f(xy) ndash proportional to the energy radiated by the physical source

f(xy) ndash characterized by two components

i(xy) = illumination component the amount of source illumination incident on the scene being viewed

r(xy) = reflectance component the amount of illumination reflected by the objects in the scene

( ) ( ) ( )

0 ( ) 0 ( ) 1

f x y i x y r x y

i x y r x y

Digital Image ProcessingDigital Image Processing

Week 1Week 1

r(xy)=0 - total absorption r(xy)=1 - total reflectance

i(xy) ndash determined by the illumination source

r(xy) ndash determined by the characteristics of the imaged objects

is called gray (or intensity) scale

In practice

min 0 0 max min min min max max max( ) L l f x y L L i r L i r

indoor values without additional illuminationmin max10 1000L L

black whitemin max0 1 0 1 0 1L L L L l l L

min maxL L

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image Processing

Week 1

Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed

scene

converting a continuous image f to digital form

- digitizing (x y) is called sampling

- digitizing f(x y) is called quantization

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Continuous image projected onto a sensor array Result of image sampling and quantization


Representing Digital Images: (x, y), x = 0, 1, …, M−1, y = 0, 1, …, N−1 – spatial variables or spatial coordinates

f(x, y) =
  [ f(0,0)      f(0,1)      …   f(0,N−1)
    f(1,0)      f(1,1)      …   f(1,N−1)
      ⋮            ⋮                ⋮
    f(M−1,0)    f(M−1,1)    …   f(M−1,N−1) ]

Each element of this matrix is called an image element or pixel.

Equivalently, in matrix notation:

A = [a_ij], an M × N matrix with a_ij = f(x = i, y = j) = f(i, j)

f(0,0) – the upper left corner of the image


M, N – positive integers; the number of intensity levels is L = 2^k

a_ij ∈ [0, L − 1]

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system Upper limit ndash determined by saturation lower limit - noise


Number of bits required to store a digitized image

b = M × N × k;  for a square image (M = N), b = N² · k

When an image can have 2^k intensity levels, the image is referred to as a k-bit image (256 discrete intensity values – an 8-bit image).


Spatial and Intensity Resolution

Spatial resolution – the smallest discernible detail in an image

Measures: line pairs per unit distance; dots (pixels) per unit distance

Image resolution = the largest number of discernible line pairs per unit distance (e.g., 100 line pairs per mm)

Dots per unit distance are commonly used in printing and publishing. In the US this measure is expressed in dots per inch (dpi). (Newspapers are printed at 75 dpi, glossy brochures at 175 dpi.)

Intensity resolution – the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations: L = 2^k; most common: k = 8

In practice, intensity resolution is given by k, the number of bits used to quantize intensity.


Fig 1 – Reducing spatial resolution: 1250 dpi (upper left), 300 dpi (upper right), 150 dpi (lower left), 72 dpi (lower right)


Reducing the number of gray levels: 256, 128, 64, 32


Reducing the number of gray levels: 16, 8, 4, 2


Image Interpolation - used in zooming shrinking rotating and geometric corrections

Shrinking, zooming – image resizing – image resampling methods

Interpolation is the process of using known data to estimate values at unknown locations

Suppose we have an image of size 500 × 500 pixels that has to be enlarged 1.5 times, to 750 × 750 pixels. One way to do this is to create an imaginary 750 × 750 grid with the same spacing as the original and then shrink it so that it fits exactly over the original image. The pixel spacing in the 750 × 750 grid will be less than in the original image.

Problem: assignment of intensity levels in the new 750 × 750 grid.

Nearest neighbor interpolation: assign to every point in the new grid (750 × 750) the intensity of the closest pixel (nearest neighbor) from the old/original grid (500 × 500).

This technique has the tendency to produce undesirable effects like severe distortion of

straight edges
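A minimal NumPy sketch of nearest neighbor resizing (the function name and the random test image are illustrative assumptions):

import numpy as np

def nearest_neighbor_resize(img, new_h, new_w):
    # for each point of the new grid, copy the closest pixel of the old grid
    h, w = img.shape
    rows = np.clip(np.round(np.arange(new_h) * h / new_h).astype(int), 0, h - 1)
    cols = np.clip(np.round(np.arange(new_w) * w / new_w).astype(int), 0, w - 1)
    return img[np.ix_(rows, cols)]

img = np.random.randint(0, 256, (500, 500), dtype=np.uint8)
zoomed = nearest_neighbor_resize(img, 750, 750)   # 500 x 500 -> 750 x 750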


Bilinear interpolation – assign to the new (x, y) location the intensity v(x, y) = a·x + b·y + c·x·y + d,

where the four coefficients are determined from the 4 equations in 4 unknowns that can

be written using the 4 nearest neighbors of point (x y)

Bilinear interpolation gives much better results than nearest neighbor interpolation with a

modest increase in computational effort
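A sketch of bilinear interpolation at one fractional location, assuming a NumPy image array; the closed-form weights below are equivalent to solving the 4 equations in 4 unknowns on a unit cell:

import numpy as np

def bilinear_at(img, x, y):
    # intensity v(x, y) from the 4 nearest neighbors of the point (x, y)
    h, w = img.shape
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, h - 1), min(y0 + 1, w - 1)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[x0, y0] + dx * (1 - dy) * img[x1, y0]
            + (1 - dx) * dy * img[x0, y1] + dx * dy * img[x1, y1])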

Bicubic interpolation – assign to the new (x, y) location an intensity that involves the 16 nearest neighbors of the point:

v(x, y) = Σ_{i=0}^{3} Σ_{j=0}^{3} c_ij · x^i · y^j

The coefficients c_ij are obtained by solving a 16×16 linear system written using the intensity levels of the 16 nearest neighbors of (x, y).


Generally bicubic interpolation does a better job of preserving fine detail than the

bilinear technique Bicubic interpolation is the standard used in commercial image editing

programs such as Adobe Photoshop and Corel Photopaint

Figure 2(a) is the same as Fig 1(d), which was obtained by reducing the resolution of the 1250 dpi image in Fig 1(a) to 72 dpi (the size shrank from 3692 × 2812 to 213 × 162 pixels) and then zooming the reduced image back to its original size. To generate Fig 1(d), nearest neighbor interpolation was used (both for shrinking and zooming).

Figures 2(b) and (c) were generated using the same steps but using bilinear and bicubic

interpolation respectively Figures 2(d)+(e)+(f) were obtained by reducing the resolution

from 1250 dpi to 150 dpi (instead of 72 dpi)


Fig 2 – Interpolation examples for zooming and shrinking (nearest neighbor, bilinear, bicubic)


Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors:

(x + 1, y), (x − 1, y), (x, y + 1), (x, y − 1)

This set of pixels, called the 4-neighbors of p, is denoted by N4(p).

The 4 diagonal neighbors of p have coordinates

(x + 1, y + 1), (x + 1, y − 1), (x − 1, y + 1), (x − 1, y − 1)

and are denoted ND(p).

The horizontal vertical and diagonal neighbors are called the 8-neighbors of p denoted

N8 (p)

If (xy) is on the border of the image some of the neighbor locations in ND(p) and N8(p)

fall outside the image
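A small helper illustrating the three neighborhoods (a sketch; the function name and the choice to drop out-of-image locations are assumptions):

def neighbors(x, y, M, N):
    # N4(p), ND(p) and N8(p) for pixel p = (x, y) in an M x N image
    n4 = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    nd = [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]
    inside = lambda p: 0 <= p[0] < M and 0 <= p[1] < N
    n4 = [p for p in n4 if inside(p)]
    nd = [p for p in nd if inside(p)]
    return n4, nd, n4 + nd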


Adjacency Connectivity Regions Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image, V ⊆ {0, 1} (V = {0} or V = {1})

- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, 1, …, 255}

We consider 3 types of adjacency:

(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if q ∈ N4(p)

(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if q ∈ N8(p)

(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if

q ∈ N4(p), or

q ∈ ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency It is introduced to eliminate the

ambiguities that often arise when 8-adjacency is used Consider the example


binary image, with V = {1}:

0 1 1
0 1 0
0 0 1

(the same 3 × 3 arrangement is shown three times in the figure: the pixel values, the pixels joined by 8-adjacency, and the pixels joined by m-adjacency)

The three pixels at the top (first line) in the above example show multiple (ambiguous)

8-adjacency as indicated by the dashed lines This ambiguity is removed by using

m-adjacency


A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

(x_0, y_0), (x_1, y_1), …, (x_n, y_n)

where (x_0, y_0) = (x, y), (x_n, y_n) = (s, t), and pixels (x_i, y_i) and (x_{i−1}, y_{i−1}) are adjacent for i = 1, 2, …, n.

The length of the path is n. If (x_0, y_0) = (x_n, y_n), the path is closed.

Depending on the type of adjacency considered the paths are 4- 8- or m-paths

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image R is a region of the image if R is a connected set

Two regions R1 and R2 are said to be adjacent if R1 ∪ R2 forms a connected set. Regions

that are not adjacent are said to be disjoint When referring to regions only 4- and

8-adjacency are considered


Suppose that an image contains K disjoint regions R_k, k = 1, 2, …, K, none of which touches the image border.

Let R_u = R_1 ∪ R_2 ∪ … ∪ R_K denote the union of the K regions, and (R_u)^c its complement.

We call all the points in R_u the foreground of the image, and the points in (R_u)^c the background of the image.

The boundary (border or contour) of a region R is the set of points that are adjacent to

points in the complement of R (R)c The border of an image is the set of pixels in the

region that have at least one background neighbor This definition is referred to as the

inner border to distinguish it from the notion of outer border which is the corresponding

border in the background


Distance measures

For pixels p q and z with coordinates (xy) (st) and (vw) respectively D is a distance

function or metric if

(a) D(p, q) ≥ 0, and D(p, q) = 0 iff p = q

(b) D(p, q) = D(q, p)

(c) D(p, z) ≤ D(p, q) + D(q, z)

The Euclidean distance between p and q is defined as

D_e(p, q) = [(x − s)² + (y − t)²]^(1/2)

The pixels q for which D_e(p, q) ≤ r are the points contained in a disk of radius r centered at (x, y).


The D4 distance (also called city-block distance) between p and q is defined as

D4(p, q) = |x − s| + |y − t|

The pixels q for which D4(p, q) ≤ r form a diamond centered at (x, y). For example, the pixels with D4 ≤ 2:

        2
      2 1 2
    2 1 0 1 2
      2 1 2
        2

The pixels with D4 = 1 are the 4-neighbors of (x, y).

The D8 distance (called the chessboard distance) between p and q is defined as

D8(p, q) = max(|x − s|, |y − t|)

The pixels q for which D8(p, q) ≤ r form a square centered at (x, y).


For example, the pixels with D8 ≤ 2:

2 2 2 2 2
2 1 1 1 2
2 1 0 1 2
2 1 1 1 2
2 2 2 2 2

The pixels with D8 = 1 are the 8-neighbors of (x, y).

D4 and D8 distances are independent of any paths that might exist between p and q, because these distances involve only the coordinates of the points.
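The three distances, as a quick NumPy sketch (illustrative only):

import numpy as np

def distances(p, q):
    (x, y), (s, t) = p, q
    de = np.hypot(x - s, y - t)        # Euclidean: ((x-s)^2 + (y-t)^2)^(1/2)
    d4 = abs(x - s) + abs(y - t)       # city block: |x-s| + |y-t|
    d8 = max(abs(x - s), abs(y - t))   # chessboard: max(|x-s|, |y-t|)
    return de, d4, d8

print(distances((0, 0), (2, 1)))       # (2.236..., 3, 2)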


Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis. Consider the 2×2 images

A = [a11 a12; a21 a22],  B = [b11 b12; b21 b22]

Array product:

a11·b11   a12·b12
a21·b21   a22·b22

Matrix product:

a11·b11 + a12·b21   a11·b12 + a12·b22
a21·b11 + a22·b21   a21·b12 + a22·b22

We assume array operations unless stated otherwise
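In NumPy, for example, the two products are written with different operators (illustrative sketch):

import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A * B)   # array (pixel-by-pixel) product: [[ 5 12] [21 32]]
print(A @ B)   # matrix product:                 [[19 22] [43 50]]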


Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether it is

linear or nonlinear

H[f(x, y)] = g(x, y)

H is said to be a linear operator if

H[a·f1(x, y) + b·f2(x, y)] = a·H[f1(x, y)] + b·H[f2(x, y)]

for any scalars a, b and any images f1, f2.

Example of a nonlinear operator: H[f] = max f(x, y), the maximum value of the pixels of image f.

Take

f1 = [0 2; 2 3],  f2 = [6 5; 4 7],  a = 1,  b = −1

Then

max(a·f1 + b·f2) = max [−6 −3; −2 −4] = −2

while

a·max(f1) + b·max(f2) = (1)·3 + (−1)·7 = −4 ≠ −2

so the max operator is not linear.

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise:

g(x, y) = f(x, y) + η(x, y)

where f(x, y) is the noiseless image and η(x, y) is the noise, uncorrelated with the image and with zero average value.

For a random variable z with mean m, the variance is E[(z − m)²] (E(·) denotes the expected value). The covariance of two random variables z1 and z2 is defined as E[(z1 − m1)(z2 − m2)]; the two random variables are uncorrelated when their covariance is 0.


Objective: reduce noise by averaging a set of noisy images g_i(x, y) (a technique frequently used in image enhancement):

ḡ(x, y) = (1/K) · Σ_{i=1}^{K} g_i(x, y)

If the noise satisfies the properties stated above, we have

E{ḡ(x, y)} = f(x, y)  and  σ²_ḡ(x, y) = (1/K) · σ²_η(x, y)

where E{ḡ(x, y)} is the expected value of ḡ, and σ²_ḡ(x, y) and σ²_η(x, y) are the variances of ḡ and η, respectively. The standard deviation (square root of the variance) at any point of the average image is

σ_ḡ(x, y) = (1/√K) · σ_η(x, y)


As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because E{ḡ(x, y)} = f(x, y), this means that ḡ(x, y) approaches f(x, y) as the number of noisy images used in the averaging process increases.
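A small simulation sketch of this effect, assuming NumPy and a constant test image (all names and parameters are chosen for the example); the measured standard deviation of the average should track σ_η / √K:

import numpy as np

rng = np.random.default_rng(0)
f = np.full((256, 256), 100.0)   # noiseless image (constant, for illustration)
sigma = 64.0                     # noise standard deviation

for K in (1, 5, 20, 100):
    g_bar = np.mean([f + rng.normal(0.0, sigma, f.shape) for _ in range(K)], axis=0)
    print(K, g_bar.std(), sigma / np.sqrt(K))   # the two values nearly agree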

An important application of image averaging is in the field of astronomy, where imaging under very low light levels frequently causes sensor noise to render single images virtually useless for analysis. Figure 3(a) shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels. Figures 3(b)-(f) show the result of averaging 5, 10, 20, 50 and 100 noisy images, respectively.


Fig 3 – Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (upper left); results of averaging 5, 10, 20, 50, 100 noisy images (panels a-f)


A frequent application of image subtraction is in the enhancement of differences between

images

Fig 4 – (a) Infrared image of the Washington, DC area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between images (a) and (b). Black (0) values in Figure 4(c) indicate locations where there is no difference between images (a) and (b).


Mask mode radiography: g(x, y) = f(x, y) − h(x, y)

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images called live images (denoted f(x, y)) of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium.

In g(x, y) we find the differences between h and f, as enhanced detail.

Since images are captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries in the area being observed.


Fig 5 – Angiography subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c) enhanced


An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form g(x, y) = f(x, y) · h(x, y), where f(x, y) is the "perfect image" and h(x, y) is the shading function.

When the shading function is known:

f(x, y) = g(x, y) / h(x, y)

When h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image.
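A minimal sketch of the correction by division, assuming NumPy arrays g and h of the same shape (the small epsilon guard against division by zero is an added assumption):

import numpy as np

def shading_correction(g, h, eps=1e-6):
    # recover f from g = f * h by dividing out the shading function h
    return g / np.maximum(h, eps)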


Fig 6 – Shading correction: (a) shaded image of a tungsten filament, magnified 130 times; (b) shading pattern; (c) corrected image


Another use of image multiplication is in masking, also called region of interest (ROI) operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of the ROI can be arbitrary, but usually it is rectangular.

Fig 7 – (a) digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b)


In practice most images are displayed using 8 bits, so image values are in the range [0, 255] (for TIFF and JPEG images the conversion to this range is automatic; the conversion depends on the system used). The difference of two images can produce an image with values in the range [−255, 255], and addition of two images gives the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure: compute

f_m = f − min(f)

f_s = K · [f_m / max(f_m)]

which gives an image f_s with values in the range [0, K] (K = 255 for 8-bit displays).
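The procedure above as a short NumPy sketch (the function name and the test difference image are illustrative):

import numpy as np

def rescale(f, K=255):
    fm = f - f.min()                                     # f_m = f - min(f)
    return (K * fm / max(fm.max(), 1)).astype(np.uint8)  # f_s, values in [0, K]

diff = np.random.randint(0, 256, (4, 4)) - np.random.randint(0, 256, (4, 4))
print(rescale(diff))   # difference image mapped back to [0, 255]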


Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image


Neighborhood operations

Let S_xy denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y) based on the values of the intensities of the points in S_xy. For example, if S_xy is a rectangular neighborhood of size m × n centered on (x, y), we can assign the new value of intensity by computing the average value of the pixels in S_xy:

g(x, y) = (1/(m·n)) · Σ_{(r,c) ∈ S_xy} f(r, c)

The net effect is to perform local blurring in the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
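A direct (unoptimized) NumPy sketch of this neighborhood averaging, assuming odd m and n and edge replication at the borders (both choices are assumptions of the example):

import numpy as np

def mean_filter(f, m=3, n=3):
    # replace each pixel by the average of its m x n neighborhood S_xy
    M, N = f.shape
    pad = np.pad(f.astype(float), ((m // 2,), (n // 2,)), mode='edge')
    g = np.zeros((M, N))
    for i in range(m):
        for j in range(n):
            g += pad[i:i + M, j:j + N]
    return g / (m * n)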


Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules)

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation: (x, y) = T[(v, w)]

(v, w) – pixel coordinates in the original image

(x, y) – pixel coordinates in the transformed image


T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform:

[x y 1] = [v w 1] · T = [v w 1] ·
  [ t11  t12  0
    t21  t22  0
    t31  t32  1 ]        (AT)

that is, x = t11·v + t21·w + t31 and y = t12·v + t22·w + t32.

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1


Table 1 – Affine transformations


The preceding transformations relocate pixels on an image to new locations. To complete the process, we have to assign intensity values to those locations. This task is done using intensity interpolation (nearest neighbor, bilinear, or bicubic interpolation).

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)


inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image: (v, w) = T⁻¹[(x, y)]

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in numerous commercial implementations of spatial transformations (MATLAB, for example).
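A sketch of inverse mapping for the affine transform (AT), assuming NumPy, nearest neighbor interpolation, and zero intensity for output pixels that map outside the input image (all three choices are illustrative):

import numpy as np

def warp_affine_inverse(f, T):
    # for each output pixel (x, y), sample the input at [v w 1] = [x y 1] T^(-1)
    M, N = f.shape
    Tinv = np.linalg.inv(T)
    xs, ys = np.meshgrid(np.arange(M), np.arange(N), indexing='ij')
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)], axis=1)
    vw = coords @ Tinv
    v = np.round(vw[:, 0]).astype(int)
    w = np.round(vw[:, 1]).astype(int)
    ok = (v >= 0) & (v < M) & (w >= 0) & (w < N)
    g = np.zeros_like(f)
    g.ravel()[ok] = f[v[ok], w[ok]]
    return g

T = np.diag([0.5, 0.5, 1.0])   # [x y 1] = [v w 1] T: shrink to half size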


Image registration – align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more images taken at approximately the same time but using different imaging systems (an MRI scanner and a PET scanner, for example)

- align images of a given location taken by the same instrument at different moments in time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image


How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points, both on the input image and on the reference image. A simple model based on a bilinear approximation is given by

x = c1·v + c2·w + c3·v·w + c4
y = c5·v + c6·w + c7·v·w + c8

where (v, w) and (x, y) are the coordinates of the tie points (the 4 tie-point pairs give an 8×8 linear system for c1, …, c8).
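A sketch of estimating c1, …, c8 with NumPy: since x depends only on c1-c4 and y only on c5-c8, the 8×8 system decouples into two 4×4 systems (the function name and array layout are assumptions):

import numpy as np

def bilinear_registration_model(vw, xy):
    # vw, xy: 4 x 2 arrays of tie-point coordinates (v, w) and (x, y)
    A = np.array([[v, w, v * w, 1.0] for v, w in vw])
    cx = np.linalg.solve(A, xy[:, 0])   # c1, c2, c3, c4
    cy = np.linalg.solve(A, xy[:, 1])   # c5, c6, c7, c8
    return cx, cy

vw = np.array([[0, 0], [0, 10], [10, 0], [10, 10]])
xy = np.array([[1, 1], [1, 12], [12, 1], [12, 14]])
cx, cy = bilinear_registration_model(vw, xy)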


When 4 tie points are insufficient to obtain satisfactory registration, an approach used frequently is to select a larger number of tie points and, using this new set, subdivide the image into rectangular regions marked by groups of 4 tie points. On each subregion marked by 4 tie points we apply the transformation model described above.

The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometrical distortion.


(a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c)


Probabilistic Methods

z_i = the values of all possible intensities in an M×N digital image, i = 0, 1, …, L−1

p(z_k) = the probability that the intensity level z_k occurs in the given image:

p(z_k) = n_k / (M·N)

where n_k = the number of times that intensity z_k occurs in the image (M·N is the total number of pixels in the image). Note that

Σ_{k=0}^{L−1} p(z_k) = 1

The mean (average) intensity of an image is given by

m = Σ_{k=0}^{L−1} z_k · p(z_k)


The variance of the intensities is

σ² = μ_2(z) = Σ_{k=0}^{L−1} (z_k − m)² · p(z_k)

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

μ_n(z) = Σ_{k=0}^{L−1} (z_k − m)ⁿ · p(z_k)

(note that μ_0(z) = 1, μ_1(z) = 0, μ_2(z) = σ²)

μ_3(z) > 0: the intensities are biased to values higher than the mean

μ_3(z) < 0: the intensities are biased to values lower than the mean

μ_3(z) ≈ 0: the intensities are distributed approximately equally on both sides of the mean

Fig 1 – (a) Low contrast, (b) medium contrast, (c) high contrast

Figure 1(a) – standard deviation 14.3 (variance = 204.5)

Figure 1(b) – standard deviation 31.6 (variance = 998.6)

Figure 1(c) – standard deviation 49.2 (variance = 2420.6)
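These statistics can be computed directly from the normalized histogram p(z_k); a minimal NumPy sketch, assuming an 8-bit image:

import numpy as np

def intensity_stats(img, L=256):
    p = np.bincount(img.ravel(), minlength=L) / img.size   # p(z_k) = n_k / MN
    z = np.arange(L)
    m = np.sum(z * p)                  # mean intensity
    mu2 = np.sum((z - m) ** 2 * p)     # variance (a measure of contrast)
    mu3 = np.sum((z - m) ** 3 * p)     # third moment (bias about the mean)
    return m, mu2, mu3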


Intensity Transformations and Spatial Filtering

g(x, y) = T[f(x, y)]

f(x, y) – input image; g(x, y) – output image; T – an operator on f, defined over a neighborhood of (x, y)

- the neighborhood of the point (x, y), denoted S_xy, usually is rectangular, centered on (x, y), and much smaller in size than the image


- spatial filtering: the operator T (the neighborhood and the operation applied on it) is called a spatial filter (also spatial mask, kernel, template, or window)

When S_xy is the single point (x, y), T becomes an intensity (gray-level or mapping) transformation function:

s = T(r)

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2 (left) – T produces an output image of higher contrast than the original, by darkening the intensity levels below k and brightening the levels above k; this technique is called contrast stretching.

Fig 2 – Intensity transformation functions: left – contrast stretching; right – thresholding function


Figure 2 (right) – T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image


Original Negative image


Log Transformations: s = T(r) = c · log(1 + r), where c is a constant and r ≥ 0

Some basic intensity transformation functions


This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) – intensity values in the range 0 to 1.5 × 10⁶

Figure 4(b) – log transformation of Figure 4(a) with c = 1 – range 0 to 6.2

Fig 4 – (a) Fourier spectrum; (b) log transformation applied to (a), c = 1


Power-Law (Gamma) Transformations

s = T(r) = c · r^γ (c, γ – positive constants; sometimes written s = c · (r + ε)^γ to account for a measurable offset)

Plots of the gamma transformation for different values of γ (c = 1)


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. The curves with γ > 1 have the opposite effect of those generated with γ < 1.

c = γ = 1 – identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction
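A minimal NumPy sketch of the gamma transformation, normalizing the intensities to [0, 1] before applying s = c·r^γ (the names and the normalization convention are assumptions):

import numpy as np

def gamma_transform(img, c=1.0, gamma=1.0):
    r = img.astype(float) / 255.0      # normalize 8-bit intensities
    s = c * r ** gamma                 # s = c * r^gamma
    return np.clip(255 * s, 0, 255).astype(np.uint8)

# gamma < 1 brightens dark regions; gamma > 1 darkens them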


(a) aerial image; (b)-(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0 and 5.0, respectively


Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

Fig 5 – (a) piecewise-linear transformation function; (b) low-contrast image; (c) result of contrast stretching; (d) result of thresholding


T(r) =
  (s1 / r1) · r,                                       r ∈ [0, r1]
  s1 + ((s2 − s1) / (r2 − r1)) · (r − r1),             r ∈ [r1, r2]
  s2 + ((L − 1 − s2) / (L − 1 − r2)) · (r − r2),       r ∈ [r2, L − 1]


r1 = s1 and r2 = s2: identity transformation (no change)

r1 = r2, s1 = 0, s2 = L − 1: thresholding function

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c) – contrast stretching, obtained by setting the parameters (r1, s1) = (r_min, 0) and (r2, s2) = (r_max, L − 1), where r_min and r_max denote the minimum and maximum gray levels in the image, respectively. The transformation function stretched the levels linearly from their original range to the full range [0, L − 1].

Figure 5(d) – the thresholding function was used, with (r1, s1) = (m, 0) and (r2, s2) = (m, L − 1), where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.
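A sketch of this piecewise-linear stretch via a lookup table built with np.interp, assuming 0 < r1 <= r2 < L − 1 (names are illustrative):

import numpy as np

def contrast_stretch(img, r1, s1, r2, s2, L=256):
    r = np.arange(L)
    lut = np.interp(r, [0, r1, r2, L - 1], [0, s1, s2, L - 1])  # 3 linear pieces
    return lut[img].astype(np.uint8)

img = np.random.randint(80, 150, (64, 64), dtype=np.uint8)   # low-contrast input
out = contrast_stretch(img, img.min(), 0, img.max(), 255)    # full-range stretch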


Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities (Figure 3.11(a))

2. brighten (or darken) the desired range of intensities but leave unchanged all other intensities in the image (Figure 3.11(b))


Figure 6 (left) – aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the scale of intensities. This type of enhancement produces a binary image,

(transformation functions: one highlights intensity range [A, B] and reduces all other intensities to a lower level; the other highlights range [A, B] and preserves all other intensities)


which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example).

In Figure 6 (right) the second technique was used: a band of intensities in the mid-gray range around the mean intensity was set to black; the other intensities remain unchanged.

Fig 6 - Aortic angiogram and intensity sliced versions


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2.

This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest order bit; plane 8 – the highest order bit).


The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all the intensities between 0 and 127 to 0, and maps all levels between 128 and 255 to 1. The bit-slicing technique is useful for analyzing the relative importance of each bit in the image, and thus helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
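A minimal NumPy sketch of bit-plane extraction (here planes are indexed k = 0, …, 7, so "plane 8" above corresponds to k = 7):

import numpy as np

def bit_plane(img, k):
    # binary image of bit plane k (k = 0: lowest order, k = 7: highest order)
    return (img >> k) & 1

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
plane8 = bit_plane(img, 7)   # equivalent to thresholding at 128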

  • DIP 1 2017
  • DIP 02 (2017)
Page 10: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Distinction between image processing image analysis computer vision

low-level mid-level high-level processes

Low-level processes image preprocessing to reduce noise contrast enhancement image sharpening both input and output are images

Mid-level processes segmentation partitioning images into regions or objects description of the objects for computer processing classificationrecognition of individual objects inputs are generally images outputs are attributes extracted from the input image (eg edges contours identity of individual objects)

High-level processes ldquomaking senserdquo of a set of recognized objects performing the cognitive functions associated with vision

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image Processing (Gonzalez + Woods) =

processes whose inputs and outputs are images +

processes that extract attributes from images recognition of individual objects

(low- and mid-level processes)

Example

automated analysis of text = acquiring an image containing text preprocessing the image (enhancement sharpening) extracting (segmenting) the individual characaters describing the characters in a form suitable for computer processing recognition of individual characters

Digital Image ProcessingDigital Image Processing

Week 1Week 1

The Origins of DIP

Newspaper industry pictures were sent by submarine cable between London and New York

Before Bartlane cable picture transmission system (early 1920s) ndash 1 week

With Bartlane system less than 3 hours

Specialized printing equipment coded pictures for cable transmission and reconstructed them at the receiving end (1920s -5 distict levels of gray 1929 ndash 15 levels)

This example is not DIP the computer is not involved

DIP is linked and devolps in the same rhythm as digital computers (data storage display and transmission)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

A digital pictureproduced in 1921from a coded tapeby a telegraph printer withspecial type faces(McFarlane)

A digital picturemade in 1922from a tapepunched after the signals hadcrossed the Atlantic twice(McFarlane)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

1964 Jet Propulsion Laboratory (Pasadena California) processed pictures of the moon transmitted by Ranger 7 (corrected image distortions)

The first picture of the moon by a US spacecraft Ranger 7 took this image July 31 1964 about 17 minutes before impacting the lunar surface(Courtesy of NASA)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

1960-1970 ndash image processing techniques were used in medical image remote Earth resources observations astronomy

1970s ndash invention of CAT (computerized axial tomography)

CAT is a process in which a ring of detectors encircles an object (patient) and a X-ray source concentric with the detector ring rotates about the object The X-ray passes through the patient and the resulted waves are collected at the opposite end by the detectors As the source rotates the procedure is repeated Tomography consists of algorithms that use the sense data to construct an image that represent a ldquoslicerdquo through the object Motion of the object in a direction perpendicular to the ring of detectors produces a set of ldquoslicesrdquo which can be assembled in a 3D information of the inside of the object

Digital Image ProcessingDigital Image Processing

Week 1Week 1

loz geographers use DIP to study pollution patterns from aerial and satellite imagery

loz archeology ndash DIP allowed restoring blurred pictures that recorded rare artifacts lost or damaged after being photographed

loz physics ndash enhance images of experiments (high-energy plasmas electron microscopy)

loz astronomy biology nuclear medicine law enforcement industry

DIP ndash used in solving problems dealing with machine perception ndashextracting from an image information suitable for computer processing (statistical moments Fourier transform coefficients hellip)

loz automatic character recognition industrial machine vision for product assembly and inspection military recognizance automatic processing of fingerprints machine processing of aerial and satellite imagery for weather prediction Internet

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Examples of Fields that Use DIP

Images can be classified according to their sources (visual X-ray hellip)

Energy sources for images electromagnetic energy spectrum acoustic ultrasonic electronic computer- generated

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Electromagnetic waves can be thought as propagating sinusoidal

waves of different wavelength or as a stream of massless particles

each moving in a wavelike pattern with the speed of light Each

massless particle contains a certain amount (bundle) of energy Each

bundle of energy is called a photon If spectral bands are grouped

according to energy per photon we obtain the spectrum shown in the

image above ranging from gamma-rays (highest energy) to radio

waves (lowest energy)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Gamma-Ray Imaging

Nuclear medicine astronomical observations

Nuclear medicine

the approach is to inject a patient with a radioactive isotope that emits gamma rays as it decays

Images are produced from the emissions collected by gamma-ray detectors

Images of this sort are used to locate sites of bone pathology (infections tumors)

PET (positron emision tomography) ndash the patient is given a radioactive isotope that emits positrons as it decays

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Examples of gamma-ray imaging

Bone scan PET image

Digital Image ProcessingDigital Image Processing

Week 1Week 1

X-ray imaging

Medical diagnosticindustry astronomy

A X-ray tube is a vacuum tube with a cathode and an anode The cathode is heated causing free electrons to be released The electrons flows at high speed to the positively charged anode When the electrons strike a nucleus energy is released in the form of a X-ray radiation The energy (penetreting power) of the X-rays is controlled by a voltage applied across the anode and by a curent applied to the filament in the cathode

The intensity of the X-rays is modified by absorbtion as they pass through the patient and the resulting energy falling develops it much in the same way that light develops photographic film

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Angiography = contrast-enhancement radiography

Angiograms = images of blood vessels

A catheter is inserted into an artery or vein in the groin The catheter is threaded into the blood vessel and guided to the area to be studied When it reaches the area to be studied a X-ray contrast medium is injected through the catheter This enhances contrast of the blood vessels and enables radiologist to see anyirregularities or blockages

X-rays are used in CAT (computerized axial tomography)

X-rays used in industrial processes (examine circuit boards for flows in manifacturing)

Industrial CAT scans are useful when the parts can be penetreted by X-rays

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Examples of X-ray imaging

Chest X-rayAortic angiogram

Head CT Cygnus LoopCircuit boards

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Ultraviolet Band

Litography industrial inspection microscopy biological imaging astronomical observations

Ultraviolet light is used in fluorescence microscopy

Ultraviolet light is not visible to human eye but when a photon of ultraviolet radiation collides with an electron in an atom of a fluorescent material it elevates the electron to a higher energy level After that the electron relaxes to a lower level and emits light in the form of a lower-energy photon in the visible (red) light regionFluorescence = emission of light by a substance that has absorbed light or

other electromagentic radiation of a different wavelengthFlourescence microscope = uses an excitation light to irradiate a prepared specimen

and then it separates the much weaker radiating fluorescent light from the brighter excitation light

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Visible and Infrared Bands

Light microscopy astronomy remote sensing industry law enforcement

LANDSAT satellite ndash obtained and transmitted images of the Earth from space for purpose of monitoring environmental conditions on the planet

Weather observations and prediction produce major applications of multispectral image from satellites

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Table 1 ndash Thematic bands in NASArsquos LANDSAT satellite

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Satellite images of Washington DC area in spectral bands of the Table 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Examples of light microscopy

Taxol (anticancer agent)magnified 250X

Cholesterol(40X)

Microprocessor(60X)

Nickel oxidethin film(600X)

Surface of audio CD(1750X)

Organicsuperconductor(450X)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Automated visual inspection of manufactured goods

a bc de f

a ndash a circuit board controllerb ndash packaged pillesc ndash bottlesd ndash air bubbles in clear-plastic producte ndash cerealf ndash image of intraocular implant

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Microwave Band

The dominant aplication of imaging in the microwave band ndash radar

bull radar has the ability to collect data over virtually any region at any time regardless of weather or ambient light conditions

bull some radar waves can penetrate clouds under certain conditions canpenetrate vegetation ice dry sand

bull sometimes radar is the only way to explore inaccessible regions of theEarthrsquos surface

An imaging radar works like a flash camera it provides its own illumination(microwave pulses) to light an area on the ground and take a snapshot image Instead of a camera lens a radar uses an antenna and a digital device torecord the images In a radar image one can see only the microwave energythat was reflected back toward the radar antenna

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Spaceborne radar image of mountains in southeast Tibet

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Radio Band

medicine astronomy

MRI = Magnetic Resonance Imaging

This technique places the patient in a powerful magnet and passes short pulses of radio waves through his or her body

Each pulse causes a responding pulse of radio waves to be emited by the patinet tissues

The location from which these signals originate and their strength are determined by a computer which produces a 2D picture of a section of the patient

Digital Image ProcessingDigital Image Processing

Week 1Week 1

MRI images of a human knee (left) and spine (right)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Images of the Crab Pulsar covering the electromagnetic spectrum

Gamma X-ray Optical Infrared Radio

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Other Imaging Modalities

acoustic imaging electron microscopy synthetic (computer-generated) imaging

Imaging using sound geological explorations industry medicine

Mineral and oil exploration

For image acquisition over land one of the main approaches is to use a large truck an a large flat steel plate The plate is pressed on the ground by the truck and the truck is vibratedthrough a frequency spectrum up to 100 Hz The strength and the speed of the returning sound waves are determined by the composition of the Earth below the surface These are analysed by a computer and images are generated from the resulting analysis

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Biometry - iris

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Biometry - fingerprint

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Face detection and recognition

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Gender identification

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image morphing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Fundamental Steps in DIP

methods whose input and output are images

methods whose inputs are images but whose outputs are attributes extracted from those images

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Outputs are images

bull image acquisition

bull image filtering and enhancement

bull image restoration

bull color image processing

bull wavelets and multiresolution processing

bull compression

bull morphological processing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Outputs are attributes

bull morphological processing

bull segmentation

bull representation and description

bull object recognition

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image acquisition - may involve preprocessing such as scaling

Image enhancement

bull manipulating an image so that the result is more suitable than the original for a specific operation

bull enhancement is problem oriented

bull there is no general sbquotheoryrsquo of image enhancement

bull enhancement use subjective methods for image emprovement

bull enhancement is based on human subjective preferences regarding what is a bdquogoodrdquo enhancement result

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image restoration

bull improving the appearance of an image

bull restoration is objective - the techniques for restoration are based on mathematical or probabilistic models of image degradation

Color image processing

bull fundamental concept in color models

bull basic color processing in a digital domain

Wavelets and multiresolution processing

representing images in various degree of resolution

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Compression

reducing the storage required to save an image or the bandwidth required to transmit it

Morphological processing

bull tools for extracting image components that are useful in the representation and description of shape

bull a transition from processes that output images to processes that outputimage attributes

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Segmentation

bull partitioning an image into its constituents parts or objects

bull autonomous segmentation is one of the most difficult tasks of DIP

bull the more accurate the segmentation the more likley recognition is to succeed

Representation and description (almost always follows segmentation)

bull segmentation produces either the boundary of a region or all the poits in the region itself

bull converting the data produced by segmentation to a form suitable for computer processing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

bull boundary representation the focus is on external shape characteristics such as corners or inflections

bull complete region the focus is on internal properties such as texture or skeletal shape

bull description is also called feature extraction ndash extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another

Object recognition

the process of assigning a label (eg bdquovehiclerdquo) to an object based on itsdescriptors

Knowledge database

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Simplified diagramof a cross sectionof the human eye

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Three membranes enclose the eye the cornea and sclera outer cover the choroid the retina

The cornea is a tough transparent tissue that covers the anterior surface of the eye

Continuous with the cornea the sclera is an opaque membrane that enclosesthe remainder of the optic globe

The choroid lies directly below the sclera This membrane contains a network of blood vessels (major nutrition of the eye) The choroid is pigmented and help reduce the amount of light entering the eyeThe choroid is divided (at its anterior extreme) into the ciliary body and the irisThe iris contracts and expands to control the amount of light

Digital Image ProcessingDigital Image Processing

Week 1Week 1

The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to ciliary body (60-70 water 6 fat protein) The lens is colored in slightly yellow The lens absorbs approximatively 8 of the visiblelight (infrared and ultraviolet light are absorbed by proteins in lens)

The innermost membrane is the retina When the eye is proper focusedlight from an object outside the eye is imaged on the retinaVision is possible because of the distribution of discrete light receptors on the surface of the retina cones and rods (6-7 milion cones 75-150 milion rods)

Cones located in the central part of the retina (fovea) they are sensitive to colors vision of detail each cone is link to its own nervecone vision = photopic or bright-light vision

Fovea = the place where the image of the object of interest falls on

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Rods distributed over al the retina surface several rods are contected toa single nerve not specialized in detail vision serve to give a general overall picture of the filed of viewnot involved in color visionsensitive to low level of illumination

Blind spot region without receptors

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image formation in the eye

Ordinary photographic camera the lens has fixed focal length focusing atvarious distances is done by modifying the distance between the lens and the image plane (were the film or imaging chip are located)

Human eye the distance between the lens and the retina (the imaging region)is fixed the focal length needed to achieve proper focus is obtained by varyingthe shape of the lens (the fibers in the ciliary body accomplish this flattening or thickening the lens for distant or near objects respectively

distance between lens and retina along visual axix = 17 mm

range of focal length = 14 mm to 17 mm

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Illustration of Mach band effectPercieved intensityis not a simple function of actual intensity

Digital Image ProcessingDigital Image Processing

Week 1Week 1

All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Optical illusions

Digital Image ProcessingDigital Image Processing

Week 1Week 1

ALBASTRU BLUEVERDE GREENGALBEN YELLOW ROSU REDPORTOCALIU ORANGEGALBEN YELLOWVISINIU BURGUNDYALB WHITE

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Lightbullachromatic or monochromatic light - light that is void of color the attribute of such light is its intensity or amount

gray level is used to describe monochromatic intensitybecause it ranges from black to grays and to white bull chromatic light spans the electromagnetic energy spectrumfrom approximately 043 to 079 μm

quantities that describe the quality of a chromatic light source radiance

the total amount of energy that flows from the light source and it isusually measured in watts (W) luminance

measured in lumens (lm) gives a measure of the amount of energy an observer perceives from a light source

Digital Image ProcessingDigital Image Processing

Week 1Week 1

For example light emitted from a source operating in the far infraredregion of the spectrum could have significant energy (radiance) butan observer would hardly perceive it its luminance would be almostzero

brightnessa subjective descriptor of light perception that is practicallyimpossible to measure It embodies the achromatic notion ofintensity and is one of the key factors in describing color sensation

Digital Image ProcessingDigital Image Processing

Week 1Week 1

the physical meaning is determined by the source of the image

( )f D f x y

Image generated from a physical process f(xy) ndash proportional to the energy radiated by the physical source

f(xy) ndash characterized by two components

i(xy) = illumination component the amount of source illumination incident on the scene being viewed

r(xy) = reflectance component the amount of illumination reflected by the objects in the scene

( ) ( ) ( )

0 ( ) 0 ( ) 1

f x y i x y r x y

i x y r x y

Digital Image ProcessingDigital Image Processing

Week 1Week 1

r(xy)=0 - total absorption r(xy)=1 - total reflectance

i(xy) ndash determined by the illumination source

r(xy) ndash determined by the characteristics of the imaged objects

is called gray (or intensity) scale

In practice

min 0 0 max min min min max max max( ) L l f x y L L i r L i r

indoor values without additional illuminationmin max10 1000L L

black whitemin max0 1 0 1 0 1L L L L l l L

min maxL L

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image Processing

Week 1

Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed

scene

converting a continuous image f to digital form

- digitizing (x y) is called sampling

- digitizing f(x y) is called quantization

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Continuous image projected onto a sensor array Result of image sampling and quantization

Digital Image Processing

Week 1

Representing Digital Images (xy) x = 01hellipM-1 y = 01hellipN-1 ndash spatial variables or spatial coordinates

(00) (01) (0 1)(10) (11) (1 1)

( )

( 10) ( 11) ( 1 1)

f f f Nf f f N

f x y

f M f M f M N

image element pixel

00 01 0 1

10 11 1 1

10 11 1 1

( ) ( )

N

i jN M N

i j

M M M N

a a aa f x i y j f i ja a a

Aa

a a a

f(00) ndash the upper left corner of the image

Digital Image Processing

Week 1

M N ge 0 L=2k

[0 1]i j i ja a L

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system Upper limit ndash determined by saturation lower limit - noise

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Number of bits required to store a digitized image

for 2 b M N k M N b N k

When an image can have 2k intensity levels the image is referred as a k-bit image 256 discrete intensity values ndash 8-bit image

Digital Image Processing

Week 1

Spatial and Intensity Resolution Spatial resolution ndash the smallest discernible detail in an image

Measures line pairs per unit distance dots (pixels) per unit distance

Image resolution = the largest number of discernible line pairs per unit distance

(eg 100 line pairs per mm)

Dots per unit distance are commonly used in printing and publishing

In US the measure is expressed in dots per inch (dpi)

(newspapers are printed with 75 dpi glossy brochures at 175 dpi)

Intensity resolution ndash the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations

L=2k ndash most common k = 8

Intensity resolution in practice is given by k (number of bits used to quantize intensity)

Digital Image Processing

Week 1

Fig1 Reducing spatial resolution 1250 dpi(upper left) 300 dpi (upper right)

150 dpi (lower left) 72 dpi (lower right)

Digital Image Processing

Week 1

Reducing the number of gray levels 256 128 64 32

Digital Image Processing

Week 1

Reducing the number of gray levels 16 8 4 2

Digital Image Processing

Week 1

Image Interpolation - used in zooming shrinking rotating and geometric corrections

Shrinking zooming ndash image resizing ndash image resampling methods

Interpolation is the process of using known data to estimate values at unknown locations

Suppose we have an image of size 500 times 500 pixels that has to be enlarged 15 times to

750times750 pixels One way to do this is to create an imaginary 750 times 750 grid with the

same spacing as the original and then shrink it so that it fits exactly over the original

image The pixel spacing in the 750 times 750 grid will be less than in the original image

Problem assignment of intensity-level in the new 750 times 750 grid

Nearest neighbor interpolation assign for every point in the new grid (750 times 750) the

intensity of the closest pixel (nearest neighbor) from the oldoriginal grid (500 times 500)

This technique has the tendency to produce undesirable effects like severe distortion of

straight edges

Digital Image Processing

Week 1

Bilinear interpolation ndash assign for the new (x y) location the following intensity ( )v x y a x b y c x y d

where the four coefficients are determined from the 4 equations in 4 unknowns that can

be written using the 4 nearest neighbors of point (x y)

Bilinear interpolation gives much better results than nearest neighbor interpolation with a

modest increase in computational effort

Bicubic interpolation ndash assign for the new (x y) location an intensity that involves the 16

nearest neighbors of the point 3 3

0 0

( ) i ji j

i jv x y c x y

The coefficients cij are obtained solving a 16x16 linear system

intensity levels of the 16 nearest neighbors of 3 3

0 0

( )i ji j

i jc x y x y

Digital Image Processing

Week 1

Generally bicubic interpolation does a better job of preserving fine detail than the

bilinear technique Bicubic interpolation is the standard used in commercial image editing

programs such as Adobe Photoshop and Corel Photopaint

Figure 2 (a) is the same as Fig 1 (d) which was obtained by reducing the resolution of

the 1250 dpi in Fig 1(a) to 72 dpi (the size shrank from 3692 times 2812 to 213 times 162) and

then zooming the reduced image back to its original size To generate Fig 1(d) nearest

neighbor interpolation was used (both for shrinking and zooming)

Figures 2(b) and (c) were generated using the same steps but using bilinear and bicubic

interpolation respectively Figures 2(d)+(e)+(f) were obtained by reducing the resolution

from 1250 dpi to 150 dpi (instead of 72 dpi)

Digital Image Processing

Week 1

Fig 2 ndash Interpolation examples for zooming and shrinking (nearest neighbor linear bicubic)


Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors:

(x + 1, y), (x − 1, y), (x, y + 1), (x, y − 1)

This set of pixels, called the 4-neighbors of p, is denoted by N4(p).

The 4 diagonal neighbors of p have coordinates

(x + 1, y + 1), (x + 1, y − 1), (x − 1, y + 1), (x − 1, y − 1)

and are denoted ND(p).

The horizontal vertical and diagonal neighbors are called the 8-neighbors of p denoted

N8 (p)

If (xy) is on the border of the image some of the neighbor locations in ND(p) and N8(p)

fall outside the image
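A small helper that enumerates these neighbor sets, discarding locations that fall outside an M x N image, might look like this (a sketch; the names are my own):

def neighbors(x, y, M, N):
    n4 = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    nd = [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]
    inside = lambda p: 0 <= p[0] < M and 0 <= p[1] < N
    n4 = [p for p in n4 if inside(p)]        # N4(p)
    nd = [p for p in nd if inside(p)]        # ND(p)
    return n4, nd, n4 + nd                   # N8(p) = N4(p) plus ND(p)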


Adjacency Connectivity Regions Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image, V ⊆ {0, 1} (e.g., V = {0} or V = {1})

- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, …, 255}

We consider 3 types of adjacency

(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if q ∈ N4(p)

(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if q ∈ N8(p)

(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if

q ∈ N4(p), or

q ∈ ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency It is introduced to eliminate the

ambiguities that often arise when 8-adjacency is used Consider the example


binary image, with V = {1}:

0 1 1
0 1 0
0 0 1

(the original figure shows this 3×3 arrangement three times: as given, with the 8-adjacency paths drawn as dashed lines, and with the m-adjacency paths drawn)

The three pixels at the top (first line) in the above example show multiple (ambiguous)

8-adjacency as indicated by the dashed lines This ambiguity is removed by using

m-adjacency


A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

(x0, y0), (x1, y1), …, (xn, yn)

where (x0, y0) = (x, y), (xn, yn) = (s, t), and the pixels (xi, yi) and (xi−1, yi−1) are adjacent for i = 1, 2, …, n.

The length of the path is n. If (x0, y0) = (xn, yn), the path is closed.

Depending on the type of adjacency considered the paths are 4- 8- or m-paths

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image R is a region of the image if R is a connected set

Two regions R1 and R2 are said to be adjacent if R1 ∪ R2 forms a connected set. Regions

that are not adjacent are said to be disjoint When referring to regions only 4- and

8-adjacency are considered


Suppose that an image contains K disjoint regions Rk, k = 1, …, K, none of which touches the image border.

Let Ru denote the union of all K regions, Ru = R1 ∪ … ∪ RK, and let (Ru)^c denote its complement.

We call all the points in Ru the foreground of the image, and the points in (Ru)^c the background of the image.

The boundary (border or contour) of a region R is the set of points that are adjacent to points in the complement of R, (R)^c. The border of an image is the set of pixels in the region that have at least one background neighbor. This definition is referred to as the inner border, to distinguish it from the notion of outer border, which is the corresponding border in the background.


Distance measures

For pixels p q and z with coordinates (xy) (st) and (vw) respectively D is a distance

function or metric if

(a) D(p, q) ≥ 0, and D(p, q) = 0 iff p = q

(b) D(p, q) = D(q, p)

(c) D(p, z) ≤ D(p, q) + D(q, z)

The Euclidean distance between p and q is defined as

De(p, q) = [(x − s)² + (y − t)²]^(1/2)

The pixels q for which De(p, q) ≤ r are the points contained in a disk of radius r centered at (x, y).


The D4 distance (also called city-block distance) between p and q is defined as

D4(p, q) = |x − s| + |y − t|

The pixels q for which D4(p, q) ≤ r form a diamond centered at (x, y). For example, the pixels with D4 ≤ 2:

        2
      2 1 2
    2 1 0 1 2
      2 1 2
        2

The pixels with D4 = 1 are the 4-neighbors of (x y)

The D8 distance (called the chessboard distance) between p and q is defined as

D8(p, q) = max(|x − s|, |y − t|)

The pixels q for which D8(p, q) ≤ r form a square centered at (x, y). For example, the pixels with D8 ≤ 2:


    2 2 2 2 2
    2 1 1 1 2
    2 1 0 1 2
    2 1 1 1 2
    2 2 2 2 2

The pixels with D8 = 1 are the 8-neighbors of (x y)

D4 and D8 distances are independent of any paths that might exist between p and q

because these distances involve only the coordinates of the points.
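The three metrics in code (an illustrative sketch):

import math

def d_e(p, q):    # Euclidean distance
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d_4(p, q):    # city-block distance
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d_8(p, q):    # chessboard distance
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (2, 1)
print(d_e(p, q), d_4(p, q), d_8(p, q))    # 2.236..., 3, 2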


Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

Consider the 2×2 images

A =  a11  a12      B =  b11  b12
     a21  a22           b21  b22

Array product (element-wise):

A × B =  a11·b11   a12·b12
         a21·b21   a22·b22

Matrix product:

A·B =  a11·b11 + a12·b21   a11·b12 + a12·b22
       a21·b11 + a22·b21   a21·b12 + a22·b22

We assume array operations unless stated otherwise
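In NumPy the distinction is the difference between the * and @ operators (illustrative sketch):

import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
print(A * B)    # array (element-wise) product: [[ 5 12] [21 32]]
print(A @ B)    # matrix product:               [[19 22] [43 50]]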


Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether it is

linear or nonlinear

H[f(x, y)] = g(x, y)

H is said to be a linear operator if

H[a·f1(x, y) + b·f2(x, y)] = a·H[f1(x, y)] + b·H[f2(x, y)]

for any scalars a, b and any images f1, f2.

Example of a nonlinear operator:

H[f] = max{f(x, y)}, the maximum value of the pixels of image f

Let

f1 = [0 2; 2 3], f2 = [6 5; 4 7], a = 1, b = −1

Then

max{a·f1 + b·f2} = max{[−6 −3; −2 −4]} = −2

while

a·max{f1} + b·max{f2} = 1·3 + (−1)·7 = −4 ≠ −2
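The same counterexample can be checked numerically (sketch):

import numpy as np

f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 5], [4, 7]])
a, b = 1, -1
lhs = np.max(a * f1 + b * f2)             # max of [[-6 -3] [-2 -4]] = -2
rhs = a * np.max(f1) + b * np.max(f2)     # 3 - 7 = -4
print(lhs, rhs)                           # -2 != -4, so max is not linear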

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise:

g(x, y) = f(x, y) + η(x, y)

f(x, y) - the noiseless image; η(x, y) - the noise, uncorrelated and with 0 average value.

For a random variable z with mean m, E[(z − m)²] is the variance (E(·) is the expected value). The covariance of two random variables z1 and z2 is defined as E[(z1 − m1)(z2 − m2)]. The two random variables are uncorrelated when their covariance is 0.


Objective: reduce noise by averaging a set of noisy images gi(x, y) (a technique frequently used in image enhancement):

ḡ(x, y) = (1/K) · Σ_{i=1}^{K} gi(x, y)

If the noise satisfies the properties stated above we have

E{ḡ(x, y)} = f(x, y)   and   σ²ḡ(x,y) = (1/K)·σ²η(x,y)

E{ḡ(x, y)} is the expected value of ḡ, and σ²ḡ(x,y) and σ²η(x,y) are the variances of ḡ and η, respectively. The standard deviation (square root of the variance) at any point in the average image is

σḡ(x,y) = (1/√K)·ση(x,y)


As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because E{ḡ(x, y)} = f(x, y), this means that ḡ(x, y) approaches f(x, y) as the number of noisy images used in the averaging process increases.
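A quick simulation of this effect (a sketch with made-up parameters: a constant 64 x 64 image and Gaussian noise of standard deviation 64):

import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 100.0)                 # noiseless image
for K in (1, 5, 10, 50, 100):
    noisy = [f + rng.normal(0, 64, f.shape) for _ in range(K)]
    g_bar = np.mean(noisy, axis=0)           # averaged image
    print(K, g_bar.std())                    # decreases roughly as 64 / sqrt(K)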

An important application of image averaging is in the field of astronomy, where imaging under very low light levels frequently causes sensor noise to render single images virtually useless for analysis. Figure 3(a) shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels. Figures 3(b)-(f) show the results of averaging 5, 10, 20, 50 and 100 noisy images, respectively.


Fig 3 - (a) Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise; (b)-(f) results of averaging 5, 10, 20, 50, 100 noisy images


A frequent application of image subtraction is in the enhancement of differences between

images

(a) (b) (c)

Fig 4 (a) Infrared image of Washington DC area (b) Image obtained from (a) by setting to zero the least

significant bit of each pixel (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between images (a) and (b). Black (0) values in Figure 4(c) indicate locations where there is no difference between images (a) and (b).
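The least-significant-bit experiment is easy to reproduce (sketch):

import numpy as np

a = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
b = a & 0xFE                   # set the least-significant bit of each pixel to 0
diff = a.astype(int) - b       # 0 where the LSB was already 0, 1 elsewhere
print(diff)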


Mask mode radiography: g(x, y) = f(x, y) − h(x, y)

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x y) we can find the differences between h and f as enhanced detail

Since images are captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries in the area being observed.


Fig 5 - Angiography subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c) enhanced


An important application of image multiplication (and division) is shading correction.

Suppose that an imaging sensor produces images in the form g(x, y) = f(x, y)·h(x, y)

f(x, y) - the "perfect image"; h(x, y) - the shading function

When the shading function is known:

f(x, y) = g(x, y) / h(x, y)

When h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image.
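A division-based correction might be sketched as follows (the epsilon guard is my own addition, to avoid dividing by zero):

import numpy as np

def correct_shading(g, h, eps=1e-6):
    # f(x, y) ~ g(x, y) / h(x, y); h can come from imaging a constant target
    return g.astype(float) / (h.astype(float) + eps)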


(a) (b) (c)

Fig 6 Shading correction: (a) shaded image of a tungsten filament, magnified 130 times; (b) shading pattern; (c) corrected image


Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape
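Masking is a single element-wise multiplication (sketch with a made-up rectangular ROI):

import numpy as np

img = np.random.randint(0, 256, (300, 400), dtype=np.uint8)
mask = np.zeros_like(img)
mask[100:200, 150:250] = 1     # 1's (white) inside the ROI, 0's elsewhere
roi = img * mask               # keeps the ROI, zeroes everything else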

(a) (b) (c)

Fig 7 (a) digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b)


In practice, most images are displayed using 8 bits: the image values are in the range [0, 255]. For TIFF and JPEG images, conversion to this range is automatic, and the conversion depends on the system used. The difference of two images can produce an image with values in the range [−255, 255]; the addition of two images gives the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure computes

f_m = f − min(f)   (so f_m ≥ 0)

f_s = K · f_m / max(f_m)   (f_s has values in [0, K]; K = 255 for 8-bit displays)
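The same scaling in code (sketch; it assumes the image is not constant, so max(f_m) > 0):

import numpy as np

def rescale(f, K=255):
    fm = f - f.min()                         # fm = f - min(f), so fm >= 0
    return (K * fm / fm.max()).astype(np.uint8)

diff = np.random.randint(-255, 256, (4, 4))  # e.g. a difference image
print(rescale(diff))                         # values now in [0, 255]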


Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity of individual pixels: s = T(z)

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image


Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

g(x, y) = (1 / (m·n)) · Σ_{(r,c) ∈ Sxy} f(r, c)

The net effect is to perform local blurring of the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
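SciPy's uniform filter computes exactly this kind of local average (sketch; the 11 x 11 window size is an arbitrary choice):

import numpy as np
from scipy import ndimage

img = np.random.randint(0, 256, (256, 256)).astype(float)
blurred = ndimage.uniform_filter(img, size=11)   # mean over an 11 x 11 S_xy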


Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules)

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation: (x, y) = T[(v, w)]

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image


T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform

[x y 1] = [v w 1] · T = [v w 1] ·   t11  t12  0
                                    t21  t22  0
                                    t31  t32  1        (AT)

that is, x = t11·v + t21·w + t31 and y = t12·v + t22·w + t32.

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1


Table 1 - Affine transformations (the scaling, rotation, translation and shear matrices referenced above)


The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping: scan the pixels of the input image (v, w) and compute the new spatial location (x, y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)


inverse mapping: scans the output pixel locations and, at each location (x, y), computes the corresponding location in the input image: (v, w) = T^(−1)[(x, y)]

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in numerous commercial implementations of spatial transformations (MATLAB, for example).
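A minimal inverse-mapping warp with nearest-neighbor interpolation might look like this (a sketch; it uses the column-vector convention, i.e. the transpose of form (AT) above, and a rotation matrix as the example transform):

import numpy as np

def affine_warp(img, T):
    h, w = img.shape
    Tinv = np.linalg.inv(T)
    out = np.zeros_like(img)
    ys, xs = np.mgrid[0:h, 0:w]
    ones = np.ones(xs.size)
    v, u, _ = Tinv @ np.stack([xs.ravel(), ys.ravel(), ones])  # (v, w) = T^(-1)(x, y)
    v = np.round(v).astype(int)                  # input column
    u = np.round(u).astype(int)                  # input row
    ok = (v >= 0) & (v < w) & (u >= 0) & (u < h)
    out.ravel()[ok] = img[u[ok], v[ok]]          # nearest-neighbor assignment
    return out

t = np.deg2rad(30)                               # rotate by 30 degrees
T = np.array([[np.cos(t), -np.sin(t), 0],
              [np.sin(t),  np.cos(t), 0],
              [0,          0,         1]])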


Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more images taken at approximately the same time, but using different imaging systems (e.g., an MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image


How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both in the input image and the reference image. A simple model based on a bilinear approximation is given by

x = c1·v + c2·w + c3·v·w + c4
y = c5·v + c6·w + c7·v·w + c8

where (v, w) and (x, y) are the coordinates of the tie points (we get an 8×8 linear system for the ci).


When 4 tie points are insufficient to obtain satisfactory registration, an approach used frequently is to select a larger number of tie points and, using this new set of tie points, subdivide the image into rectangular regions marked by groups of 4 tie points. On each subregion marked by 4 tie points we apply the transformation model described above. The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometric distortion.
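Under the 4-tie-point bilinear model, the 8×8 system decouples into two 4×4 systems, one for x and one for y; a sketch (the names and test points are made up):

import numpy as np

def fit_bilinear_model(src, dst):
    # src holds 4 tie points (v, w); dst holds the corresponding (x, y)
    v, w = src[:, 0], src[:, 1]
    A = np.stack([v, w, v * w, np.ones(4)], axis=1)
    cx = np.linalg.solve(A, dst[:, 0])     # c1..c4
    cy = np.linalg.solve(A, dst[:, 1])     # c5..c8
    return cx, cy

src = np.array([[0, 0], [0, 10], [10, 0], [10, 10]], float)
dst = src + 2.0                            # a pure translation, for testing
cx, cy = fit_bilinear_model(src, dst)      # cx = [1, 0, 0, 2], cy = [0, 1, 0, 2]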


(a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c)


Probabilistic Methods

zk = the values of all possible intensities in an M×N digital image, k = 0, 1, …, L−1

p(zk) = the probability that the intensity level zk occurs in the given image:

p(zk) = nk / (M·N)

nk = the number of times that intensity zk occurs in the image (M·N is the total number of pixels in the image)

Σ_{k=0}^{L−1} p(zk) = 1

The mean (average) intensity of an image is given by

m = Σ_{k=0}^{L−1} zk·p(zk)


The variance of the intensities is

σ² = Σ_{k=0}^{L−1} (zk − m)²·p(zk)

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

μn(z) = Σ_{k=0}^{L−1} (zk − m)^n·p(zk)

(μ0(z) = 1, μ1(z) = 0, μ2(z) = σ²)

μ3(z) > 0: the intensities are biased to values higher than the mean
μ3(z) < 0: the intensities are biased to values lower than the mean
μ3(z) = 0: the intensities are distributed approximately equally on both sides of the mean

Fig 1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) - standard deviation 14.3 (variance = 204.5)
Figure 1(b) - standard deviation 31.6 (variance = 998.6)
Figure 1(c) - standard deviation 49.2 (variance = 2420.6)
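These statistics can be computed directly from the normalized histogram (sketch):

import numpy as np

img = np.random.randint(0, 256, (256, 256))
L = 256
p = np.bincount(img.ravel(), minlength=L) / img.size   # p(z_k) = n_k / MN
z = np.arange(L)

m = np.sum(z * p)                   # mean intensity
var = np.sum((z - m) ** 2 * p)      # variance (contrast measure)
mu3 = np.sum((z - m) ** 3 * p)      # third moment (bias of the intensities)
print(m, np.sqrt(var), mu3)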


Intensity Transformations and Spatial Filtering

g(x, y) = T[f(x, y)]

f(x, y) - input image; g(x, y) - output image; T - an operator on f, defined over a neighborhood of (x, y)

- the neighborhood of the point (x, y), Sxy, usually is rectangular, centered on (x, y), and much smaller in size than the image


- spatial filtering: the operator T (the neighborhood and the operation applied on it) is called a spatial filter (spatial mask, kernel, template, or window)

When Sxy has size 1×1 (it contains only the point (x, y)), T becomes an intensity (gray-level or mapping) transformation function:

s = T(r)

s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2 (left): T produces an output image of higher contrast than the original, by darkening the intensity levels below k and brightening the levels above k - this technique is called contrast stretching.

Fig 2 Intensity transformation functions: left - contrast stretching; right - thresholding function


Figure 2 (right): T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L−1] is obtained using the function s = T(r) = L − 1 − r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image


Original Negative image


Log Transformations: s = T(r) = c·log(1 + r), c - constant, r ≥ 0

Some basic intensity transformation functions


This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) - intensity values in the range 0 to 1.5 × 10^6

Figure 4(b) - log transformation of Figure 4(a) with c = 1; range 0 to 6.2

Fig 4 (a) Fourier spectrum; (b) log transformation applied to (a), c = 1


Power-Law (Gamma) Transformations

s = T(r) = c·r^γ, with c and γ positive constants (sometimes written s = c·(r + ε)^γ to account for an offset)

Plots of gamma transformation for different values of γ (c=1)


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. The curves with γ > 1 have the opposite effect of those generated with γ < 1.

c = γ = 1: identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction
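The three transformations side by side (sketch; the constants are chosen only so that the outputs stay in [0, 255]):

import numpy as np

img = np.random.randint(0, 256, (128, 128)).astype(float)
L = 256

neg = (L - 1) - img                              # image negative
log_t = 255 * np.log1p(img) / np.log(256)        # log transform, c = 255/log(256)
gam = 255 * (img / 255) ** 0.5                   # power law with gamma = 0.5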


(a) aerial image; (b)-(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0 and 5.0, respectively


Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

Fig 5 - contrast stretching: (a) piecewise-linear transformation function; (b) low-contrast 8-bit image; (c) result of contrast stretching; (d) result of thresholding


T(r) =

  (s1/r1)·r,                                    r ∈ [0, r1)
  s1 + ((s2 − s1)/(r2 − r1))·(r − r1),          r ∈ [r1, r2]
  s2 + ((L − 1 − s2)/(L − 1 − r2))·(r − r2),    r ∈ (r2, L − 1]


r1 = s1 and r2 = s2: identity transformation (no change)

r1 = r2, s1 = 0 and s2 = L − 1: thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters r1 = rmin, s1 = 0, r2 = rmax, s2 = L − 1, where rmin and rmax denote the minimum and maximum gray levels in the image, respectively. Thus the transformation function stretched the levels linearly from their original range to the full range [0, L−1].

Figure 5(d) - the thresholding function was used, with r1 = r2 = m, s1 = 0, s2 = L − 1, where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times
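The two settings from Figures 5(c) and 5(d) in code (sketch):

import numpy as np

def stretch(img, L=256):
    # linear stretch from [r_min, r_max] to the full range [0, L-1]
    r_min, r_max = img.min(), img.max()
    return (L - 1) * (img - r_min) / (r_max - r_min)

def threshold_at_mean(img):
    # thresholding with r1 = r2 = m (the mean gray level)
    return np.where(img > img.mean(), 255, 0)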


Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities (Figure 3.11(a))

2. brighten (or darken) the desired range of intensities, but leave unchanged all other intensities in the image (Figure 3.11(b))


Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities


which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example).

In Figure 6 (right), the second technique was used: a band of intensities in the mid-gray range around the mean intensity was set to black, while the other intensities remain unchanged.

Fig 6 - Aortic angiogram and intensity-sliced versions
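Both slicing approaches in code (sketch; [a, b] is the range of interest):

import numpy as np

def slice_binary(img, a, b):
    # approach 1: white inside [a, b], black elsewhere
    return np.where((img >= a) & (img <= b), 255, 0)

def slice_preserve(img, a, b, value=0):
    # approach 2: set [a, b] to a chosen value, leave the rest unchanged
    out = img.copy()
    out[(img >= a) & (img <= b)] = value
    return out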


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2.

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)


The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression
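Extracting the planes is a matter of shifting and masking bits (sketch; plane indices run 0..7 here, corresponding to planes 1..8 in the text):

import numpy as np

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
planes = [(img >> k) & 1 for k in range(8)]      # plane k: k-th bit of each pixel

# The highest plane equals thresholding at 128, as described above
assert np.array_equal(planes[7], (img >= 128).astype(np.uint8))

# A coarse reconstruction from the two highest planes
approx = (planes[7] << 7) | (planes[6] << 6)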

  • DIP 1 2017
  • DIP 02 (2017)
Page 11: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image Processing (Gonzalez + Woods) =

processes whose inputs and outputs are images +

processes that extract attributes from images recognition of individual objects

(low- and mid-level processes)

Example

automated analysis of text = acquiring an image containing text preprocessing the image (enhancement sharpening) extracting (segmenting) the individual characaters describing the characters in a form suitable for computer processing recognition of individual characters

Digital Image ProcessingDigital Image Processing

Week 1Week 1

The Origins of DIP

Newspaper industry pictures were sent by submarine cable between London and New York

Before Bartlane cable picture transmission system (early 1920s) ndash 1 week

With Bartlane system less than 3 hours

Specialized printing equipment coded pictures for cable transmission and reconstructed them at the receiving end (1920s -5 distict levels of gray 1929 ndash 15 levels)

This example is not DIP the computer is not involved

DIP is linked and devolps in the same rhythm as digital computers (data storage display and transmission)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

A digital pictureproduced in 1921from a coded tapeby a telegraph printer withspecial type faces(McFarlane)

A digital picturemade in 1922from a tapepunched after the signals hadcrossed the Atlantic twice(McFarlane)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

1964 Jet Propulsion Laboratory (Pasadena California) processed pictures of the moon transmitted by Ranger 7 (corrected image distortions)

The first picture of the moon by a US spacecraft Ranger 7 took this image July 31 1964 about 17 minutes before impacting the lunar surface(Courtesy of NASA)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

1960-1970 ndash image processing techniques were used in medical image remote Earth resources observations astronomy

1970s ndash invention of CAT (computerized axial tomography)

CAT is a process in which a ring of detectors encircles an object (patient) and a X-ray source concentric with the detector ring rotates about the object The X-ray passes through the patient and the resulted waves are collected at the opposite end by the detectors As the source rotates the procedure is repeated Tomography consists of algorithms that use the sense data to construct an image that represent a ldquoslicerdquo through the object Motion of the object in a direction perpendicular to the ring of detectors produces a set of ldquoslicesrdquo which can be assembled in a 3D information of the inside of the object

Digital Image ProcessingDigital Image Processing

Week 1Week 1

loz geographers use DIP to study pollution patterns from aerial and satellite imagery

loz archeology ndash DIP allowed restoring blurred pictures that recorded rare artifacts lost or damaged after being photographed

loz physics ndash enhance images of experiments (high-energy plasmas electron microscopy)

loz astronomy biology nuclear medicine law enforcement industry

DIP ndash used in solving problems dealing with machine perception ndashextracting from an image information suitable for computer processing (statistical moments Fourier transform coefficients hellip)

loz automatic character recognition industrial machine vision for product assembly and inspection military recognizance automatic processing of fingerprints machine processing of aerial and satellite imagery for weather prediction Internet

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Examples of Fields that Use DIP

Images can be classified according to their sources (visual X-ray hellip)

Energy sources for images electromagnetic energy spectrum acoustic ultrasonic electronic computer- generated

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Electromagnetic waves can be thought as propagating sinusoidal

waves of different wavelength or as a stream of massless particles

each moving in a wavelike pattern with the speed of light Each

massless particle contains a certain amount (bundle) of energy Each

bundle of energy is called a photon If spectral bands are grouped

according to energy per photon we obtain the spectrum shown in the

image above ranging from gamma-rays (highest energy) to radio

waves (lowest energy)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Gamma-Ray Imaging

Nuclear medicine astronomical observations

Nuclear medicine

the approach is to inject a patient with a radioactive isotope that emits gamma rays as it decays

Images are produced from the emissions collected by gamma-ray detectors

Images of this sort are used to locate sites of bone pathology (infections tumors)

PET (positron emision tomography) ndash the patient is given a radioactive isotope that emits positrons as it decays

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Examples of gamma-ray imaging

Bone scan PET image

Digital Image ProcessingDigital Image Processing

Week 1Week 1

X-ray imaging

Medical diagnosticindustry astronomy

A X-ray tube is a vacuum tube with a cathode and an anode The cathode is heated causing free electrons to be released The electrons flows at high speed to the positively charged anode When the electrons strike a nucleus energy is released in the form of a X-ray radiation The energy (penetreting power) of the X-rays is controlled by a voltage applied across the anode and by a curent applied to the filament in the cathode

The intensity of the X-rays is modified by absorbtion as they pass through the patient and the resulting energy falling develops it much in the same way that light develops photographic film

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Angiography = contrast-enhancement radiography

Angiograms = images of blood vessels

A catheter is inserted into an artery or vein in the groin The catheter is threaded into the blood vessel and guided to the area to be studied When it reaches the area to be studied a X-ray contrast medium is injected through the catheter This enhances contrast of the blood vessels and enables radiologist to see anyirregularities or blockages

X-rays are used in CAT (computerized axial tomography)

X-rays used in industrial processes (examine circuit boards for flows in manifacturing)

Industrial CAT scans are useful when the parts can be penetreted by X-rays

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Examples of X-ray imaging

Chest X-rayAortic angiogram

Head CT Cygnus LoopCircuit boards

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Ultraviolet Band

Litography industrial inspection microscopy biological imaging astronomical observations

Ultraviolet light is used in fluorescence microscopy

Ultraviolet light is not visible to human eye but when a photon of ultraviolet radiation collides with an electron in an atom of a fluorescent material it elevates the electron to a higher energy level After that the electron relaxes to a lower level and emits light in the form of a lower-energy photon in the visible (red) light regionFluorescence = emission of light by a substance that has absorbed light or

other electromagentic radiation of a different wavelengthFlourescence microscope = uses an excitation light to irradiate a prepared specimen

and then it separates the much weaker radiating fluorescent light from the brighter excitation light

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Visible and Infrared Bands

Light microscopy astronomy remote sensing industry law enforcement

LANDSAT satellite ndash obtained and transmitted images of the Earth from space for purpose of monitoring environmental conditions on the planet

Weather observations and prediction produce major applications of multispectral image from satellites

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Table 1 ndash Thematic bands in NASArsquos LANDSAT satellite

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Satellite images of Washington DC area in spectral bands of the Table 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Examples of light microscopy

Taxol (anticancer agent)magnified 250X

Cholesterol(40X)

Microprocessor(60X)

Nickel oxidethin film(600X)

Surface of audio CD(1750X)

Organicsuperconductor(450X)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Automated visual inspection of manufactured goods

a bc de f

a ndash a circuit board controllerb ndash packaged pillesc ndash bottlesd ndash air bubbles in clear-plastic producte ndash cerealf ndash image of intraocular implant

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Microwave Band

The dominant aplication of imaging in the microwave band ndash radar

bull radar has the ability to collect data over virtually any region at any time regardless of weather or ambient light conditions

bull some radar waves can penetrate clouds under certain conditions canpenetrate vegetation ice dry sand

bull sometimes radar is the only way to explore inaccessible regions of theEarthrsquos surface

An imaging radar works like a flash camera it provides its own illumination(microwave pulses) to light an area on the ground and take a snapshot image Instead of a camera lens a radar uses an antenna and a digital device torecord the images In a radar image one can see only the microwave energythat was reflected back toward the radar antenna

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Spaceborne radar image of mountains in southeast Tibet

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Radio Band

medicine astronomy

MRI = Magnetic Resonance Imaging

This technique places the patient in a powerful magnet and passes short pulses of radio waves through his or her body

Each pulse causes a responding pulse of radio waves to be emited by the patinet tissues

The location from which these signals originate and their strength are determined by a computer which produces a 2D picture of a section of the patient

Digital Image ProcessingDigital Image Processing

Week 1Week 1

MRI images of a human knee (left) and spine (right)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Images of the Crab Pulsar covering the electromagnetic spectrum

Gamma X-ray Optical Infrared Radio

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Other Imaging Modalities

acoustic imaging electron microscopy synthetic (computer-generated) imaging

Imaging using sound geological explorations industry medicine

Mineral and oil exploration

For image acquisition over land one of the main approaches is to use a large truck an a large flat steel plate The plate is pressed on the ground by the truck and the truck is vibratedthrough a frequency spectrum up to 100 Hz The strength and the speed of the returning sound waves are determined by the composition of the Earth below the surface These are analysed by a computer and images are generated from the resulting analysis

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Biometry - iris

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Biometry - fingerprint

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Face detection and recognition

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Gender identification

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image morphing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Fundamental Steps in DIP

methods whose input and output are images

methods whose inputs are images but whose outputs are attributes extracted from those images

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Outputs are images

bull image acquisition

bull image filtering and enhancement

bull image restoration

bull color image processing

bull wavelets and multiresolution processing

bull compression

bull morphological processing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Outputs are attributes

bull morphological processing

bull segmentation

bull representation and description

bull object recognition

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image acquisition - may involve preprocessing such as scaling

Image enhancement

bull manipulating an image so that the result is more suitable than the original for a specific operation

bull enhancement is problem oriented

bull there is no general sbquotheoryrsquo of image enhancement

bull enhancement use subjective methods for image emprovement

bull enhancement is based on human subjective preferences regarding what is a bdquogoodrdquo enhancement result

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image restoration

bull improving the appearance of an image

bull restoration is objective - the techniques for restoration are based on mathematical or probabilistic models of image degradation

Color image processing

bull fundamental concept in color models

bull basic color processing in a digital domain

Wavelets and multiresolution processing

representing images in various degree of resolution

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Compression

reducing the storage required to save an image or the bandwidth required to transmit it

Morphological processing

bull tools for extracting image components that are useful in the representation and description of shape

bull a transition from processes that output images to processes that outputimage attributes

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Segmentation

bull partitioning an image into its constituents parts or objects

bull autonomous segmentation is one of the most difficult tasks of DIP

bull the more accurate the segmentation the more likley recognition is to succeed

Representation and description (almost always follows segmentation)

bull segmentation produces either the boundary of a region or all the poits in the region itself

bull converting the data produced by segmentation to a form suitable for computer processing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

bull boundary representation the focus is on external shape characteristics such as corners or inflections

bull complete region the focus is on internal properties such as texture or skeletal shape

bull description is also called feature extraction ndash extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another

Object recognition

the process of assigning a label (eg bdquovehiclerdquo) to an object based on itsdescriptors

Knowledge database

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Simplified diagramof a cross sectionof the human eye

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Three membranes enclose the eye the cornea and sclera outer cover the choroid the retina

The cornea is a tough transparent tissue that covers the anterior surface of the eye

Continuous with the cornea the sclera is an opaque membrane that enclosesthe remainder of the optic globe

The choroid lies directly below the sclera This membrane contains a network of blood vessels (major nutrition of the eye) The choroid is pigmented and help reduce the amount of light entering the eyeThe choroid is divided (at its anterior extreme) into the ciliary body and the irisThe iris contracts and expands to control the amount of light

Digital Image ProcessingDigital Image Processing

Week 1Week 1

The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to ciliary body (60-70 water 6 fat protein) The lens is colored in slightly yellow The lens absorbs approximatively 8 of the visiblelight (infrared and ultraviolet light are absorbed by proteins in lens)

The innermost membrane is the retina When the eye is proper focusedlight from an object outside the eye is imaged on the retinaVision is possible because of the distribution of discrete light receptors on the surface of the retina cones and rods (6-7 milion cones 75-150 milion rods)

Cones located in the central part of the retina (fovea) they are sensitive to colors vision of detail each cone is link to its own nervecone vision = photopic or bright-light vision

Fovea = the place where the image of the object of interest falls on

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Rods distributed over al the retina surface several rods are contected toa single nerve not specialized in detail vision serve to give a general overall picture of the filed of viewnot involved in color visionsensitive to low level of illumination

Blind spot region without receptors

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image formation in the eye

Ordinary photographic camera the lens has fixed focal length focusing atvarious distances is done by modifying the distance between the lens and the image plane (were the film or imaging chip are located)

Human eye the distance between the lens and the retina (the imaging region)is fixed the focal length needed to achieve proper focus is obtained by varyingthe shape of the lens (the fibers in the ciliary body accomplish this flattening or thickening the lens for distant or near objects respectively

distance between lens and retina along visual axix = 17 mm

range of focal length = 14 mm to 17 mm

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Illustration of Mach band effectPercieved intensityis not a simple function of actual intensity

Digital Image ProcessingDigital Image Processing

Week 1Week 1

All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Optical illusions

Digital Image ProcessingDigital Image Processing

Week 1Week 1

ALBASTRU BLUEVERDE GREENGALBEN YELLOW ROSU REDPORTOCALIU ORANGEGALBEN YELLOWVISINIU BURGUNDYALB WHITE

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Lightbullachromatic or monochromatic light - light that is void of color the attribute of such light is its intensity or amount

gray level is used to describe monochromatic intensitybecause it ranges from black to grays and to white bull chromatic light spans the electromagnetic energy spectrumfrom approximately 043 to 079 μm

quantities that describe the quality of a chromatic light source radiance

the total amount of energy that flows from the light source and it isusually measured in watts (W) luminance

measured in lumens (lm) gives a measure of the amount of energy an observer perceives from a light source

Digital Image ProcessingDigital Image Processing

Week 1Week 1

For example light emitted from a source operating in the far infraredregion of the spectrum could have significant energy (radiance) butan observer would hardly perceive it its luminance would be almostzero

brightnessa subjective descriptor of light perception that is practicallyimpossible to measure It embodies the achromatic notion ofintensity and is one of the key factors in describing color sensation

Digital Image ProcessingDigital Image Processing

Week 1Week 1

the physical meaning is determined by the source of the image

( )f D f x y

Image generated from a physical process f(xy) ndash proportional to the energy radiated by the physical source

f(xy) ndash characterized by two components

i(xy) = illumination component the amount of source illumination incident on the scene being viewed

r(xy) = reflectance component the amount of illumination reflected by the objects in the scene

( ) ( ) ( )

0 ( ) 0 ( ) 1

f x y i x y r x y

i x y r x y

Digital Image ProcessingDigital Image Processing

Week 1Week 1

r(xy)=0 - total absorption r(xy)=1 - total reflectance

i(xy) ndash determined by the illumination source

r(xy) ndash determined by the characteristics of the imaged objects

is called gray (or intensity) scale

In practice

min 0 0 max min min min max max max( ) L l f x y L L i r L i r

indoor values without additional illuminationmin max10 1000L L

black whitemin max0 1 0 1 0 1L L L L l l L

min maxL L

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image Processing

Week 1

Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed

scene

converting a continuous image f to digital form

- digitizing (x y) is called sampling

- digitizing f(x y) is called quantization

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Continuous image projected onto a sensor array Result of image sampling and quantization

Digital Image Processing

Week 1

Representing Digital Images (xy) x = 01hellipM-1 y = 01hellipN-1 ndash spatial variables or spatial coordinates

(00) (01) (0 1)(10) (11) (1 1)

( )

( 10) ( 11) ( 1 1)

f f f Nf f f N

f x y

f M f M f M N

image element pixel

00 01 0 1

10 11 1 1

10 11 1 1

( ) ( )

N

i jN M N

i j

M M M N

a a aa f x i y j f i ja a a

Aa

a a a

f(00) ndash the upper left corner of the image

Digital Image Processing

Week 1

M N ge 0 L=2k

[0 1]i j i ja a L

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system Upper limit ndash determined by saturation lower limit - noise

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Number of bits required to store a digitized image

for 2 b M N k M N b N k

When an image can have 2k intensity levels the image is referred as a k-bit image 256 discrete intensity values ndash 8-bit image

Digital Image Processing

Week 1

Spatial and Intensity Resolution Spatial resolution ndash the smallest discernible detail in an image

Measures line pairs per unit distance dots (pixels) per unit distance

Image resolution = the largest number of discernible line pairs per unit distance

(eg 100 line pairs per mm)

Dots per unit distance are commonly used in printing and publishing

In US the measure is expressed in dots per inch (dpi)

(newspapers are printed with 75 dpi glossy brochures at 175 dpi)

Intensity resolution ndash the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations

L=2k ndash most common k = 8

Intensity resolution in practice is given by k (number of bits used to quantize intensity)

Digital Image Processing

Week 1

Fig1 Reducing spatial resolution 1250 dpi(upper left) 300 dpi (upper right)

150 dpi (lower left) 72 dpi (lower right)

Digital Image Processing

Week 1

Reducing the number of gray levels 256 128 64 32

Digital Image Processing

Week 1

Reducing the number of gray levels 16 8 4 2

Digital Image Processing

Week 1

Image Interpolation - used in zooming shrinking rotating and geometric corrections

Shrinking zooming ndash image resizing ndash image resampling methods

Interpolation is the process of using known data to estimate values at unknown locations

Suppose we have an image of size 500 times 500 pixels that has to be enlarged 15 times to

750times750 pixels One way to do this is to create an imaginary 750 times 750 grid with the

same spacing as the original and then shrink it so that it fits exactly over the original

image The pixel spacing in the 750 times 750 grid will be less than in the original image

Problem assignment of intensity-level in the new 750 times 750 grid

Nearest neighbor interpolation assign for every point in the new grid (750 times 750) the

intensity of the closest pixel (nearest neighbor) from the oldoriginal grid (500 times 500)

This technique has the tendency to produce undesirable effects like severe distortion of

straight edges

Digital Image Processing

Week 1

Bilinear interpolation ndash assign for the new (x y) location the following intensity ( )v x y a x b y c x y d

where the four coefficients are determined from the 4 equations in 4 unknowns that can

be written using the 4 nearest neighbors of point (x y)

Bilinear interpolation gives much better results than nearest neighbor interpolation with a

modest increase in computational effort

Bicubic interpolation ndash assign for the new (x y) location an intensity that involves the 16

nearest neighbors of the point 3 3

0 0

( ) i ji j

i jv x y c x y

The coefficients cij are obtained solving a 16x16 linear system

intensity levels of the 16 nearest neighbors of 3 3

0 0

( )i ji j

i jc x y x y

Digital Image Processing

Week 1

Generally bicubic interpolation does a better job of preserving fine detail than the

bilinear technique Bicubic interpolation is the standard used in commercial image editing

programs such as Adobe Photoshop and Corel Photopaint

Figure 2 (a) is the same as Fig 1 (d) which was obtained by reducing the resolution of

the 1250 dpi in Fig 1(a) to 72 dpi (the size shrank from 3692 times 2812 to 213 times 162) and

then zooming the reduced image back to its original size To generate Fig 1(d) nearest

neighbor interpolation was used (both for shrinking and zooming)

Figures 2(b) and (c) were generated using the same steps but using bilinear and bicubic

interpolation respectively Figures 2(d)+(e)+(f) were obtained by reducing the resolution

from 1250 dpi to 150 dpi (instead of 72 dpi)

Digital Image Processing

Week 1

Fig 2 ndash Interpolation examples for zooming and shrinking (nearest neighbor linear bicubic)

Digital Image Processing

Week 1

Neighbors of a Pixel

A pixel p at coordinates (x y) has 4 horizontal and vertical neighbors horizontal vertical( 1 ) ( 1 ) ( 1) ( 1)x y x y x y x y

This set of pixels called the 4-neighbors of p denoted by N4 (p)

The 4 diagonal neighbors of p have coordinates ( 1 1) ( 1 1) ( 1 1) ( 1 1)x y x y x y x y

and are denoted ND(p)

The horizontal vertical and diagonal neighbors are called the 8-neighbors of p denoted

N8 (p)

If (xy) is on the border of the image some of the neighbor locations in ND(p) and N8(p)

fall outside the image

Digital Image Processing

Week 1

Adjacency Connectivity Regions Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image V 01 (V=0 V=1)

- in a gray-scale image with 256 possible gray-levels V can be any subset of 0255

We consider 3 types of adjacency

(a) 4-adjacency two pixels p and q with values from V are 4-adjacent if 4( )q N p

(b) 8-adjacency two pixels p and q with values from V are 8-adjacent if 8( )q N p

(c) m-adjacency (mixed adjacency) two pixels p and q with values from V are

m-adjacent if

4( )q N p or

( )Dq N p and the set 4 4( ) ( )N p N q has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency It is introduced to eliminate the

ambiguities that often arise when 8-adjacency is used Consider the example

Digital Image Processing

Week 1

binary image

0 1 1 0 1 1 0 1 1

1 0 1 0 0 1 0 0 1 0

0 0 1 0 0 1 0 0 1

V

The three pixels at the top (first line) in the above example show multiple (ambiguous)

8-adjacency as indicated by the dashed lines This ambiguity is removed by using

m-adjacency

Digital Image Processing

Week 1

A (digital) path (or curve) from pixel p with coordinates (xy) to q with coordinates (st)

is a sequence of distinct pixels with coordinates

and are adjacent 0 0 1 1

1 1

( ) ( ) ( ) ( ) ( )( ) ( ) 12

n n

i i i i

x y x y x y x y s tx y x y i n

The length of the path is n If 0 0( ) ( )n nx y x y the path is closed

Depending on the type of adjacency considered the paths are 4- 8- or m-paths

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image R is a region of the image if R is a connected set

Two regions R1 and R2 are said to be adjacent if 1 2R R form a connected set Regions

that are not adjacent are said to be disjoint When referring to regions only 4- and

8-adjacency are considered

Digital Image Processing

Week 1

Suppose that an image contains K disjoint regions 1 kR k K none of which

touches the image border

the complement of 1

( )K

cu k u u

k

R R R R

We call all the points in Ru the foreground of the image and the points in ( )cuR the

background of the image

The boundary (border or contour) of a region R is the set of points that are adjacent to

points in the complement of R (R)c The border of an image is the set of pixels in the

region that have at least one background neighbor This definition is referred to as the

inner border to distinguish it from the notion of outer border which is the corresponding

border in the background

Digital Image Processing

Week 1

Distance measures

For pixels p q and z with coordinates (xy) (st) and (vw) respectively D is a distance

function or metric if

(a) D(p q) ge 0 D(p q) = 0 iff p=q

(b) D(p q) = D(q p)

(c) D(p z) le D(p q) + D(q z)

The Euclidean distance between p and q is defined as 1

2 2 2 22( ) ( ) ( ) ( ) ( )eD p q x s y t x s y t

The pixels q for which ( )eD p q r are the points contained in a disk of radius r

centered at (x y)

Digital Image Processing

Week 1

The D4 distance (also called city-block distance) between p and q is defined as

4( ) | | | |D p q x s y t

The pixels q for which 4( )D p q r form a diamond centered at (xy)

4

22 1 2

2 2 1 0 1 22 1 2

2

D

The pixels with D4 = 1 are the 4-neighbors of (x y)

The D8 distance (called the chessboard distance) between p and q is defined as

8( ) max| | | |D p q x s y t

The pixels q for which 8( )D p q r form a square centered at (x y)

Digital Image Processing

Week 1

8

2 2 2 2 22 1 1 1 2

2 2 1 0 1 22 1 1 1 22 2 2 2 2

D

The pixels with D8 = 1 are the 8-neighbors of (x y)

D4 and D8 distances are independent of any paths that might exist between p and q

because these distances involve only the coordinates of the point

Digital Image Processing

Week 1

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

11 12 11 12

21 22 21 22

a a b ba a b b

Array product

11 12 11 12 11 11 12 12

21 22 21 22 21 21 22 21

a a b b a b a ba a b b a b a b

Matrix product

11 12 11 12 11 11 12 21 11 12 12 21

21 22 21 22 21 11 22 21 21 12 22 22

a a b b a b a b a b a ba a b b a b a b a b a b

We assume array operations unless stated otherwise

Digital Image Processing

Week 1

Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether it is

linear or nonlinear

( ) ( )H f x y g x y

H is said to be a linear operator if

images1 2 1 2

1 2

( ) ( ) ( ) ( )

H a f x y b f x y a H f x y b H f x y

a b f f

Example of a nonlinear operator: the maximum value of the pixels of an image, H[f] = max f(x, y).

Take

f1 = | 0  2 |     f2 = | 6  5 |     a = 1,  b = −1
     | 2  3 |          | 4  7 |

Then

max[ a·f1 + b·f2 ] = max | −6  −3 | = −2
                         | −2  −4 |

while

a·max[f1] + b·max[f2] = 1·3 + (−1)·7 = −4 ≠ −2

so the max operator is nonlinear.
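This kind of check is easy to script: a single counterexample is enough to show that an operator is nonlinear (though passing one test proves nothing); a sketch in NumPy:

```python
import numpy as np

def linearity_holds(H, f1, f2, a, b):
    # Compare H(a*f1 + b*f2) against a*H(f1) + b*H(f2) for one test case.
    return np.allclose(H(a * f1 + b * f2), a * H(f1) + b * H(f2))

f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 5], [4, 7]])

print(linearity_holds(np.sum, f1, f2, 1, -1))   # True: summation is linear
print(linearity_holds(np.max, f1, f2, 1, -1))   # False: -2 != 3 - 7 = -4
```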

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise:

g(x, y) = f(x, y) + η(x, y)

where f(x, y) is the noiseless image and η(x, y) is the noise, uncorrelated and with zero average value.

For a random variable z with mean m, E[(z − m)²] is the variance (E(·) is the expected value). The covariance of two random variables z1 and z2 is defined as E[(z1 − m1)(z2 − m2)]. The two random variables are uncorrelated when their covariance is 0.


Objective: reduce noise by averaging a set of noisy images {gi(x, y)} (a technique frequently used in image enhancement):

ḡ(x, y) = (1/K) Σ i=1..K gi(x, y)

If the noise satisfies the properties stated above, we have

E{ḡ(x, y)} = f(x, y)   and   σ²ḡ(x, y) = (1/K)·σ²η(x, y)

where E{ḡ(x, y)} is the expected value of ḡ, and σ²ḡ(x, y) and σ²η(x, y) are the variances of ḡ and η, respectively. The standard deviation (square root of the variance) at any point in the average image is

σḡ(x, y) = (1/√K)·ση(x, y)


As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because E{ḡ(x, y)} = f(x, y), this means that ḡ(x, y) approaches f(x, y) as the number of noisy images used in the averaging process increases.
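A quick NumPy simulation (synthetic data: a constant image plus zero-mean Gaussian noise with σ = 64, as in the example below) makes the 1/√K behavior visible:

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 100.0)            # noiseless image f(x, y)

for K in (1, 5, 20, 100):
    # K noisy observations g_i = f + eta, with eta ~ N(0, 64^2)
    g = f + rng.normal(0.0, 64.0, size=(K, 64, 64))
    g_bar = g.mean(axis=0)              # averaged image
    # Empirical std of g_bar is close to the predicted 64 / sqrt(K)
    print(K, round(g_bar.std(), 1), round(64 / np.sqrt(K), 1))
```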

An important application of image averaging is in the field of astronomy, where imaging under very low light levels frequently causes sensor noise to render single images virtually useless for analysis. Figure 3(a) shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels. Figures 3(b)–(f) show the result of averaging 5, 10, 20, 50, and 100 noisy images, respectively.


Fig. 3 – Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner); results of averaging 5, 10, 20, 50, and 100 noisy images.


A frequent application of image subtraction is in the enhancement of differences between

images

Fig. 4 – (a) Infrared image of the Washington, D.C. area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images.

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in Figure 4(a). Visually, the two images seem almost the same. Figure 4(c) is the difference between images (a) and (b). Black (0) values in Figure 4(c) indicate locations where there is no difference between images (a) and (b).


Mask mode radiography:

g(x, y) = f(x, y) − h(x, y)

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images, called live images (denoted f(x, y)), of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium.

In g(x, y) we find the differences between h and f, shown as enhanced detail.

Because images are captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries in the area being observed.


Fig. 5 – Angiography subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c), enhanced.


An important application of image multiplication (and division) is shading correction. Suppose that an imaging sensor produces images of the form

g(x, y) = f(x, y)·h(x, y)

where f(x, y) is the "perfect image" and h(x, y) is the shading function.

When the shading function is known,

f(x, y) = g(x, y) / h(x, y)

If h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.
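A minimal sketch of this correction, assuming the shading function h is known (the synthetic ramp below stands in for a real shading pattern):

```python
import numpy as np

def correct_shading(g, h, eps=1e-6):
    # g(x, y) = f(x, y) * h(x, y)  =>  f = g / h (guard against division by ~0)
    return g / np.maximum(h, eps)

# Synthetic example: a flat "perfect" image degraded by a horizontal ramp.
f = np.full((4, 8), 200.0)
h = np.linspace(0.2, 1.0, 8)[None, :] * np.ones((4, 1))
g = f * h
print(np.allclose(correct_shading(g, h), f))   # True
```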


Fig. 6 – Shading correction: (a) shaded image of a tungsten filament, magnified approximately 130 times; (b) shading pattern; (c) corrected image.


Another use of image multiplication is in masking, also called region of interest (ROI), operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of an ROI can be arbitrary, although a rectangular shape is the most common.

Fig. 7 – (a) Digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b).


In practice, most images are displayed using 8 bits, so image values are expected to lie in the range [0, 255] (for TIFF and JPEG images, conversion to this range is automatic; the conversion depends on the system used). The difference of two images can produce values in the range [−255, 255], and the addition of two images values in the range [0, 510]. Many software packages simply set negative values to 0 and clip to 255 all values greater than 255. A more appropriate procedure is to compute

fm = f − min(f)

fs = K·fm / max(fm)

which yields values fs spanning the full range [0, K] (K = 255 for 8-bit images).
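A sketch of this rescaling (the function name is mine):

```python
import numpy as np

def rescale_intensity(f, K=255.0):
    # f_m = f - min(f);  f_s = K * f_m / max(f_m), giving values in [0, K]
    fm = f - f.min()
    return K * fm / fm.max() if fm.max() > 0 else fm

diff = np.array([[-255.0, 0.0], [100.0, 255.0]])   # e.g. a difference image
print(rescale_intensity(diff))   # [[  0.  127.5] [177.5 255. ]]
```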


Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the intensity values of individual pixels: s = T(z)

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image


Neighborhood operations

Let Sxy denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y) based on the values of the intensities of the points in Sxy. For example, if Sxy is a rectangular neighborhood of size m × n centered on (x, y), we can assign the new value of intensity by computing the average value of the pixels in Sxy:

g(x, y) = (1/(m·n)) Σ (r,c)∈Sxy f(r, c)

The net effect is to perform local blurring of the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
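A direct (unoptimized) sketch of this m × n neighborhood averaging, assuming odd m and n and zero padding at the borders (the function name is mine):

```python
import numpy as np

def local_mean(f, m, n):
    # g(x, y) = average of f over an m x n neighborhood S_xy centered at (x, y)
    M, N = f.shape
    padded = np.pad(f.astype(float), ((m // 2,), (n // 2,)))   # zero padding
    g = np.empty((M, N))
    for x in range(M):
        for y in range(N):
            g[x, y] = padded[x:x + m, y:y + n].mean()
    return g

f = np.arange(25.0).reshape(5, 5)
print(local_mean(f, 3, 3))   # locally blurred version of f
```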


Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules)

A geometric transformation consists of two basic operations:

1. a spatial transformation of coordinates

2. an intensity interpolation that assigns intensity values to the spatially transformed pixels

The coordinate system transformation: (x, y) = T[(v, w)]

(v, w) – pixel coordinates in the original image

(x, y) – pixel coordinates in the transformed image


Example: T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform:

                                    | t11  t12  0 |
[x  y  1] = [v  w  1] T = [v  w  1] | t21  t22  0 |          (AT)
                                    | t31  t32  1 |

that is, x = t11·v + t21·w + t31 and y = t12·v + t22·w + t32.

This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3×3 matrix equal to the matrix product of the scaling, rotation, and translation matrices from Table 1.


Table 1 – Affine transformations.


The preceding transformations relocate pixels on an image to new locations. To complete the process, we have to assign intensity values to those locations. This task is done using intensity interpolation (e.g., nearest neighbor, bilinear, or bicubic interpolation).

In practice, we can use equation (AT) in two basic ways:

forward mapping: scan the pixels of the input image and, for each location (v, w), compute the spatial location (x, y) of the corresponding pixel in the new image using (AT) directly.

Problems:

- intensity assignment when two or more pixels in the original image are transformed to the same location in the output image

- some output locations have no correspondent in the original image (no intensity assignment)


inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image as (v, w) = T⁻¹(x, y); then interpolate among the nearest input pixels to determine the intensity of the output pixel value.

Inverse mappings are more efficient to implement than forward mappings and are used in numerous commercial implementations of spatial transformations (e.g., in MATLAB).
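A minimal sketch of inverse mapping with nearest-neighbor interpolation (NumPy; the helper name is mine, and the rotation matrix follows the row-vector convention of (AT) – Table 1 may state the signs differently):

```python
import numpy as np

def affine_inverse_map(f, T):
    # For every output location (x, y), find (v, w) = [x y 1] T^(-1)
    # and copy the nearest-neighbor intensity from the input image f.
    Tinv = np.linalg.inv(T)
    M, N = f.shape
    g = np.zeros_like(f)
    for x in range(M):
        for y in range(N):
            v, w, _ = np.array([x, y, 1.0]) @ Tinv
            vi, wi = int(round(v)), int(round(w))
            if 0 <= vi < M and 0 <= wi < N:
                g[x, y] = f[vi, wi]
    return g

# Example: rotation by 30 degrees about the origin.
th = np.deg2rad(30)
T_rot = np.array([[ np.cos(th), np.sin(th), 0.0],
                  [-np.sin(th), np.cos(th), 0.0],
                  [ 0.0,        0.0,        1.0]])
g = affine_inverse_map(255.0 * np.eye(8), T_rot)
```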


Image registration – aligning two or more images of the same scene.

In image registration, we have available the input and output images, but the specific transformation that produced the output image from the input is generally unknown. The problem is to estimate the transformation function and then use it to register the two images.

- it may be of interest to align (register) two or more images taken at approximately the same time, but using different imaging systems (e.g., an MRI scanner and a PET scanner)

- or to align images of a given location, taken by the same instrument at different moments in time (e.g., satellite images)

One approach is to solve the problem using tie points (also called control points), which are corresponding points whose locations are known precisely in the input and reference images.


How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors; these objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points both in the input image and in the reference image. A simple model based on a bilinear approximation is given by

x = c1·v + c2·w + c3·v·w + c4

y = c5·v + c6·w + c7·v·w + c8

where (v, w) and (x, y) are the coordinates of corresponding tie points (we get an 8×8 linear system for the ci).
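Because the x and y equations share the same coefficient matrix, the 8×8 system decouples into two 4×4 solves; a sketch (the tie-point values below are made up):

```python
import numpy as np

def solve_bilinear(src, dst):
    # src: 4 tie points (v, w) in the input image; dst: matching (x, y) points.
    # Model: x = c1*v + c2*w + c3*v*w + c4,  y = c5*v + c6*w + c7*v*w + c8
    A = np.array([[v, w, v * w, 1.0] for v, w in src])
    cx = np.linalg.solve(A, np.array([x for x, _ in dst], dtype=float))
    cy = np.linalg.solve(A, np.array([y for _, y in dst], dtype=float))
    return cx, cy   # coefficients (c1..c4) and (c5..c8)

src = [(0, 0), (0, 10), (10, 0), (10, 10)]
dst = [(1, 2), (1, 13), (12, 2), (13, 14)]
cx, cy = solve_bilinear(src, dst)
print(cx, cy)
```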


When 4 tie points are insufficient to obtain satisfactory registration, a frequently used approach is to select a larger number of tie points and, using this new set, subdivide the image into rectangular subregions marked by groups of 4 tie points. To each subregion we apply the transformation model described above.

The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometric distortion.


(a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c).


Probabilistic Methods

zi, i = 0, 1, …, L−1 – the values of all possible intensities in an M×N digital image

p(zk) – the probability that the intensity level zk occurs in the given image:

p(zk) = nk / (M·N)

where nk is the number of times that intensity zk occurs in the image (M·N is the total number of pixels in the image). Note that

Σ k=0..L−1 p(zk) = 1

The mean (average) intensity of an image is given by

m = Σ k=0..L−1 zk·p(zk)


The variance of the intensities is

σ² = Σ k=0..L−1 (zk − m)²·p(zk)

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

μn(z) = Σ k=0..L−1 (zk − m)ⁿ·p(zk)

(Note that μ0(z) = 1, μ1(z) = 0, and μ2(z) = σ².)

μ3(z) > 0: the intensities are biased to values higher than the mean

μ3(z) < 0: the intensities are biased to values lower than the mean


μ3(z) = 0: the intensities are distributed approximately equally on both sides of the mean

Fig. 1 – (a) Low contrast; (b) medium contrast; (c) high contrast.

Figure 1(a) – standard deviation 14.3 (variance = 204.5)

Figure 1(b) – standard deviation 31.6 (variance = 998.6)

Figure 1(c) – standard deviation 49.2 (variance = 2420.6)
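These statistics can be computed directly from the normalized histogram; a sketch (random test data stands in for a real image):

```python
import numpy as np

def intensity_moments(img, L=256):
    # p(z_k) = n_k / (M*N), k = 0 .. L-1
    p = np.bincount(img.ravel(), minlength=L) / img.size
    z = np.arange(L)
    m = np.sum(z * p)                      # mean intensity
    mu2 = np.sum((z - m) ** 2 * p)         # variance (contrast measure)
    mu3 = np.sum((z - m) ** 3 * p)         # third moment (bias about the mean)
    return m, mu2, mu3

img = np.random.default_rng(1).integers(0, 256, (100, 100), dtype=np.uint8)
m, mu2, mu3 = intensity_moments(img)
print(m, np.sqrt(mu2), mu3)   # mean, standard deviation, third moment
```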


Intensity Transformations and Spatial Filtering

g(x, y) = T[f(x, y)]

where f(x, y) is the input image, g(x, y) is the output image, and T is an operator on f, defined over a neighborhood of (x, y).

- the neighborhood of the point (x, y), Sxy, usually is rectangular, centered on (x, y), and much smaller in size than the image


- in spatial filtering, the operator T (the neighborhood and the operation applied on it) is called a spatial filter (also: spatial mask, kernel, template, or window)

When Sxy = {(x, y)} (the smallest possible neighborhood), T becomes an intensity (gray-level or mapping) transformation function:

s = T(r)

where s and r denote, respectively, the intensities of g and f at (x, y).

Figure 2 (left): T produces an output image of higher contrast than the original, by darkening the intensity levels below k and brightening the levels above k. This technique is called contrast stretching.

Fig. 2 – Intensity transformation functions: left – contrast stretching; right – thresholding function.


Figure 2 (right): T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L−1] is obtained using the function

s = T(r) = (L − 1) − r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image


Original image (left) and its negative (right).


Log Transformations

s = T(r) = c·log(1 + r),   c – constant, r ≥ 0

Some basic intensity transformation functions


This transformation maps a narrow range of low intensity values in the input into a wider range of output values. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values. The opposite is true of the inverse log transformation. The log function compresses the dynamic range of images with large variations in pixel values.

Figure 4(a) – intensity values in the range 0 to 1.5 × 10⁶

Figure 4(b) – log transformation of Figure 4(a) with c = 1; range 0 to 6.2

Fig. 4 – (a) Fourier spectrum; (b) log transformation applied to (a), with c = 1.


Power-Law (Gamma) Transformations

s = T(r) = c·r^γ,   c, γ – positive constants

(sometimes written s = c·(r + ε)^γ, where the offset ε accounts for a measurable output when the input is zero)

Plots of gamma transformation for different values of γ (c=1)


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. Curves with γ > 1 have the opposite effect of those generated with γ < 1.

c = γ = 1: the identity transformation.

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
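A sketch of the three point transforms discussed so far (normalizing to [0, 1] before applying the power law is a choice of this sketch, not the only convention):

```python
import numpy as np

def negative(r, L=256):
    # s = (L - 1) - r
    return (L - 1) - r

def log_transform(r, c=1.0):
    # s = c * log(1 + r)
    return c * np.log1p(r.astype(float))

def gamma_transform(r, gamma, c=1.0, L=256):
    # s = c * r^gamma, computed on intensities normalized to [0, 1]
    r01 = r.astype(float) / (L - 1)
    return (L - 1) * c * r01 ** gamma

img = np.arange(256, dtype=np.uint8).reshape(16, 16)
neg = negative(img)
gam = gamma_transform(img, 0.4)   # gamma < 1 brightens dark regions
```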


(a) aerial image; (b)–(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively.


Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

Fig. 5 – Contrast stretching: (a) form of the transformation function; (b) low-contrast image; (c) result of contrast stretching; (d) result of thresholding.


The transformation function has the form

         (s1/r1)·r                                      r ∈ [0, r1]
T(r) =   s1 + ((s2 − s1)/(r2 − r1))·(r − r1)            r ∈ [r1, r2]
         s2 + ((L − 1 − s2)/(L − 1 − r2))·(r − r2)      r ∈ [r2, L − 1]


r1 = s1 and r2 = s2: the identity transformation (no change)

r1 = r2, s1 = 0, s2 = L − 1: a thresholding function

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c) – contrast stretching, obtained by setting the parameters (r1, s1) = (rmin, 0) and (r2, s2) = (rmax, L−1), where rmin and rmax denote the minimum and maximum gray levels in the image, respectively. The transformation function stretched the levels linearly from their original range to the full range [0, L−1].

Figure 5(d) – the thresholding function was used, with (r1, s1) = (m, 0) and (r2, s2) = (m, L−1), where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.
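A sketch of the stretching function as linear interpolation between the knot points (0, 0), (r1, s1), (r2, s2), (L−1, L−1), assuming 0 < r1 < r2 < L−1 so the knots are strictly increasing:

```python
import numpy as np

def contrast_stretch(r, r1, s1, r2, s2, L=256):
    # Piecewise-linear mapping through (0,0), (r1,s1), (r2,s2), (L-1,L-1)
    return np.interp(r.astype(float), [0, r1, r2, L - 1], [0, s1, s2, L - 1])

img = np.random.default_rng(2).integers(60, 180, (64, 64), dtype=np.uint8)
# Full-range stretch: (r1, s1) = (r_min, 0), (r2, s2) = (r_max, L-1)
out = contrast_stretch(img, img.min(), 0, img.max(), 255)
print(out.min(), out.max())   # 0.0 255.0
```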


Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two basic approaches to intensity-level slicing:

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities; this produces a binary image

2. brighten (or darken) the desired range of intensities, but leave unchanged all other intensities in the image


The first transformation highlights intensity range [A, B] and reduces all other intensities to a lower level; the second highlights range [A, B] and preserves all other intensities.

Figure 6 (left) – aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the scale of intensities. This type of enhancement produces a binary image,


which is useful for studying the shape of the flow of the contrast substance (e.g., for detecting blockages).

In Figure 6 (right), the second technique was used: a band of intensities in the mid-gray region around the mean intensity was set to black; the other intensities remain unchanged.

Fig. 6 – Aortic angiogram and intensity-sliced versions.


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2.

This technique highlights the contribution made to the overall image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit; plane 8 – the highest-order bit).


The binary image for the 8th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all intensities between 0 and 127 to 0, and all levels between 128 and 255 to 1.

The bit-slicing technique is useful for analyzing the relative importance of each bit in the image, which helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
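A sketch of bit-plane extraction (using 0-based plane indices, so index 7 corresponds to the slide's plane 8):

```python
import numpy as np

def bit_planes(img):
    # Returns 8 binary images; plane k holds bit k of each pixel.
    return [(img >> k) & 1 for k in range(8)]

img = np.random.default_rng(3).integers(0, 256, (8, 8), dtype=np.uint8)
planes = bit_planes(img)
# The highest-order plane equals thresholding at 128:
print(np.array_equal(planes[7], (img >= 128).astype(np.uint8)))   # True
```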



A (digital) path (or curve) from pixel p with coordinates (xy) to q with coordinates (st)

is a sequence of distinct pixels with coordinates

and are adjacent 0 0 1 1

1 1

( ) ( ) ( ) ( ) ( )( ) ( ) 12

n n

i i i i

x y x y x y x y s tx y x y i n

The length of the path is n If 0 0( ) ( )n nx y x y the path is closed

Depending on the type of adjacency considered the paths are 4- 8- or m-paths

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image R is a region of the image if R is a connected set

Two regions R1 and R2 are said to be adjacent if 1 2R R form a connected set Regions

that are not adjacent are said to be disjoint When referring to regions only 4- and

8-adjacency are considered

Digital Image Processing

Week 1

Suppose that an image contains K disjoint regions 1 kR k K none of which

touches the image border

the complement of 1

( )K

cu k u u

k

R R R R

We call all the points in Ru the foreground of the image and the points in ( )cuR the

background of the image

The boundary (border or contour) of a region R is the set of points that are adjacent to

points in the complement of R (R)c The border of an image is the set of pixels in the

region that have at least one background neighbor This definition is referred to as the

inner border to distinguish it from the notion of outer border which is the corresponding

border in the background

Digital Image Processing

Week 1

Distance measures

For pixels p q and z with coordinates (xy) (st) and (vw) respectively D is a distance

function or metric if

(a) D(p q) ge 0 D(p q) = 0 iff p=q

(b) D(p q) = D(q p)

(c) D(p z) le D(p q) + D(q z)

The Euclidean distance between p and q is defined as 1

2 2 2 22( ) ( ) ( ) ( ) ( )eD p q x s y t x s y t

The pixels q for which ( )eD p q r are the points contained in a disk of radius r

centered at (x y)

Digital Image Processing

Week 1

The D4 distance (also called city-block distance) between p and q is defined as

4( ) | | | |D p q x s y t

The pixels q for which 4( )D p q r form a diamond centered at (xy)

4

22 1 2

2 2 1 0 1 22 1 2

2

D

The pixels with D4 = 1 are the 4-neighbors of (x y)

The D8 distance (called the chessboard distance) between p and q is defined as

8( ) max| | | |D p q x s y t

The pixels q for which 8( )D p q r form a square centered at (x y)

Digital Image Processing

Week 1

8

2 2 2 2 22 1 1 1 2

2 2 1 0 1 22 1 1 1 22 2 2 2 2

D

The pixels with D8 = 1 are the 8-neighbors of (x y)

D4 and D8 distances are independent of any paths that might exist between p and q

because these distances involve only the coordinates of the point

Digital Image Processing

Week 1

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

11 12 11 12

21 22 21 22

a a b ba a b b

Array product

11 12 11 12 11 11 12 12

21 22 21 22 21 21 22 21

a a b b a b a ba a b b a b a b

Matrix product

11 12 11 12 11 11 12 21 11 12 12 21

21 22 21 22 21 11 22 21 21 12 22 22

a a b b a b a b a b a ba a b b a b a b a b a b

We assume array operations unless stated otherwise

Digital Image Processing

Week 1

Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether it is

linear or nonlinear

( ) ( )H f x y g x y

H is said to be a linear operator if

images1 2 1 2

1 2

( ) ( ) ( ) ( )

H a f x y b f x y a H f x y b H f x y

a b f f

Example of nonlinear operator

the maximum value of the pixels of image max ( )H f f x y f

1 2

0 2 6 5 1 1

2 3 4 7f f a b

Digital Image Processing

Week 1

1 2

0 2 6 5 6 3max max 1 ( 1) max 2

2 3 4 7 2 4a f b f

0 2 6 51 max ( 1) max 3 ( 1)7 4

2 3 4 7

Arithmetic Operations in Image Processing

Let g(xy) denote a corrupted image formed by the addition of noise ( ) ( ) ( )g x y f x y x y

f(xy) ndash the noiseless image η(xy) the noise uncorrelated and has 0 average value

For a random variable z with mean m E[(z-m)2] is the variance (E( ∙ ) is the expected

value) The covariance of two random variables z1 and z2 is defined as E[(z1-m1) (z2-m2)]

The two random variables are uncorrelated when their covariance is 0

Digital Image Processing

Week 1

Objective reduce noise by adding a set of noisy images ( )ig x y (technique frequently

used in image enhancement)

1

1( ) ( )K

ii

g x y g x yK

If the noise satisfies the properties stated above we have

2 2( ) ( )

1( ) ( ) g x y x yE g x y f x yK

( ( ))E g x y is the expected value of g and and 2 2( ) ( )g x y x y are the variances of

and g respectively The standard deviation (square root of the variance) at any point in

the average image is

( ) ( )1

g x y x yK

Digital Image Processing

Week 1

As K increases the variability (as measured by the variance or the standard deviation) of

the pixel values at each location (x y) decreases Because ( ) ( )E g x y f x y this

means that ( )g x y approaches f(x y) as the number of noisy images used in the

averaging process increases

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis Figure 226(a) shows an 8-bit image in which corruption

was simulated by adding to it Gaussian noise with zero mean and a standard deviation of

64 intensity levels Figures 226(b)-(f) show the result of averaging 5 10 20 50 and 100

images respectively

Digital Image Processing

Week 1

Fig 3 Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner) Results of averaging 5 10 20 50

100 noisy images

a b c d e f

Digital Image Processing

Week 1

A frequent application of image subtraction is in the enhancement of differences between

images

(a) (b) (c)

Fig 4 (a) Infrared image of Washington DC area (b) Image obtained from (a) by setting to zero the least

significant bit of each pixel (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a) The two images seem almost the same Figure 4(c) is the difference between

images (a) and (b) Black (0) values in Figure (c) indicate locations where there is no

difference between images (a) and (b)

Digital Image Processing

Week 1

Mask mode radiography ( ) ( ) ( )g x y f x y h x y

h(x y) the mask is an X-ray image of a region of a patientrsquos body captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source The procedure consists of injecting an X-ray contrast medium into the patientrsquos

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed

Digital Image Processing

Week 1

a b c d Fig 5 ndash Angiography ndash subtraction example (a) ndash mask image (b) ndash live image (c) ndash difference between (a) and (b) (d) - image (c) enhanced

Digital Image Processing

Week 1

An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form ( ) ( ) ( )g x y f x y h x y

f(x y) ndash the ldquoperfect imagerdquo h(x y) ndash the shading function

When the shading function is known

( )( )( )

g x yf x yh x y

h(x y) is unknown but we have access to the imaging system we can obtain an

approximation to the shading function by imaging a target of constant intensity When the

sensor is not available often the shading pattern can be estimated from the image

Digital Image Processing

Week 1

(a) (b) (c)

Fig 6 Shading correction (a) ndash Shaded image of a tungsten filament magnified 130 times (b) - shading pattern (c) corrected image

Digital Image Processing

Week 1

Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1rsquos (white) in the ROI and 0rsquos elsewhere There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

(a) (b) (c)

Fig 7 (a) ndash digital dental X-ray image (b) - ROI mask for teeth with fillings (c) product of (a) and (b)

Digital Image Processing

Week 1

In practice most images are displayed using 8 bits the image values are in the range [0255] TIFF JPEG images ndash conversion to this range is automatic The conversion depends on the system used Difference of two images can produce image with values in the range [-255255] Addition of two images ndash range [0510] Many software packages simply set the negative values to 0 and set to 255 all values greater than 255 A more appropriate procedure compute

min( )mf f f

0 ( 255)max( )

ms

m

ff K K K

f

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image


Neighborhood operations

Let S_xy denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y) based on the values of the intensities of the points in S_xy. For example, if S_xy is a rectangular neighborhood of size m × n centered at (x, y), we can assign the new value of intensity by computing the average value of the pixels in S_xy:

g(x, y) = (1 / mn) Σ_{(r, c) ∈ S_xy} f(r, c)

The net effect is to perform local blurring of the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
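A sketch of this local averaging (edge handling by padding is one choice among several; the notes do not specify one):

import numpy as np

def local_mean(f, m=3, n=3):
    # average over an m x n neighborhood S_xy; borders handled by edge padding
    f = f.astype(np.float64)
    padded = np.pad(f, ((m // 2,), (n // 2,)), mode="edge")
    g = np.zeros_like(f)
    for i in range(m):
        for j in range(n):
            g += padded[i:i + f.shape[0], j:j + f.shape[1]]
    return g / (m * n)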


Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules)

A geometric transformation consists of 2 basic operations:

1. a spatial transformation of coordinates

2. intensity interpolation, which assigns intensity values to the spatially transformed pixels

The coordinate system transformation: (x, y) = T[(v, w)]

(v, w) – pixel coordinates in the original image

(x, y) – pixel coordinates in the transformed image


T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform:

[x y 1] = [v w 1] T = [v w 1] [ t_11  t_12  0
                                t_21  t_22  0
                                t_31  t_32  1 ]

that is, x = t_11 v + t_21 w + t_31 and y = t_12 v + t_22 w + t_32.        (AT)

This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3 × 3 matrix equal to the matrix product of the scaling, rotation, and translation matrices from Table 1.


Table 1 – Affine transformation matrices (identity, scaling, rotation, translation, shear).
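A sketch of forming such a composite matrix in the [v w 1] row-vector convention of (AT); the particular scale factor, angle, offsets, and the rotation sign convention are illustrative assumptions:

import numpy as np

def scaling(cx, cy):
    return np.array([[cx, 0, 0], [0, cy, 0], [0, 0, 1.0]])

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1.0]])

def translation(tx, ty):
    return np.array([[1, 0, 0], [0, 1, 0], [tx, ty, 1.0]])

# resize by 1.5x, rotate by 30 degrees, then move by (100, 50):
# one 3 x 3 matrix, applied to row vectors as [x y 1] = [v w 1] @ T
T = scaling(1.5, 1.5) @ rotation(np.pi / 6) @ translation(100, 50)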


The preceding transformations relocate pixels on an image to new locations. To complete the process, we have to assign intensity values to those locations. This task is done using intensity interpolation (nearest neighbor, bilinear, or bicubic interpolation).

In practice, we can use equation (AT) in two basic ways:

forward mapping: scan the pixels of the input image (v, w) and compute the new spatial location (x, y) of the corresponding pixel in the new image using (AT) directly.

Problems:

- intensity assignment when 2 or more pixels in the original image are transformed to the same location in the output image

- some output locations have no correspondent in the original image (no intensity assignment)


inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image, (v, w) = T⁻¹[(x, y)].

The method then interpolates among the nearest input pixels to determine the intensity of the output pixel value.

Inverse mappings are more efficient to implement than forward mappings and are used in numerous commercial implementations of spatial transformations (MATLAB, for example).
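A sketch of inverse mapping with the simplest (nearest-neighbor) interpolation; keeping the output the same size as the (grayscale) input is a simplifying assumption:

import numpy as np

def warp_inverse(f, T):
    # for each output pixel (x, y): (v, w) = [x y 1] @ inv(T), then copy the
    # nearest input pixel; locations falling outside the input stay 0
    Tinv = np.linalg.inv(T)
    M, N = f.shape
    g = np.zeros_like(f)
    for x in range(M):
        for y in range(N):
            v, w, _ = np.array([x, y, 1.0]) @ Tinv
            vi, wi = int(round(v)), int(round(w))
            if 0 <= vi < M and 0 <= wi < N:
                g[x, y] = f[vi, wi]
    return g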


Image registration – align two or more images of the same scene.

In image registration we have available the input and output images, but the specific transformation that produced the output image from the input is generally unknown. The problem is to estimate the transformation function and then use it to register the two images:

- it may be of interest to align (register) two or more images taken at approximately the same time but using different imaging systems (an MRI scanner and a PET scanner, for example)

- align images of a given location taken by the same instrument at different times (satellite images)

The problem is solved using tie points (also called control points), which are corresponding points whose locations are known precisely in the input and reference images.


How to select tie points:

- interactively selecting them

- use of algorithms that try to detect these points automatically

- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors; these objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points both on the input image and on the reference image. A simple model based on a bilinear approximation is given by

x = c_1 v + c_2 w + c_3 v w + c_4
y = c_5 v + c_6 w + c_7 v w + c_8

where (v, w) and (x, y) are the coordinates of corresponding tie points (we get an 8 × 8 linear system for the c_i).
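Setting up and solving that 8 × 8 system is mechanical; a sketch (the tie-point lists are assumed to hold 4 corresponding (v, w) and (x, y) pairs):

import numpy as np

def bilinear_model(src_pts, dst_pts):
    # x = c1*v + c2*w + c3*v*w + c4,  y = c5*v + c6*w + c7*v*w + c8
    A = np.zeros((8, 8))
    b = np.zeros(8)
    for i, ((v, w), (x, y)) in enumerate(zip(src_pts, dst_pts)):
        A[2 * i, 0:4] = [v, w, v * w, 1]        # row for the x equation
        A[2 * i + 1, 4:8] = [v, w, v * w, 1]    # row for the y equation
        b[2 * i], b[2 * i + 1] = x, y
    return np.linalg.solve(A, b)                # c1 .. c8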


When 4 tie points are insufficient to obtain satisfactory registration, an approach used frequently is to select a larger number of tie points and, using this new set of tie points, subdivide the image into rectangular regions marked by groups of 4 tie points. On each subregion marked by 4 tie points we apply the transformation model described above. The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometric distortion.


(a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c).


Probabilistic Methods

z_i = the values of all possible intensities in an M × N digital image, i = 0, 1, …, L−1

p(z_k) = the probability that the intensity level z_k occurs in the given image:

p(z_k) = n_k / (MN)

where n_k is the number of times that intensity z_k occurs in the image (MN is the total number of pixels in the image), so that

Σ_{k=0}^{L−1} p(z_k) = 1

The mean (average) intensity of an image is given by

m = Σ_{k=0}^{L−1} z_k p(z_k)


The variance of the intensities is

σ^2 = Σ_{k=0}^{L−1} (z_k − m)^2 p(z_k)

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

μ_n(z) = Σ_{k=0}^{L−1} (z_k − m)^n p(z_k)

(note that μ_0(z) = 1, μ_1(z) = 0, and μ_2(z) = σ^2)

μ_3(z) > 0: the intensities are biased to values higher than the mean

μ_3(z) < 0: the intensities are biased to values lower than the mean

μ_3(z) = 0: the intensities are distributed approximately equally on both sides of the mean

Fig. 1 – (a) low contrast; (b) medium contrast; (c) high contrast.

Figure 1(a) – standard deviation 14.3 (variance = 204.5)

Figure 1(b) – standard deviation 31.6 (variance = 998.6)

Figure 1(c) – standard deviation 49.2 (variance = 2420.6)
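These statistics come straight from the normalized histogram; a sketch (assumes an 8-bit, i.e. uint8, image):

import numpy as np

def intensity_stats(f, L=256):
    z = np.arange(L)
    n = np.bincount(f.ravel(), minlength=L)
    p = n / f.size                       # p(z_k) = n_k / (M*N), sums to 1
    m = np.sum(z * p)                    # mean intensity
    var = np.sum((z - m) ** 2 * p)       # variance (a contrast measure)
    mu3 = np.sum((z - m) ** 3 * p)       # sign shows the bias about the mean
    return m, var, mu3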


Intensity Transformations and Spatial Filtering

g(x, y) = T[f(x, y)]

f(x, y) – input image; g(x, y) – output image; T – an operator on f, defined over a neighborhood of (x, y)

- the neighborhood of the point (x, y), S_xy, usually is rectangular, centered on (x, y), and much smaller in size than the image


- spatial filtering: the operator T (the neighborhood and the operation applied on it) is called a spatial filter (spatial mask, kernel, template, or window)

- when S_xy has size 1 × 1 (a single pixel), T becomes an intensity (gray-level or mapping) transformation function

s = T(r)

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2, left: T produces an output image of higher contrast than the original by darkening the intensity levels below k and brightening the levels above k – this technique is called contrast stretching.

Fig. 2 – Intensity transformation functions: left – contrast stretching; right – thresholding function.


Figure 2, right: T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L−1] is obtained using the function

s = T(r) = L − 1 − r

- the equivalent of a photographic negative

- a technique suited for enhancing white or gray detail embedded in dark regions of an image


Original image (left) and its negative (right).


Log Transformations: s = T(r) = c log(1 + r), where c is a positive constant and r ≥ 0.

Some basic intensity transformation functions


This transformation maps a narrow range of low intensity values in the input into a wider range of output values. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values. The opposite is true of the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a) – intensity values in the range 0 to 1.5 × 10^6

Figure 4(b) – log transformation of Figure 4(a) with c = 1 – range 0 to 6.2

Fig. 4 – (a) Fourier spectrum; (b) log transformation applied to (a), with c = 1.
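A sketch of the log transformation with display rescaling (assumes a nonnegative, non-zero input array):

import numpy as np

def log_transform(r, c=1.0):
    # s = c * log(1 + r), then rescaled to [0, 255] for display
    s = c * np.log1p(r.astype(np.float64))
    return (255.0 * s / s.max()).astype(np.uint8)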


Power-Law (Gamma) Transformations

s = T(r) = c r^γ, where c and γ are positive constants (sometimes written s = c (r + ε)^γ to account for an offset when the input is zero)

Plots of gamma transformation for different values of γ (c=1)


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. The curves with γ > 1 have the opposite effect of those generated with values of γ < 1.

c = γ = 1: the identity transformation.

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.


(a) aerial image; (b)–(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively.
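A sketch of the gamma transformation (normalizing intensities to [0, 1] before exponentiation is an implementation assumption):

import numpy as np

def gamma_transform(r, gamma, c=1.0, L=256):
    # s = c * r**gamma, computed on intensities scaled to [0, 1]
    r_norm = r.astype(np.float64) / (L - 1)
    s = c * r_norm ** gamma
    return np.clip(s * (L - 1), 0, L - 1).astype(np.uint8)

# e.g. darkening a washed-out image, as in the aerial example: gamma_transform(img, 3.0)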


Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so that it spans the full intensity range of the recording medium or display device

Fig. 5 – (a) piecewise-linear transformation function; (b) low-contrast image; (c) result of contrast stretching; (d) result of thresholding.


T(r) =

  (s_1 / r_1) r,                                      r ∈ [0, r_1]

  s_1 + ((s_2 − s_1) / (r_2 − r_1)) (r − r_1),        r ∈ (r_1, r_2]

  s_2 + ((L − 1 − s_2) / (L − 1 − r_2)) (r − r_2),    r ∈ (r_2, L − 1]


r_1 = s_1 and r_2 = s_2: identity transformation (no change)

r_1 = r_2, s_1 = 0, s_2 = L − 1: thresholding function

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c) – contrast stretching, obtained by setting the parameters (r_1, s_1) = (r_min, 0) and (r_2, s_2) = (r_max, L − 1), where r_min and r_max denote the minimum and maximum gray levels in the image, respectively. The transformation function thus stretched the levels linearly from their original range to the full range [0, L − 1].

Figure 5(d) – the thresholding function was used, with (r_1, s_1) = (m, 0) and (r_2, s_2) = (m, L − 1), where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.
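The whole piecewise map is just linear interpolation through the control points (0, 0), (r_1, s_1), (r_2, s_2), (L−1, L−1); a sketch, assuming 0 < r_1 < r_2 < L−1:

import numpy as np

def contrast_stretch(r, r1, s1, r2, s2, L=256):
    # piecewise-linear stretch through (0,0), (r1,s1), (r2,s2), (L-1,L-1)
    r = r.astype(np.float64)
    return np.interp(r, [0, r1, r2, L - 1], [0, s1, s2, L - 1]).astype(np.uint8)

# illustrative full-range stretch of the band [50, 200]:
# stretched = contrast_stretch(img, 50, 0, 200, 255)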


Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches to intensity-level slicing:

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities (Figure 3.11(a))

2. brighten (or darken) the desired range of intensities but leave unchanged all other intensities in the image (Figure 3.11(b))
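Both approaches reduce to a comparison against the band [A, B] of interest; a sketch (band limits and output levels are illustrative):

import numpy as np

def slice_binary(f, a, b, lo=0, hi=255):
    # approach 1: one value inside [a, b], another elsewhere (binary output)
    return np.where((f >= a) & (f <= b), hi, lo).astype(np.uint8)

def slice_preserve(f, a, b, value=255):
    # approach 2: set [a, b] to a chosen value, leave the rest unchanged
    g = f.copy()
    g[(f >= a) & (f <= b)] = value
    return g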


Figure 6 (left) – aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the scale of intensities. This type of enhancement produces a binary image, which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example).

In Figure 6 (right), the second technique was used: a band of intensities in the mid-gray range around the mean intensity was set to black, while the other intensities remain unchanged.

Fig. 6 – Aortic angiogram and intensity-sliced versions.


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255] with an 8-bit representation in base 2.

This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit, plane 8 – the highest-order bit).


The binary image for the 8th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all the intensities between 0 and 127 to 0 and all levels between 128 and 255 to 1. The bit-slicing technique is useful for analyzing the relative importance of each bit in the image, which helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
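Extracting the planes is a matter of shifting and masking bits; a sketch for uint8 images:

import numpy as np

def bit_planes(f):
    # returns the 8 binary planes; index 0 = lowest-order bit (plane 1 above)
    return [(f >> k) & 1 for k in range(8)]

# the 8th plane equals the thresholding view described above: for uint8 images,
# (img >> 7) & 1 gives 1 exactly where img >= 128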


Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a) The two images seem almost the same Figure 4(c) is the difference between

images (a) and (b) Black (0) values in Figure (c) indicate locations where there is no

difference between images (a) and (b)

Digital Image Processing

Week 1

Mask mode radiography ( ) ( ) ( )g x y f x y h x y

h(x y) the mask is an X-ray image of a region of a patientrsquos body captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source The procedure consists of injecting an X-ray contrast medium into the patientrsquos

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed

Digital Image Processing

Week 1

a b c d Fig 5 ndash Angiography ndash subtraction example (a) ndash mask image (b) ndash live image (c) ndash difference between (a) and (b) (d) - image (c) enhanced

Digital Image Processing

Week 1

An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form ( ) ( ) ( )g x y f x y h x y

f(x y) ndash the ldquoperfect imagerdquo h(x y) ndash the shading function

When the shading function is known

( )( )( )

g x yf x yh x y

h(x y) is unknown but we have access to the imaging system we can obtain an

approximation to the shading function by imaging a target of constant intensity When the

sensor is not available often the shading pattern can be estimated from the image

Digital Image Processing

Week 1

(a) (b) (c)

Fig 6 Shading correction (a) ndash Shaded image of a tungsten filament magnified 130 times (b) - shading pattern (c) corrected image

Digital Image Processing

Week 1

Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1rsquos (white) in the ROI and 0rsquos elsewhere There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

(a) (b) (c)

Fig 7 (a) ndash digital dental X-ray image (b) - ROI mask for teeth with fillings (c) product of (a) and (b)

Digital Image Processing

Week 1

In practice most images are displayed using 8 bits the image values are in the range [0255] TIFF JPEG images ndash conversion to this range is automatic The conversion depends on the system used Difference of two images can produce image with values in the range [-255255] Addition of two images ndash range [0510] Many software packages simply set the negative values to 0 and set to 255 all values greater than 255 A more appropriate procedure compute

min( )mf f f

0 ( 255)max( )

ms

m

ff K K K

f

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Digital Image Processing

Week 1

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

( )

1( ) ( )xyr c S

g x y f r cm n

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation ( ) [( )]x y T v w

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image

Digital Image Processing

Week 1

[( )] ( )2 2v wT v w shrinks the original image half its size in both spatial directions

Affine transform

11 1211 21 31

21 2212 22 33

31 32

0[ 1] [ 1] [ 1] 0

1

t tx t v t w t

x y v w T v w t ty t v t w t

t t

(AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

Digital Image Processing

Week 1

inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image (v w) 1( ) ( )v w T x y

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean


μ_3(z) ≈ 0: the intensities are distributed approximately equally on both sides of the mean.

Fig. 1. (a) Low contrast; (b) medium contrast; (c) high contrast.

Figure 1(a) – standard deviation 14.3 (variance = 204.5)

Figure 1(b) – standard deviation 31.6 (variance = 998.6)

Figure 1(c) – standard deviation 49.2 (variance = 2420.6)
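These statistics are easy to compute from the normalized histogram; a small sketch (assumed NumPy code, with a random toy image standing in for Fig. 1):

import numpy as np

def intensity_moments(img, L=256):
    """Mean, variance and third central moment of an 8-bit image,
    computed from p(z_k) = n_k / (M*N) as defined above."""
    n_k = np.bincount(img.ravel(), minlength=L)   # histogram counts
    p = n_k / img.size                            # p(z_k), sums to 1
    z = np.arange(L)
    m = np.sum(z * p)                             # mean intensity
    var = np.sum((z - m) ** 2 * p)                # variance (contrast)
    mu3 = np.sum((z - m) ** 3 * p)                # bias about the mean
    return m, var, mu3

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
m, var, mu3 = intensity_moments(img)
print(m, var, var ** 0.5)    # mean, variance, standard deviation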


Intensity Transformations and Spatial Filtering

g(x, y) = T[f(x, y)]

f(x, y) – input image; g(x, y) – output image; T – an operator on f defined over a neighborhood of (x, y).

- the neighborhood of the point (x, y), S_xy, usually is rectangular, centered on (x, y), and much smaller in size than the image.


- spatial filtering: the operator T (the neighborhood and the operation applied on it) is called a spatial filter (also spatial mask, kernel, template or window).

When S_xy has size 1×1 (a single pixel), T becomes an intensity (gray-level or mapping) transformation function

s = T(r)

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2, left: T produces an output image of higher contrast than the original by darkening the intensity levels below k and brightening the levels above k – this technique is called contrast stretching.

Fig. 2. Intensity transformation functions. Left: contrast stretching; right: thresholding function.


Figure 2, right: T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L−1] is obtained using the function s = T(r) = L − 1 − r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an image.
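For an 8-bit image (L = 256) the negative is a one-line operation; a minimal sketch (assumed NumPy code):

import numpy as np

def negative(img, L=256):
    """s = (L - 1) - r, applied element-wise to a grayscale image."""
    return (L - 1) - img

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
neg = negative(img)    # white/gray detail in dark regions becomes dark on light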


Original image and the corresponding negative image.


Log Transformations: s = T(r) = c log(1 + r), where c is a positive constant and r ≥ 0.

Some basic intensity transformation functions


This transformation maps a narrow range of low intensity values in the input into a wider range. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values. The opposite is true for the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a) – intensity values in the range 0 to 1.5 × 10^6.

Figure 4(b) – log transformation of Figure 4(a) with c = 1; range 0 to 6.2.

Fig. 4. (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.
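A sketch of the transformation on a spectrum-like array (assumed code; base-10 log is used so that an input range up to 1.5 × 10^6 maps to roughly 0–6.2, as quoted above, and the input values are random stand-ins):

import numpy as np

def log_transform(r, c=1.0):
    """s = c * log10(1 + r); compresses a large dynamic range."""
    return c * np.log10(1.0 + r)

spectrum = np.abs(np.random.randn(8, 8)) * 1.5e6   # values up to ~1.5e6
s = log_transform(spectrum)                        # roughly 0 .. 6.2
display = np.uint8(255 * s / s.max())              # rescale for display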


Power-Law (Gamma) Transformations

s = T(r) = c r^γ, where c and γ are positive constants (sometimes written s = c (r + ε)^γ to account for an offset when the input is zero).

Plots of the gamma transformation for different values of γ (c = 1).


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. The curves with γ > 1 have the opposite effect of those generated with values of γ < 1.

c = γ = 1: identity transformation.

A variety of devices used for image capture, printing and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
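A minimal sketch of the transformation (assumed code): intensities are normalized to [0, 1], raised to the power γ, then rescaled to 8 bits.

import numpy as np

def gamma_transform(img, gamma, c=1.0, L=256):
    """s = c * r**gamma, computed on intensities normalized to [0, 1]."""
    r = img.astype(np.float64) / (L - 1)
    s = c * np.power(r, gamma)
    return np.uint8(np.clip(s, 0.0, 1.0) * (L - 1))

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
darker = gamma_transform(img, 3.0)     # gamma > 1 darkens mid-tones
brighter = gamma_transform(img, 0.4)   # gamma < 1 expands dark values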


(a) – aerial image; (b)–(d) – results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0 and 5.0, respectively.


Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so that it spans the full intensity range of the recording tool or display device.

Fig. 5. (a) piecewise-linear transformation function; (b) low-contrast 8-bit image; (c) result of contrast stretching; (d) result of thresholding.


T(r) =

(s_1 / r_1) r,  for r ∈ [0, r_1]

((s_2 − s_1) / (r_2 − r_1)) (r − r_1) + s_1,  for r ∈ (r_1, r_2]

((L − 1 − s_2) / (L − 1 − r_2)) (r − r_2) + s_2,  for r ∈ (r_2, L − 1]


r_1 = s_1 and r_2 = s_2: identity transformation (no change)

r_1 = r_2, s_1 = 0 and s_2 = L − 1: thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) – contrast stretching obtained by setting the parameters (r_1, s_1) = (r_min, 0) and (r_2, s_2) = (r_max, L − 1), where r_min and r_max denote the minimum and maximum gray levels in the image, respectively. Thus the transformation function stretched the levels linearly from their original range to the full range [0, L − 1].

Figure 5(d) – the thresholding function was used, with (r_1, s_1) = (m, 0) and (r_2, s_2) = (m, L − 1), where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.
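A compact sketch of the piecewise-linear mapping (assumed code): np.interp performs exactly the linear interpolation through (0, 0), (r_1, s_1), (r_2, s_2) and (L−1, L−1), assuming 0 < r_1 ≤ r_2 < L − 1.

import numpy as np

def contrast_stretch(img, r1, s1, r2, s2, L=256):
    """Piecewise-linear map through (0,0), (r1,s1), (r2,s2), (L-1,L-1)."""
    s = np.interp(img.astype(np.float64),
                  [0, r1, r2, L - 1], [0, s1, s2, L - 1])
    return np.uint8(s)

img = np.random.randint(30, 220, (8, 8), dtype=np.uint8)   # low contrast
# Fig. 5(c) setting: (r1, s1) = (r_min, 0), (r2, s2) = (r_max, L-1)
stretched = contrast_stretch(img, img.min(), 0, img.max(), 255)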


Intensity-level slicing

- highlighting a specific range of intensities in an image.

There are two approaches for intensity-level slicing (both are sketched in code below):

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities (Figure 3.11(a));

2. brighten (or darken) the desired range of intensities but leave unchanged all other intensities in the image (Figure 3.11(b)).
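Both approaches take only a few lines (assumed NumPy code; the band [A, B] and the display values are hypothetical):

import numpy as np

def slice_binary(img, A, B):
    """Approach 1: white inside [A, B], black elsewhere (binary image)."""
    return np.where((img >= A) & (img <= B), 255, 0).astype(np.uint8)

def slice_preserve(img, A, B, value=255):
    """Approach 2: set [A, B] to `value`, leave the rest unchanged."""
    out = img.copy()
    out[(img >= A) & (img <= B)] = value
    return out

img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
binary = slice_binary(img, 150, 255)       # highlight a bright band
touched = slice_preserve(img, 150, 255)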


Figure 6 (left) – aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the scale of intensities. This type of enhancement produces a binary image, which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example).

In Figure 6 (right) the second technique was used: a band of intensities in the mid-gray range around the mean intensity was set to black; the other intensities remain unchanged.

(The corresponding transformations: one highlights intensity range [A, B] and reduces all other intensities to a lower level; the other highlights range [A, B] and preserves all other intensities.)

Fig. 6. Aortic angiogram and intensity-sliced versions.


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2.

This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit; plane 8 – the highest-order bit).


The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1. The bit-slicing technique is useful for analyzing the relative importance of each bit in the image – it helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
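Extracting the planes is a matter of shifting and masking; a minimal sketch (assumed code) that also checks the thresholding equivalence just described:

import numpy as np

def bit_planes(img):
    """Return the eight 1-bit planes of an 8-bit image;
    planes[0] is the lowest-order plane, planes[7] the highest."""
    return [(img >> k) & 1 for k in range(8)]

img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
planes = bit_planes(img)
# The 8th (highest-order) plane equals thresholding at 128:
assert np.array_equal(planes[7], (img >= 128).astype(np.uint8))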



Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1


1960-1970 – image processing techniques were used in medical imaging, remote Earth resources observations, and astronomy.

1970s – invention of CAT (computerized axial tomography).

CAT is a process in which a ring of detectors encircles an object (patient), and an X-ray source, concentric with the detector ring, rotates about the object. The X-rays pass through the patient and are collected at the opposite side of the ring by the detectors. As the source rotates, the procedure is repeated. Tomography consists of algorithms that use the sensed data to construct an image that represents a "slice" through the object. Motion of the object in a direction perpendicular to the ring of detectors produces a set of "slices", which can be assembled into a 3D representation of the inside of the object.

◊ geographers use DIP to study pollution patterns from aerial and satellite imagery

◊ archeology – DIP allowed restoring blurred pictures that recorded rare artifacts, lost or damaged after being photographed

◊ physics – enhance images of experiments (high-energy plasmas, electron microscopy)

◊ astronomy, biology, nuclear medicine, law enforcement, industry

DIP is also used in solving problems dealing with machine perception – extracting from an image information suitable for computer processing (statistical moments, Fourier transform coefficients, …):

◊ automatic character recognition, industrial machine vision for product assembly and inspection, military reconnaissance, automatic processing of fingerprints, machine processing of aerial and satellite imagery for weather prediction, the Internet

Examples of Fields that Use DIP

Images can be classified according to their sources (visual, X-ray, …).

Energy sources for images: the electromagnetic energy spectrum, acoustic, ultrasonic, electronic, computer-generated.

Electromagnetic waves can be thought of as propagating sinusoidal waves of different wavelengths, or as a stream of massless particles, each moving in a wavelike pattern with the speed of light. Each massless particle contains a certain amount (bundle) of energy, and each bundle of energy is called a photon. If spectral bands are grouped according to energy per photon, we obtain the electromagnetic spectrum, ranging from gamma rays (highest energy) to radio waves (lowest energy).

Gamma-Ray Imaging

Nuclear medicine, astronomical observations.

Nuclear medicine: the approach is to inject a patient with a radioactive isotope that emits gamma rays as it decays. Images are produced from the emissions collected by gamma-ray detectors. Images of this sort are used to locate sites of bone pathology (infections, tumors).

PET (positron emission tomography) – the patient is given a radioactive isotope that emits positrons as it decays.

Examples of gamma-ray imaging: bone scan, PET image.

X-ray imaging

Medical diagnostics, industry, astronomy.

An X-ray tube is a vacuum tube with a cathode and an anode. The cathode is heated, causing free electrons to be released. The electrons flow at high speed to the positively charged anode. When the electrons strike a nucleus, energy is released in the form of X-ray radiation. The energy (penetrating power) of the X-rays is controlled by a voltage applied across the anode and by a current applied to the filament in the cathode.

The intensity of the X-rays is modified by absorption as they pass through the patient, and the resulting energy falling on the film develops it, much in the same way that light develops photographic film.

Angiography = contrast-enhancement radiography

Angiograms = images of blood vessels.

A catheter is inserted into an artery or vein in the groin. The catheter is threaded into the blood vessel and guided to the area to be studied. When it reaches that area, an X-ray contrast medium is injected through the catheter. This enhances the contrast of the blood vessels and enables the radiologist to see any irregularities or blockages.

X-rays are used in CAT (computerized axial tomography).

X-rays are also used in industrial processes (e.g., examining circuit boards for flaws in manufacturing). Industrial CAT scans are useful when the parts can be penetrated by X-rays.

Examples of X-ray imaging: chest X-ray, aortic angiogram, head CT, the Cygnus Loop, circuit boards.

Imaging in the Ultraviolet Band

Lithography, industrial inspection, microscopy, biological imaging, astronomical observations.

Ultraviolet light is used in fluorescence microscopy. Ultraviolet light is not visible to the human eye, but when a photon of ultraviolet radiation collides with an electron in an atom of a fluorescent material, it elevates the electron to a higher energy level. The electron then relaxes to a lower level and emits light in the form of a lower-energy photon, in the visible (red) light region.

Fluorescence = emission of light by a substance that has absorbed light or other electromagnetic radiation of a different wavelength.

Fluorescence microscope = uses an excitation light to irradiate a prepared specimen, and then separates the much weaker radiating fluorescent light from the brighter excitation light.

Imaging in the Visible and Infrared Bands

Light microscopy, astronomy, remote sensing, industry, law enforcement.

LANDSAT satellite – obtained and transmitted images of the Earth from space, for the purpose of monitoring environmental conditions on the planet.

Weather observation and prediction are major applications of multispectral imaging from satellites.

Table 1 – Thematic bands in NASA's LANDSAT satellite

Satellite images of the Washington, D.C. area in the spectral bands of Table 1.

Examples of light microscopy: Taxol (anticancer agent) magnified 250X, cholesterol (40X), microprocessor (60X), nickel oxide thin film (600X), surface of an audio CD (1750X), organic superconductor (450X).

Automated visual inspection of manufactured goods:

a – a circuit board controller
b – packaged pills
c – bottles
d – air bubbles in a clear-plastic product
e – cereal
f – image of an intraocular implant

Imaging in the Microwave Band

The dominant application of imaging in the microwave band is radar:

• radar has the ability to collect data over virtually any region at any time, regardless of weather or ambient light conditions
• some radar waves can penetrate clouds; under certain conditions they can also penetrate vegetation, ice, and dry sand
• sometimes radar is the only way to explore inaccessible regions of the Earth's surface

An imaging radar works like a flash camera: it provides its own illumination (microwave pulses) to light an area on the ground and take a snapshot image. Instead of a camera lens, a radar uses an antenna and a digital device to record the images. In a radar image one can see only the microwave energy that was reflected back toward the radar antenna.

Spaceborne radar image of mountains in southeast Tibet.

Imaging in the Radio Band

Medicine, astronomy.

MRI = Magnetic Resonance Imaging

This technique places the patient in a powerful magnet and passes short pulses of radio waves through his or her body. Each pulse causes a responding pulse of radio waves to be emitted by the patient's tissues. The locations from which these signals originate and their strength are determined by a computer, which produces a 2D picture of a section of the patient.

MRI images of a human knee (left) and spine (right).

Images of the Crab Pulsar covering the electromagnetic spectrum: Gamma, X-ray, Optical, Infrared, Radio.

Other Imaging Modalities

Acoustic imaging, electron microscopy, synthetic (computer-generated) imaging.

Imaging using sound: geological explorations, industry, medicine.

Mineral and oil exploration: for image acquisition over land, one of the main approaches is to use a large truck and a large flat steel plate. The plate is pressed on the ground by the truck, and the truck is vibrated through a frequency spectrum up to 100 Hz. The strength and the speed of the returning sound waves are determined by the composition of the Earth below the surface. These are analyzed by a computer, and images are generated from the resulting analysis.

Biometry – iris

Biometry – fingerprint

Face detection and recognition

Gender identification

Image morphing

Fundamental Steps in DIP

There are two broad categories of methods:

• methods whose input and output are images;
• methods whose inputs are images, but whose outputs are attributes extracted from those images.

Outputs are images:

• image acquisition
• image filtering and enhancement
• image restoration
• color image processing
• wavelets and multiresolution processing
• compression
• morphological processing

Outputs are attributes:

• morphological processing
• segmentation
• representation and description
• object recognition

Image acquisition – may involve preprocessing, such as scaling.

Image enhancement:

• manipulating an image so that the result is more suitable than the original for a specific operation
• enhancement is problem oriented
• there is no general 'theory' of image enhancement
• enhancement uses subjective methods for image improvement
• enhancement is based on human subjective preferences regarding what a "good" enhancement result is

Image restoration:

• improving the appearance of an image
• restoration is objective – the techniques for restoration are based on mathematical or probabilistic models of image degradation

Color image processing:

• fundamental concepts in color models
• basic color processing in a digital domain

Wavelets and multiresolution processing: representing images in various degrees of resolution.

Compression: reducing the storage required to save an image, or the bandwidth required to transmit it.

Morphological processing:

• tools for extracting image components that are useful in the representation and description of shape
• a transition from processes that output images to processes that output image attributes

Segmentation:

• partitioning an image into its constituent parts or objects
• autonomous segmentation is one of the most difficult tasks of DIP
• the more accurate the segmentation, the more likely recognition is to succeed

Representation and description (almost always follows segmentation):

• segmentation produces either the boundary of a region or all the points in the region itself
• converting the data produced by segmentation to a form suitable for computer processing

• boundary representation: the focus is on external shape characteristics, such as corners or inflections
• complete region: the focus is on internal properties, such as texture or skeletal shape
• description is also called feature extraction – extracting attributes that result in some quantitative information of interest, or that are basic for differentiating one class of objects from another

Object recognition: the process of assigning a label (e.g., "vehicle") to an object based on its descriptors.

Knowledge database

Simplified diagram of a cross section of the human eye.

Three membranes enclose the eye: the cornea and sclera outer cover, the choroid, and the retina.

The cornea is a tough, transparent tissue that covers the anterior surface of the eye. Continuous with the cornea, the sclera is an opaque membrane that encloses the remainder of the optic globe.

The choroid lies directly below the sclera. This membrane contains a network of blood vessels (the major source of nutrition of the eye). The choroid is pigmented and helps reduce the amount of light entering the eye. The choroid is divided (at its anterior extreme) into the ciliary body and the iris. The iris contracts and expands to control the amount of light.

The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to the ciliary body (60-70% water, 6% fat, the rest mostly protein). The lens is slightly yellow and absorbs approximately 8% of the visible light (infrared and ultraviolet light are absorbed by proteins in the lens).

The innermost membrane is the retina. When the eye is properly focused, light from an object outside the eye is imaged on the retina. Vision is possible because of the distribution of discrete light receptors on the surface of the retina: cones and rods (6-7 million cones, 75-150 million rods).

Cones: located in the central part of the retina (the fovea); sensitive to colors; provide vision of detail; each cone is linked to its own nerve. Cone vision = photopic, or bright-light, vision.

Fovea = the place where the image of the object of interest falls.

Rods: distributed over all the retina surface; several rods are connected to a single nerve; not specialized in detail vision; serve to give a general, overall picture of the field of view; not involved in color vision; sensitive to low levels of illumination.

Blind spot: region without receptors.

Image formation in the eye

Ordinary photographic camera: the lens has a fixed focal length; focusing at various distances is done by modifying the distance between the lens and the image plane (where the film or imaging chip is located).

Human eye: the distance between the lens and the retina (the imaging region) is fixed; the focal length needed to achieve proper focus is obtained by varying the shape of the lens (the fibers in the ciliary body accomplish this, flattening or thickening the lens for distant or near objects, respectively).

Distance between the lens and the retina along the visual axis = 17 mm; range of focal lengths = 14 mm to 17 mm.

Illustration of the Mach band effect: perceived intensity is not a simple function of actual intensity.

All the inner squares have the same intensity, but they appear progressively darker as the background becomes lighter.

Optical illusions

(Slide: color names – BLUE, GREEN, YELLOW, RED, ORANGE, YELLOW, BURGUNDY, WHITE – printed in mismatched colors, a Stroop-type illusion.)

Light

• achromatic or monochromatic light – light that is void of color; the only attribute of such light is its intensity, or amount; the term gray level is used to describe monochromatic intensity, because it ranges from black, to grays, and finally to white
• chromatic light spans the electromagnetic energy spectrum from approximately 0.43 to 0.79 μm

Quantities that describe the quality of a chromatic light source:

• radiance: the total amount of energy that flows from the light source; it is usually measured in watts (W)
• luminance: measured in lumens (lm); gives a measure of the amount of energy an observer perceives from a light source

For example, light emitted from a source operating in the far infrared region of the spectrum could have significant energy (radiance), but an observer would hardly perceive it; its luminance would be almost zero.

• brightness: a subjective descriptor of light perception that is practically impossible to measure; it embodies the achromatic notion of intensity and is one of the key factors in describing color sensation

The physical meaning of an image is determined by the source of the image. For an image generated from a physical process, f(x, y) is proportional to the energy radiated by the physical source, so f(x, y) is finite and positive: 0 < f(x, y) < ∞.

f(x, y) is characterized by two components:

i(x, y) = the illumination component: the amount of source illumination incident on the scene being viewed;

r(x, y) = the reflectance component: the amount of illumination reflected by the objects in the scene;

f(x, y) = i(x, y) · r(x, y), with 0 < i(x, y) < ∞ and 0 < r(x, y) < 1.

r(x, y) = 0 – total absorption; r(x, y) = 1 – total reflectance.

i(x, y) is determined by the illumination source; r(x, y) is determined by the characteristics of the imaged objects.

In practice:

L_min ≤ l = f(x, y) ≤ L_max, where L_min = i_min · r_min and L_max = i_max · r_max.

Typical indoor values, without additional illumination: L_min ≈ 10, L_max ≈ 1000.

The interval [L_min, L_max] is called the gray (or intensity) scale. In practice the interval is often shifted to [0, L-1]: l = 0 is considered black and l = L-1 white, with intermediate values being shades of gray.

Image Sampling and Quantization

The output of most sensors is a continuous voltage waveform related to the sensed scene. Converting a continuous image f to digital form requires two operations:

- digitizing the coordinate values (x, y), which is called sampling;
- digitizing the amplitude values f(x, y), which is called quantization.

Continuous image projected onto a sensor array; result of image sampling and quantization.

Representing Digital Images

f(x, y), x = 0, 1, …, M-1, y = 0, 1, …, N-1 – spatial variables, or spatial coordinates.

f(x, y) = [ f(0, 0)      f(0, 1)      …   f(0, N-1)
            f(1, 0)      f(1, 1)      …   f(1, N-1)
            …            …            …   …
            f(M-1, 0)    f(M-1, 1)    …   f(M-1, N-1) ]

Each element of this array is called an image element, or pixel. In matrix notation:

A = [a_{i,j}], i = 0, …, M-1, j = 0, …, N-1, with a_{i,j} = f(x = i, y = j) = f(i, j).

f(0, 0) is the upper left corner of the image.

M and N are positive integers; the number of intensity levels is typically L = 2^k, so a_{i,j} ∈ [0, L-1].

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system. The upper limit is determined by saturation, the lower limit by noise.

Number of bits required to store a digitized image:

b = M × N × k; for M = N, b = N² k.

For example, a 1024 × 1024 image with k = 8 requires b = 8,388,608 bits (1 MB).

When an image can have 2^k intensity levels, the image is referred to as a k-bit image; an image with 256 discrete intensity values is an 8-bit image.

Spatial and Intensity Resolution

Spatial resolution – the smallest discernible detail in an image.

Measures: line pairs per unit distance; dots (pixels) per unit distance.

Image resolution = the largest number of discernible line pairs per unit distance (e.g., 100 line pairs per mm).

Dots per unit distance is the measure commonly used in printing and publishing. In the US this measure is expressed in dots per inch (dpi). (Newspapers are printed at 75 dpi, glossy brochures at 175 dpi.)

Intensity resolution – the smallest discernible change in intensity level.

The number of intensity levels (L) is determined by hardware considerations: L = 2^k, most commonly with k = 8. Intensity resolution in practice is given by k (the number of bits used to quantize intensity).

Fig. 1 Reducing spatial resolution: 1250 dpi (upper left), 300 dpi (upper right), 150 dpi (lower left), 72 dpi (lower right).

Reducing the number of gray levels: 256, 128, 64, 32.

Reducing the number of gray levels: 16, 8, 4, 2.

Image Interpolation – used in zooming, shrinking, rotating, and geometric corrections.

Shrinking and zooming (image resizing) are image resampling methods.

Interpolation is the process of using known data to estimate values at unknown locations.

Suppose we have an image of size 500 × 500 pixels that has to be enlarged 1.5 times, to 750 × 750 pixels. One way to do this is to create an imaginary 750 × 750 grid with the same pixel spacing as the original, and then shrink it so that it fits exactly over the original image. The pixel spacing in the 750 × 750 grid will be less than in the original image.

Problem: assignment of intensity levels in the new 750 × 750 grid.

Nearest neighbor interpolation: assign to every point in the new grid (750 × 750) the intensity of the closest pixel (nearest neighbor) from the old/original grid (500 × 500). This technique has the tendency to produce undesirable effects, like severe distortion of straight edges.
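As a rough illustration (not part of the course material), here is a minimal nearest-neighbor resampling sketch in Python with NumPy; the function name resize_nearest and the index-rounding convention are my own choices:

import numpy as np

def resize_nearest(img, new_h, new_w):
    # Resize a 2D grayscale image by nearest-neighbor interpolation.
    old_h, old_w = img.shape
    # For each output pixel, pick the closest pixel of the input grid.
    rows = np.clip(np.round(np.arange(new_h) * old_h / new_h).astype(int), 0, old_h - 1)
    cols = np.clip(np.round(np.arange(new_w) * old_w / new_w).astype(int), 0, old_w - 1)
    return img[np.ix_(rows, cols)]

# Example: enlarge a 500 x 500 image 1.5 times, to 750 x 750.
img = np.random.randint(0, 256, (500, 500), dtype=np.uint8)
big = resize_nearest(img, 750, 750)   # shape (750, 750)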


Bilinear interpolation – assign to the new (x, y) location the intensity

v(x, y) = a·x + b·y + c·x·y + d

where the four coefficients are determined from the 4 equations in 4 unknowns that can be written using the 4 nearest neighbors of the point (x, y). Bilinear interpolation gives much better results than nearest neighbor interpolation, with a modest increase in computational effort.

Bicubic interpolation – assign to the new (x, y) location an intensity that involves the 16 nearest neighbors of the point:

v(x, y) = Σ_{i=0}^{3} Σ_{j=0}^{3} c_{ij} x^i y^j

The 16 coefficients c_{ij} are obtained by solving a 16×16 linear system written using the intensity levels of the 16 nearest neighbors of (x, y).

Generally, bicubic interpolation does a better job of preserving fine detail than the bilinear technique. Bicubic interpolation is the standard used in commercial image editing programs, such as Adobe Photoshop and Corel Photopaint.

Figure 2(a) is the same as Fig. 1(d), which was obtained by reducing the resolution of the 1250 dpi image in Fig. 1(a) to 72 dpi (the size shrank from 3692 × 2812 to 213 × 162 pixels) and then zooming the reduced image back to its original size. To generate Fig. 1(d), nearest neighbor interpolation was used (both for shrinking and zooming). Figures 2(b) and (c) were generated using the same steps, but using bilinear and bicubic interpolation, respectively. Figures 2(d)-(f) were obtained by reducing the resolution from 1250 dpi to 150 dpi (instead of 72 dpi).

Fig. 2 – Interpolation examples for zooming and shrinking (nearest neighbor, bilinear, bicubic).

Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors:

(x+1, y), (x-1, y), (x, y+1), (x, y-1)

This set of pixels is called the 4-neighbors of p, denoted by N₄(p).

The 4 diagonal neighbors of p have coordinates

(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)

and are denoted N_D(p). The horizontal, vertical, and diagonal neighbors together are called the 8-neighbors of p, denoted N₈(p).

If (x, y) is on the border of the image, some of the neighbor locations in N_D(p) and N₈(p) fall outside the image.

Adjacency, Connectivity, Regions, Boundaries

Denote by V the set of intensity levels used to define adjacency:

- in a binary image, V ⊆ {0, 1} (V = {0} or V = {1});
- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, …, 255}.

We consider 3 types of adjacency:

(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if q ∈ N₄(p);

(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if q ∈ N₈(p);

(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if

q ∈ N₄(p), or

q ∈ N_D(p) and the set N₄(p) ∩ N₄(q) has no pixels whose values are from V.

Mixed adjacency is a modification of 8-adjacency. It is introduced to eliminate the ambiguities that often arise when 8-adjacency is used. Consider the example:

Binary image, V = {1}:

0 1 1
0 1 0
0 0 1

The three pixels at the top (first line) of the above arrangement show multiple (ambiguous) 8-adjacency, indicated by dashed lines in the original figure. This ambiguity is removed by using m-adjacency.

A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

(x₀, y₀), (x₁, y₁), …, (x_n, y_n)

where (x₀, y₀) = (x, y), (x_n, y_n) = (s, t), and the pixels (x_i, y_i) and (x_{i-1}, y_{i-1}) are adjacent for i = 1, 2, …, n.

The length of the path is n. If (x₀, y₀) = (x_n, y_n), the path is closed. Depending on the type of adjacency considered, the paths are 4-, 8-, or m-paths.

Let S denote a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting only of pixels from S. S is a connected set if there is a path in S between any 2 pixels in S.

Let R be a subset of pixels in an image. R is a region of the image if R is a connected set. Two regions R₁ and R₂ are said to be adjacent if R₁ ∪ R₂ forms a connected set. Regions that are not adjacent are said to be disjoint. When referring to regions, only 4- and 8-adjacency are considered.

Suppose that an image contains K disjoint regions R_k, k = 1, …, K, none of which touches the image border. Let R_u = ∪_{k=1}^{K} R_k, and let (R_u)^c denote its complement. We call all the points in R_u the foreground of the image, and the points in (R_u)^c the background of the image.

The boundary (border, or contour) of a region R is the set of points that are adjacent to points in the complement of R, (R)^c. In other words, the border of a region is the set of pixels in the region that have at least one background neighbor. This definition is referred to as the inner border, to distinguish it from the notion of outer border, which is the corresponding border in the background.

Distance measures

For pixels p, q, and z, with coordinates (x, y), (s, t), and (v, w) respectively, D is a distance function, or metric, if:

(a) D(p, q) ≥ 0, and D(p, q) = 0 iff p = q;
(b) D(p, q) = D(q, p);
(c) D(p, z) ≤ D(p, q) + D(q, z).

The Euclidean distance between p and q is defined as

D_e(p, q) = [(x - s)² + (y - t)²]^{1/2}

The pixels q for which D_e(p, q) ≤ r are the points contained in a disk of radius r centered at (x, y).

The D₄ distance (also called city-block distance) between p and q is defined as

D₄(p, q) = |x - s| + |y - t|

The pixels q for which D₄(p, q) ≤ r form a diamond centered at (x, y). For example, the pixels with D₄ ≤ 2:

        2
    2   1   2
2   1   0   1   2
    2   1   2
        2

The pixels with D₄ = 1 are the 4-neighbors of (x, y).

The D₈ distance (also called the chessboard distance) between p and q is defined as

D₈(p, q) = max(|x - s|, |y - t|)

The pixels q for which D₈(p, q) ≤ r form a square centered at (x, y). For example, the pixels with D₈ ≤ 2:

2   2   2   2   2
2   1   1   1   2
2   1   0   1   2
2   1   1   1   2
2   2   2   2   2

The pixels with D₈ = 1 are the 8-neighbors of (x, y).

The D₄ and D₈ distances are independent of any paths that might exist between p and q, because these distances involve only the coordinates of the points.
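All three metrics are easy to compute directly from the definitions; a minimal Python sketch (the helper names are mine):

import math

def d_e(p, q):
    # Euclidean distance: [(x - s)^2 + (y - t)^2]^(1/2)
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d4(p, q):
    # City-block distance: |x - s| + |y - t|
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):
    # Chessboard distance: max(|x - s|, |y - t|)
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (2, 1)
print(d_e(p, q), d4(p, q), d8(p, q))   # 2.236..., 3, 2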


Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis. Consider the 2×2 images

A = [ a₁₁  a₁₂      B = [ b₁₁  b₁₂
      a₂₁  a₂₂ ]          b₂₁  b₂₂ ]

Array product:

[ a₁₁b₁₁   a₁₂b₁₂
  a₂₁b₂₁   a₂₂b₂₂ ]

Matrix product:

[ a₁₁b₁₁ + a₁₂b₂₁    a₁₁b₁₂ + a₁₂b₂₂
  a₂₁b₁₁ + a₂₂b₂₁    a₂₁b₁₂ + a₂₂b₂₂ ]

We assume array operations throughout, unless stated otherwise.
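In NumPy terms (an illustration of the distinction, not something from the course text), the * operator is the array (elementwise) product, while @ is the matrix product:

import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A * B)   # array product:  [[ 5 12] [21 32]]
print(A @ B)   # matrix product: [[19 22] [43 50]]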


Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether a method is linear or nonlinear. Consider an operator H with

H[f(x, y)] = g(x, y)

H is said to be a linear operator if

H[a₁ f₁(x, y) + a₂ f₂(x, y)] = a₁ H[f₁(x, y)] + a₂ H[f₂(x, y)]

for any scalars a₁, a₂ and any images f₁, f₂.

Example of a nonlinear operator: the maximum value of the pixels of an image, H[f] = max f(x, y). Take

f₁ = [0 2; 2 3],  f₂ = [6 5; 4 7],  a₁ = 1,  a₂ = -1.

Then

max{ 1·[0 2; 2 3] + (-1)·[6 5; 4 7] } = max{ [-6 -3; -2 -4] } = -2

whereas

1·max{ [0 2; 2 3] } + (-1)·max{ [6 5; 4 7] } = 3 + (-1)·7 = -4.

Since -2 ≠ -4, the max operator is nonlinear.

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise:

g(x, y) = f(x, y) + η(x, y)

where f(x, y) is the noiseless image and η(x, y) is noise, uncorrelated and with zero average value.

For a random variable z with mean m, E[(z - m)²] is the variance (E(·) is the expected value). The covariance of two random variables z₁ and z₂ is defined as E[(z₁ - m₁)(z₂ - m₂)]. The two random variables are uncorrelated when their covariance is 0.

Objective: reduce the noise by averaging a set of K noisy images g_i(x, y) (a technique frequently used in image enhancement):

ḡ(x, y) = (1/K) Σ_{i=1}^{K} g_i(x, y)

If the noise satisfies the properties stated above, we have

E{ḡ(x, y)} = f(x, y)  and  σ²_ḡ(x, y) = (1/K) σ²_η(x, y)

where E{ḡ(x, y)} is the expected value of ḡ, and σ²_ḡ(x, y) and σ²_η(x, y) are the variances of ḡ and η, respectively. The standard deviation (square root of the variance) at any point in the average image is

σ_ḡ(x, y) = σ_η(x, y) / √K

As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because E{ḡ(x, y)} = f(x, y), this means that ḡ(x, y) approaches f(x, y) as the number of noisy images used in the averaging process increases.
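A quick way to see the 1/√K behavior is to simulate it. The sketch below (a synthetic constant image and Gaussian noise, both my own choices) compares the measured standard deviation of the residual with σ/√K:

import numpy as np

rng = np.random.default_rng(0)
f = np.full((256, 256), 100.0)   # noiseless test image
sigma = 64.0                     # noise standard deviation

for K in (1, 5, 10, 20, 50, 100):
    # Average K noisy observations g_i = f + eta_i
    g_bar = np.mean([f + rng.normal(0, sigma, f.shape) for _ in range(K)], axis=0)
    # The residual std should be close to sigma / sqrt(K)
    print(K, np.std(g_bar - f), sigma / np.sqrt(K))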

An important application of image averaging is in the field of astronomy, where imaging under very low light levels frequently causes sensor noise to render single images virtually useless for analysis. Figure 3(a) shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels. Figures 3(b)-(f) show the results of averaging 5, 10, 20, 50, and 100 noisy images, respectively.

Fig. 3 Image of the Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (a), and the results of averaging 5, 10, 20, 50, and 100 noisy images (b)-(f).


A frequent application of image subtraction is in the enhancement of differences between images.

Fig. 4 (a) Infrared image of the Washington, D.C. area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images.

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between images (a) and (b). Black (0) values in Figure 4(c) indicate locations where there is no difference between images (a) and (b).


Mask mode radiography:

g(x, y) = f(x, y) - h(x, y)

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images, called live images (denoted f(x, y)), of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium.

In g(x, y) the differences between h and f appear as enhanced detail. Because the images are captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries in the area being observed.

Fig. 5 – Angiography – subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c), enhanced.


An important application of image multiplication (and division) is shading correction. Suppose that an imaging sensor produces images in the form

g(x, y) = f(x, y) · h(x, y)

where f(x, y) is the "perfect image" and h(x, y) is the shading function. When the shading function is known,

f(x, y) = g(x, y) / h(x, y)

h(x, y) is usually unknown, but if we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern often can be estimated from the image itself.

Fig. 6 Shading correction: (a) shaded image of a tungsten filament, magnified 130 times; (b) shading pattern; (c) corrected image.
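A minimal sketch of shading correction under the assumption that the shading pattern h is known (or has been estimated); the small epsilon guard against division by zero is my own addition:

import numpy as np

def correct_shading(g, h, eps=1e-6):
    # Divide out a known (or estimated) shading pattern h from image g.
    return g / np.maximum(h, eps)

# Synthetic example: a flat scene multiplied by a horizontal shading ramp.
f = np.full((4, 4), 200.0)
h = np.linspace(0.5, 1.0, 4)[None, :] * np.ones((4, 1))
g = f * h
print(np.allclose(correct_shading(g, h), f))   # True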


Another use of image multiplication is masking, also called region of interest (ROI) operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of an ROI can be arbitrary, although usually it is rectangular.

Fig. 7 (a) digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b).


In practice, most images are displayed using 8 bits, so image values are expected in the range [0, 255] (for TIFF and JPEG images, conversion to this range is automatic; the conversion method depends on the system used). However, the difference of two images can produce values in the range [-255, 255], and the addition of two images values in the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure is to compute

f_m = f - min(f)

f_s = K · f_m / max(f_m)

which maps f into the range [0, K] (K = 255 for 8-bit display).
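A minimal sketch of this scaling procedure (the function name and the guard for constant images are mine):

import numpy as np

def scale_to_range(f, K=255):
    # Linearly map an image into [0, K], e.g. after subtraction or addition.
    fm = f - f.min()
    mx = fm.max()
    return K * fm / mx if mx > 0 else fm

diff = np.array([[-255.0, 0.0], [128.0, 255.0]])
print(scale_to_range(diff))   # [[0. 127.5] [191.5 255.]]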


Spatial Operations

Spatial operations are performed directly on the pixels of a given image. There are three categories of spatial operations:

• single-pixel operations
• neighborhood operations
• geometric spatial transformations

Single-pixel operations change the values of intensity for the individual pixels: s = T(z), where z is the intensity of a pixel in the original image and s is the intensity of the corresponding pixel in the processed image.

Neighborhood operations

Let S_xy denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y) based on the values of the intensities of the points in S_xy. For example, if S_xy is a rectangular neighborhood of size m × n centered at (x, y), we can assign the new value of intensity by computing the average value of the pixels in S_xy:

g(x, y) = (1 / (m·n)) Σ_{(r,c) ∈ S_xy} f(r, c)

The net effect is to perform local blurring in the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
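A minimal sketch of this neighborhood averaging, assuming zero padding at the image borders (my own choice; other border conventions are possible):

import numpy as np

def local_average(f, m=3, n=3):
    # Replace each pixel by the average of its m x n neighborhood.
    pad_r, pad_c = m // 2, n // 2
    padded = np.pad(f.astype(float), ((pad_r, pad_r), (pad_c, pad_c)))
    out = np.zeros(f.shape, dtype=float)
    for dr in range(m):          # sum the m*n shifted copies of the image
        for dc in range(n):
            out += padded[dr:dr + f.shape[0], dc:dc + f.shape[1]]
    return out / (m * n)

img = np.random.randint(0, 256, (64, 64))
blurred = local_average(img, 3, 3)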


Geometric spatial transformations and image registration

Geometric transformations modify the spatial relationship between pixels in an image. These transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules).

A geometric transformation consists of 2 basic operations:

1. a spatial transformation of coordinates;
2. intensity interpolation, which assigns intensity values to the spatially transformed pixels.

The coordinate system transformation:

(x, y) = T[(v, w)]

(v, w) – pixel coordinates in the original image; (x, y) – pixel coordinates in the transformed image.

For example, T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform:

[x y 1] = [v w 1] T = [v w 1] [ t₁₁  t₁₂  0
                                t₂₁  t₂₂  0
                                t₃₁  t₃₂  1 ]

that is, x = t₁₁v + t₂₁w + t₃₁ and y = t₁₂v + t₂₂w + t₃₂.   (AT)

This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3×3 matrix equal to the matrix product of the scaling, rotation, and translation matrices from Table 1.

Table 1 – Affine transformations.
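The sketch below composes affine transforms in the row-vector convention [v w 1]·T used above. The text does not reproduce the entries of Table 1, so the scaling, rotation, and translation matrices here are standard textbook forms, written down as an assumption rather than a transcription of the table:

import numpy as np

def scale(sx, sy):
    return np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1.0]])

def rotate(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1.0]])

def translate(tx, ty):
    return np.array([[1, 0, 0], [0, 1, 0], [tx, ty, 1.0]])

# Resize, rotate, then move: one 3x3 matrix, as described in the text.
T = scale(1.5, 1.5) @ rotate(np.pi / 4) @ translate(100, 50)
x, y, _ = np.array([10.0, 20.0, 1.0]) @ T   # transform the point (v, w) = (10, 20)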


The preceding transformations relocate pixels on an image to new locations. To complete the process, we have to assign intensity values to those locations. This task is done using intensity interpolation (nearest neighbor, bilinear, or bicubic interpolation).

In practice, we can use equation (AT) in two basic ways.

Forward mapping: scan the pixels of the input image (v, w) and compute the new spatial location (x, y) of the corresponding pixel in the new image using (AT) directly.

Problems:

- intensity assignment when 2 or more pixels in the original image are transformed to the same location in the output image;
- some output locations may have no correspondent in the original image (no intensity assignment).


Inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image as (v, w) = T⁻¹[(x, y)]. Then interpolate among the nearest input pixels to determine the intensity of the output pixel value.

Inverse mappings are more efficient to implement than forward mappings, and they are used in numerous commercial implementations of spatial transformations (MATLAB, for example).

Image registration – align two or more images of the same scene.

In image registration we have available the input and output images, but the specific transformation that produced the output image from the input is generally unknown. The problem is to estimate the transformation function and then use it to register the two images:

- it may be of interest to align (register) two or more images taken at approximately the same time, but using different imaging systems (an MRI scanner and a PET scanner, say);
- or to align images of a given location, taken by the same instrument at different moments of time (satellite images).

One way of solving the problem uses tie points (also called control points), which are corresponding points whose locations are known precisely in the input and reference images.

How to select tie points:

- selecting them interactively;
- using algorithms that try to detect these points automatically;
- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors; these objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points.

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points both in the input image and in the reference image. A simple model based on a bilinear approximation is given by

x = c₁v + c₂w + c₃vw + c₄
y = c₅v + c₆w + c₇vw + c₈

where (v, w) and (x, y) are the coordinates of the tie points (we get an 8×8 linear system for the cᵢ).
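Because the two equations share the same coefficient matrix, the 8×8 system decouples into two 4×4 systems, one for c₁…c₄ and one for c₅…c₈. A minimal sketch (the function name and the synthetic tie points are mine):

import numpy as np

def fit_bilinear(src, dst):
    # src, dst: arrays of shape (4, 2) with (v, w) and (x, y) tie points.
    v, w = src[:, 0], src[:, 1]
    A = np.column_stack([v, w, v * w, np.ones(4)])
    cx = np.linalg.solve(A, dst[:, 0])   # c1..c4
    cy = np.linalg.solve(A, dst[:, 1])   # c5..c8
    return cx, cy

src = np.array([[0, 0], [0, 100], [100, 0], [100, 100]], dtype=float)
dst = 1.1 * src + 5.0                    # synthetic distortion
cx, cy = fit_bilinear(src, dst)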

When 4 tie points are insufficient to obtain satisfactory registration, an approach used frequently is to select a larger number of tie points and, using this new set of tie points, subdivide the image into rectangular regions marked by groups of 4 tie points. On each subregion marked by 4 tie points, the transformation model described above is applied. The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometrical distortion.

(a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c).

Probabilistic Methods

Let z_i, i = 0, 1, …, L-1, denote the values of all possible intensities in an M×N digital image, and let p(z_k) denote the probability that the intensity level z_k occurs in the given image:

p(z_k) = n_k / (M·N)

where n_k is the number of times that intensity z_k occurs in the image (M·N is the total number of pixels in the image). Clearly,

Σ_{k=0}^{L-1} p(z_k) = 1

The mean (average) intensity of an image is given by

m = Σ_{k=0}^{L-1} z_k p(z_k)

The variance of the intensities is

σ² = Σ_{k=0}^{L-1} (z_k - m)² p(z_k)

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

μ_n(z) = Σ_{k=0}^{L-1} (z_k - m)^n p(z_k)

(so μ₀(z) = 1, μ₁(z) = 0, μ₂(z) = σ²).

μ₃(z) > 0: the intensities are biased to values higher than the mean;
μ₃(z) < 0: the intensities are biased to values lower than the mean;
μ₃(z) = 0: the intensities are distributed approximately equally on both sides of the mean.

Fig. 1 (a) Low contrast; (b) medium contrast; (c) high contrast.

Figure 1(a) – standard deviation 14.3 (variance = 204.5)
Figure 1(b) – standard deviation 31.6 (variance = 998.6)
Figure 1(c) – standard deviation 49.2 (variance = 2420.6)
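A minimal sketch computing these statistics from the histogram of an 8-bit image (the function name is mine):

import numpy as np

def intensity_stats(img, L=256):
    # Mean, variance, and third central moment from the intensity histogram.
    p = np.bincount(img.ravel(), minlength=L) / img.size   # p(z_k) = n_k / (M*N)
    z = np.arange(L)
    m = np.sum(z * p)                  # mean intensity
    mu2 = np.sum((z - m) ** 2 * p)     # variance
    mu3 = np.sum((z - m) ** 3 * p)     # sign indicates the bias of the histogram
    return m, mu2, mu3

img = np.random.randint(0, 256, (128, 128))
print(intensity_stats(img))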

Intensity Transformations and Spatial Filtering

g(x, y) = T[f(x, y)]

f(x, y) – input image; g(x, y) – output image; T – an operator on f, defined over a neighborhood of (x, y).

The neighborhood of the point (x, y), S_xy, usually is rectangular, centered on (x, y), and much smaller in size than the image.

In spatial filtering, the operator T (the neighborhood and the operation applied on it) is called a spatial filter (spatial mask, kernel, template, or window).

When S_xy = {(x, y)}, T becomes an intensity (gray-level, or mapping) transformation function:

s = T(r)

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2, left: T produces an output image of higher contrast than the original, by darkening the intensity levels below k and brightening the levels above k – this technique is called contrast stretching.

Fig. 2 Intensity transformation functions: left – contrast stretching; right – thresholding function.

Figure 2, right: T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image negatives

The negative of an image with intensity levels in [0, L-1] is obtained using the function

s = T(r) = L - 1 - r

This is the equivalent of a photographic negative, a technique suited for enhancing white or gray detail embedded in dark regions of an image.

Original image; negative image.

Log transformations

s = T(r) = c · log(1 + r), with c a constant and r ≥ 0.

(Figure: some basic intensity transformation functions.)

This transformation maps a narrow range of low intensity values in the input into a wider range of output values. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values; the opposite is true for the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a) – intensity values in the range 0 to 1.5 × 10⁶.
Figure 4(b) – log transformation of Figure 4(a) with c = 1 – range 0 to 6.2.

Fig. 4 (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.

Power-Law (Gamma) Transformations

s = T(r) = c·r^γ, with c and γ positive constants (sometimes written s = c·(r + ε)^γ to account for an offset when the input is zero).

(Figure: plots of the gamma transformation for different values of γ, with c = 1.)

Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. Curves with γ > 1 have the opposite effect of those generated with γ < 1. When c = γ = 1, we have the identity transformation.

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
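A minimal sketch of the gamma transformation for an 8-bit image; normalizing intensities to [0, 1] before applying r^γ and rescaling to [0, 255] is my own convention:

import numpy as np

def gamma_transform(img, c=1.0, gamma=1.0):
    # s = c * r**gamma, computed on normalized intensities.
    r = img / 255.0
    s = c * np.power(r, gamma)
    return np.clip(s * 255.0, 0, 255).astype(np.uint8)

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
darker = gamma_transform(img, gamma=3.0)     # gamma > 1 darkens
brighter = gamma_transform(img, gamma=0.4)   # gamma < 1 brightens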

(a) aerial image; (b)-(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively.

Piecewise-Linear Transformation Functions

Contrast stretching

Contrast stretching is a process that expands the range of intensity levels in an image, so that it spans the full intensity range of the recording tool or display device.

Fig. 5 (a)-(d).

The contrast-stretching transformation with control points (r₁, s₁) and (r₂, s₂) is:

T(r) = (s₁ / r₁) · r,                                      r ∈ [0, r₁]
T(r) = ((s₂ - s₁) / (r₂ - r₁)) · (r - r₁) + s₁,            r ∈ [r₁, r₂]
T(r) = ((L - 1 - s₂) / (L - 1 - r₂)) · (r - r₂) + s₂,      r ∈ [r₂, L-1]

r₁ = s₁ and r₂ = s₂: identity transformation (no change).

r₁ = r₂, s₁ = 0, s₂ = L-1: thresholding function.

Figure 5(b) shows an 8-bit image with low contrast. Figure 5(c) shows the result of contrast stretching, obtained by setting the parameters (r₁, s₁) = (r_min, 0) and (r₂, s₂) = (r_max, L-1), where r_min and r_max denote the minimum and maximum gray levels in the image, respectively. Thus, the transformation function stretched the levels linearly from their original range to the full range [0, L-1].

Figure 5(d) shows the result of using the thresholding function, with (r₁, s₁) = (m, 0) and (r₂, s₂) = (m, L-1), where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.

Intensity-level slicing

Intensity-level slicing means highlighting a specific range of intensities in an image. There are two basic approaches:

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities;
2. brighten (or darken) the desired range of intensities, but leave unchanged all other intensities in the image.

Figure 6 (left) shows an aortic angiogram near the kidney. The purpose of intensity slicing here is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the scale of intensities: it highlights the intensity range [A, B] and reduces all other intensities to a lower level. This type of enhancement produces a binary image, which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example).

In Figure 6 (right) the second technique was used: a band of intensities in the mid-gray range around the mean intensity was set to black, while the other intensities remain unchanged. This variant highlights the range [A, B] but preserves all other intensities.

Fig. 6 – Aortic angiogram and intensity-sliced versions.

Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2. This technique highlights the contribution made to the whole image appearance by each of the bits: an 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit; plane 8 – the highest-order bit).

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all the intensities between 0 and 127 to 0, and all levels between 128 and 255 to 1. The bit-slicing technique is useful for analyzing the relative importance of each bit in the image – it helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
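A minimal sketch of bit-plane extraction; it also checks the equivalence noted above between the 8-th bit plane and thresholding at 128:

import numpy as np

def bit_plane(img, k):
    # Extract bit plane k (1 = lowest order, 8 = highest) of an 8-bit image.
    return (img >> (k - 1)) & 1

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
plane8 = bit_plane(img, 8)
print(np.array_equal(plane8, (img >= 128).astype(np.uint8)))   # True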

  • DIP 1 2017
  • DIP 02 (2017)
Page 16: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image ProcessingDigital Image Processing

Week 1Week 1

loz geographers use DIP to study pollution patterns from aerial and satellite imagery

loz archeology ndash DIP allowed restoring blurred pictures that recorded rare artifacts lost or damaged after being photographed

loz physics ndash enhance images of experiments (high-energy plasmas electron microscopy)

loz astronomy biology nuclear medicine law enforcement industry

DIP ndash used in solving problems dealing with machine perception ndashextracting from an image information suitable for computer processing (statistical moments Fourier transform coefficients hellip)

loz automatic character recognition industrial machine vision for product assembly and inspection military recognizance automatic processing of fingerprints machine processing of aerial and satellite imagery for weather prediction Internet

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Examples of Fields that Use DIP

Images can be classified according to their sources (visual X-ray hellip)

Energy sources for images electromagnetic energy spectrum acoustic ultrasonic electronic computer- generated

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Electromagnetic waves can be thought as propagating sinusoidal

waves of different wavelength or as a stream of massless particles

each moving in a wavelike pattern with the speed of light Each

massless particle contains a certain amount (bundle) of energy Each

bundle of energy is called a photon If spectral bands are grouped

according to energy per photon we obtain the spectrum shown in the

image above ranging from gamma-rays (highest energy) to radio

waves (lowest energy)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Gamma-Ray Imaging

Nuclear medicine astronomical observations

Nuclear medicine

the approach is to inject a patient with a radioactive isotope that emits gamma rays as it decays

Images are produced from the emissions collected by gamma-ray detectors

Images of this sort are used to locate sites of bone pathology (infections tumors)

PET (positron emision tomography) ndash the patient is given a radioactive isotope that emits positrons as it decays

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Examples of gamma-ray imaging

Bone scan PET image

Digital Image ProcessingDigital Image Processing

Week 1Week 1

X-ray imaging

Medical diagnosticindustry astronomy

A X-ray tube is a vacuum tube with a cathode and an anode The cathode is heated causing free electrons to be released The electrons flows at high speed to the positively charged anode When the electrons strike a nucleus energy is released in the form of a X-ray radiation The energy (penetreting power) of the X-rays is controlled by a voltage applied across the anode and by a curent applied to the filament in the cathode

The intensity of the X-rays is modified by absorbtion as they pass through the patient and the resulting energy falling develops it much in the same way that light develops photographic film

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Angiography = contrast-enhancement radiography

Angiograms = images of blood vessels

A catheter is inserted into an artery or vein in the groin The catheter is threaded into the blood vessel and guided to the area to be studied When it reaches the area to be studied a X-ray contrast medium is injected through the catheter This enhances contrast of the blood vessels and enables radiologist to see anyirregularities or blockages

X-rays are used in CAT (computerized axial tomography)

X-rays used in industrial processes (examine circuit boards for flows in manifacturing)

Industrial CAT scans are useful when the parts can be penetreted by X-rays

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Examples of X-ray imaging

Chest X-rayAortic angiogram

Head CT Cygnus LoopCircuit boards

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Ultraviolet Band

Litography industrial inspection microscopy biological imaging astronomical observations

Ultraviolet light is used in fluorescence microscopy

Ultraviolet light is not visible to human eye but when a photon of ultraviolet radiation collides with an electron in an atom of a fluorescent material it elevates the electron to a higher energy level After that the electron relaxes to a lower level and emits light in the form of a lower-energy photon in the visible (red) light regionFluorescence = emission of light by a substance that has absorbed light or

other electromagentic radiation of a different wavelengthFlourescence microscope = uses an excitation light to irradiate a prepared specimen

and then it separates the much weaker radiating fluorescent light from the brighter excitation light

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Visible and Infrared Bands

Light microscopy astronomy remote sensing industry law enforcement

LANDSAT satellite ndash obtained and transmitted images of the Earth from space for purpose of monitoring environmental conditions on the planet

Weather observations and prediction produce major applications of multispectral image from satellites

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Table 1 ndash Thematic bands in NASArsquos LANDSAT satellite

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Satellite images of Washington DC area in spectral bands of the Table 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Examples of light microscopy

Taxol (anticancer agent)magnified 250X

Cholesterol(40X)

Microprocessor(60X)

Nickel oxidethin film(600X)

Surface of audio CD(1750X)

Organicsuperconductor(450X)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Automated visual inspection of manufactured goods

a bc de f

a ndash a circuit board controllerb ndash packaged pillesc ndash bottlesd ndash air bubbles in clear-plastic producte ndash cerealf ndash image of intraocular implant

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Microwave Band

The dominant aplication of imaging in the microwave band ndash radar

bull radar has the ability to collect data over virtually any region at any time regardless of weather or ambient light conditions

bull some radar waves can penetrate clouds under certain conditions canpenetrate vegetation ice dry sand

bull sometimes radar is the only way to explore inaccessible regions of theEarthrsquos surface

An imaging radar works like a flash camera it provides its own illumination(microwave pulses) to light an area on the ground and take a snapshot image Instead of a camera lens a radar uses an antenna and a digital device torecord the images In a radar image one can see only the microwave energythat was reflected back toward the radar antenna


Spaceborne radar image of mountains in southeast Tibet


Imaging in the Radio Band

Medicine, astronomy

MRI = Magnetic Resonance Imaging

This technique places the patient in a powerful magnet and passes short pulses of radio waves through his or her body

Each pulse causes a responding pulse of radio waves to be emitted by the patient's tissues.

The location from which these signals originate and their strength are determined by a computer which produces a 2D picture of a section of the patient


MRI images of a human knee (left) and spine (right)


Images of the Crab Pulsar covering the electromagnetic spectrum

Gamma X-ray Optical Infrared Radio


Other Imaging Modalities

Acoustic imaging, electron microscopy, synthetic (computer-generated) imaging

Imaging using sound: geological exploration, industry, medicine

Mineral and oil exploration

For image acquisition over land, one of the main approaches is to use a large truck and a large flat steel plate. The plate is pressed on the ground by the truck, and the truck is vibrated through a frequency spectrum up to 100 Hz. The strength and the speed of the returning sound waves are determined by the composition of the Earth below the surface. These are analysed by a computer, and images are generated from the resulting analysis.


Biometry - iris


Biometry - fingerprint


Face detection and recognition


Gender identification


Image morphing


Fundamental Steps in DIP

methods whose input and output are images

methods whose inputs are images but whose outputs are attributes extracted from those images


Outputs are images

• image acquisition

• image filtering and enhancement

• image restoration

• color image processing

• wavelets and multiresolution processing

• compression

• morphological processing


Outputs are attributes

• morphological processing

• segmentation

• representation and description

• object recognition


Image acquisition - may involve preprocessing such as scaling

Image enhancement

• manipulating an image so that the result is more suitable than the original for a specific operation

• enhancement is problem oriented

• there is no general 'theory' of image enhancement

• enhancement uses subjective methods for image improvement

• enhancement is based on human subjective preferences regarding what is a "good" enhancement result


Image restoration

• improving the appearance of an image

• restoration is objective – the techniques for restoration are based on mathematical or probabilistic models of image degradation

Color image processing

• fundamental concepts in color models

• basic color processing in a digital domain

Wavelets and multiresolution processing

representing images in various degrees of resolution


Compression

reducing the storage required to save an image or the bandwidth required to transmit it

Morphological processing

• tools for extracting image components that are useful in the representation and description of shape

• a transition from processes that output images to processes that output image attributes


Segmentation

• partitioning an image into its constituent parts or objects

• autonomous segmentation is one of the most difficult tasks of DIP

• the more accurate the segmentation, the more likely recognition is to succeed

Representation and description (almost always follows segmentation)

• segmentation produces either the boundary of a region or all the points in the region itself

• converting the data produced by segmentation to a form suitable for computer processing


• boundary representation: the focus is on external shape characteristics, such as corners or inflections

• complete region: the focus is on internal properties, such as texture or skeletal shape

• description is also called feature extraction – extracting attributes that result in some quantitative information of interest, or are basic for differentiating one class of objects from another

Object recognition

the process of assigning a label (e.g., "vehicle") to an object based on its descriptors

Knowledge database


Simplified diagram of a cross section of the human eye


Three membranes enclose the eye: the cornea and sclera (outer cover), the choroid, and the retina.

The cornea is a tough transparent tissue that covers the anterior surface of the eye

Continuous with the cornea, the sclera is an opaque membrane that encloses the remainder of the optic globe.

The choroid lies directly below the sclera. This membrane contains a network of blood vessels (the major nutrition of the eye). The choroid is pigmented and helps reduce the amount of light entering the eye. The choroid is divided (at its anterior extreme) into the ciliary body and the iris. The iris contracts and expands to control the amount of light.


The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to the ciliary body (60–70% water, about 6% fat, and more protein than any other tissue in the eye). The lens is colored slightly yellow. The lens absorbs approximately 8% of the visible light (infrared and ultraviolet light are absorbed by proteins in the lens).

The innermost membrane is the retina. When the eye is properly focused, light from an object outside the eye is imaged on the retina. Vision is possible because of the distribution of discrete light receptors on the surface of the retina: cones and rods (6–7 million cones, 75–150 million rods).

Cones: located in the central part of the retina (the fovea); sensitive to colors; vision of detail; each cone is linked to its own nerve; cone vision = photopic or bright-light vision.

Fovea = the place where the image of the object of interest falls on


Rods: distributed over all the retina surface; several rods are connected to a single nerve; not specialized in detail vision; serve to give a general, overall picture of the field of view; not involved in color vision; sensitive to low levels of illumination.

Blind spot: region without receptors.


Image formation in the eye

Ordinary photographic camera: the lens has a fixed focal length; focusing at various distances is done by modifying the distance between the lens and the image plane (where the film or imaging chip is located).

Human eye: the distance between the lens and the retina (the imaging region) is fixed; the focal length needed to achieve proper focus is obtained by varying the shape of the lens (the fibers in the ciliary body accomplish this, flattening or thickening the lens for distant or near objects, respectively).

distance between lens and retina along the visual axis = 17 mm

range of focal length = 14 mm to 17 mm


Illustration of the Mach band effect. Perceived intensity is not a simple function of actual intensity.


All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter


Optical illusions


ALBASTRU = BLUE, VERDE = GREEN, GALBEN = YELLOW, ROSU = RED, PORTOCALIU = ORANGE, GALBEN = YELLOW, VISINIU = BURGUNDY, ALB = WHITE


Light

• achromatic or monochromatic light – light that is void of color; the attribute of such light is its intensity, or amount; the term gray level is used to describe monochromatic intensity, because it ranges from black, to grays, and finally to white

• chromatic light spans the electromagnetic energy spectrum from approximately 0.43 to 0.79 μm

Quantities that describe the quality of a chromatic light source:

• radiance – the total amount of energy that flows from the light source; usually measured in watts (W)

• luminance – measured in lumens (lm); gives a measure of the amount of energy an observer perceives from a light source


For example, light emitted from a source operating in the far infrared region of the spectrum could have significant energy (radiance), but an observer would hardly perceive it; its luminance would be almost zero.

brightness – a subjective descriptor of light perception that is practically impossible to measure. It embodies the achromatic notion of intensity and is one of the key factors in describing color sensation.


the physical meaning is determined by the source of the image

    f : D → ℝ,   f = f(x, y)

Image generated from a physical process: f(x, y) – proportional to the energy radiated by the physical source.

f(x, y) is characterized by two components:

i(x, y) = the illumination component – the amount of source illumination incident on the scene being viewed

r(x, y) = the reflectance component – the amount of illumination reflected by the objects in the scene

    f(x, y) = i(x, y) · r(x, y)

    0 < i(x, y) < ∞,   0 < r(x, y) < 1


r(x, y) = 0 – total absorption; r(x, y) = 1 – total reflectance

i(x, y) – determined by the illumination source

r(x, y) – determined by the characteristics of the imaged objects

In practice:

    L_min ≤ l = f(x, y) ≤ L_max,   L_min = i_min · r_min,   L_max = i_max · r_max

Typical indoor values, without additional illumination: L_min ≈ 10, L_max ≈ 1000.

The interval [L_min, L_max] is called the gray (or intensity) scale. Common practice is to shift this interval to [0, L − 1], where l = 0 is black and l = L − 1 is white; all intermediate values are shades of gray.


Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed scene

Converting a continuous image f to digital form:

- digitizing the coordinate values (x, y) is called sampling

- digitizing the amplitude values f(x, y) is called quantization


Continuous image projected onto a sensor array; result of image sampling and quantization.


Representing Digital Images: (x, y), x = 0, 1, …, M−1, y = 0, 1, …, N−1 – spatial variables or spatial coordinates

    f(x, y) = [ f(0,0)      f(0,1)      …   f(0,N−1)
                f(1,0)      f(1,1)      …   f(1,N−1)
                  ⋮            ⋮                ⋮
                f(M−1,0)    f(M−1,1)    …   f(M−1,N−1) ]

Each element of this matrix is called an image element, or pixel.

In matrix notation:

    A = [ a_{0,0}      a_{0,1}      …   a_{0,N−1}
          a_{1,0}      a_{1,1}      …   a_{1,N−1}
            ⋮             ⋮                 ⋮
          a_{M−1,0}    a_{M−1,1}    …   a_{M−1,N−1} ],   a_{i,j} = f(x = i, y = j) = f(i, j)

f(0,0) – the upper left corner of the image


M, N – positive integers; L = 2^k; a_{i,j} ∈ [0, L − 1]

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system Upper limit ndash determined by saturation lower limit - noise


Number of bits required to store a digitized image:

    b = M × N × k;   for M = N:  b = N² × k

When an image can have 2^k intensity levels, the image is referred to as a k-bit image; an image with 256 discrete intensity values is an 8-bit image.


Spatial and Intensity Resolution

Spatial resolution – the smallest discernible detail in an image

Measures: line pairs per unit distance; dots (pixels) per unit distance.

Image resolution = the largest number of discernible line pairs per unit distance (e.g., 100 line pairs per mm).

Dots per unit distance are commonly used in printing and publishing. In the US, this measure is expressed in dots per inch (dpi); newspapers are printed at 75 dpi, glossy brochures at 175 dpi.

Intensity resolution – the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations

L = 2^k; most common: k = 8

Intensity resolution in practice is given by k (number of bits used to quantize intensity)


Fig. 1 – Reducing spatial resolution: 1250 dpi (upper left), 300 dpi (upper right), 150 dpi (lower left), 72 dpi (lower right).


Reducing the number of gray levels: 256, 128, 64, 32


Reducing the number of gray levels: 16, 8, 4, 2


Image Interpolation – used in zooming, shrinking, rotating, and geometric corrections

Shrinking and zooming – image resizing – are image resampling methods

Interpolation is the process of using known data to estimate values at unknown locations

Suppose we have an image of size 500 × 500 pixels that has to be enlarged 1.5 times, to 750 × 750 pixels. One way to do this is to create an imaginary 750 × 750 grid with the same spacing as the original, and then shrink it so that it fits exactly over the original image. The pixel spacing in the 750 × 750 grid will be less than in the original image.

Problem: assignment of intensity levels in the new 750 × 750 grid.

Nearest neighbor interpolation: assign to every point in the new grid (750 × 750) the intensity of the closest pixel (nearest neighbor) from the old/original grid (500 × 500). This technique has the tendency to produce undesirable effects, like severe distortion of straight edges.


Bilinear interpolation – assign to the new (x, y) location the intensity

    v(x, y) = a·x + b·y + c·x·y + d

where the four coefficients are determined from the 4 equations in 4 unknowns that can be written using the 4 nearest neighbors of point (x, y). Bilinear interpolation gives much better results than nearest neighbor interpolation, with a modest increase in computational effort.

Bicubic interpolation – assign to the new (x, y) location an intensity that involves the 16 nearest neighbors of the point:

    v(x, y) = Σ_{i=0}^{3} Σ_{j=0}^{3} c_{ij} · x^i · y^j

The coefficients c_{ij} are obtained by solving a 16×16 linear system written using the intensity levels of the 16 nearest neighbors of (x, y).
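To make the nearest neighbor and bilinear schemes concrete, here is a minimal NumPy sketch (an illustration, not code from the course materials; the function names are ours):

import numpy as np

def resize_nearest(img, new_h, new_w):
    # Nearest neighbor: copy the intensity of the closest input pixel.
    h, w = img.shape
    rows = np.clip(np.round(np.arange(new_h) * h / new_h).astype(int), 0, h - 1)
    cols = np.clip(np.round(np.arange(new_w) * w / new_w).astype(int), 0, w - 1)
    return img[rows[:, None], cols[None, :]]

def resize_bilinear(img, new_h, new_w):
    # Bilinear: v(x, y) = a*x + b*y + c*x*y + d on each cell, i.e. a weighted
    # combination of the 4 nearest neighbors of the back-projected point.
    h, w = img.shape
    x = np.arange(new_h) * (h - 1) / (new_h - 1)      # fractional row coords
    y = np.arange(new_w) * (w - 1) / (new_w - 1)      # fractional col coords
    x0 = np.floor(x).astype(int); x1 = np.minimum(x0 + 1, h - 1)
    y0 = np.floor(y).astype(int); y1 = np.minimum(y0 + 1, w - 1)
    dx = (x - x0)[:, None]; dy = (y - y0)[None, :]
    f00 = img[x0][:, y0]; f01 = img[x0][:, y1]
    f10 = img[x1][:, y0]; f11 = img[x1][:, y1]
    return (f00 * (1 - dx) * (1 - dy) + f01 * (1 - dx) * dy
            + f10 * dx * (1 - dy) + f11 * dx * dy)

img = np.arange(16, dtype=float).reshape(4, 4)        # toy 4x4 "image"
print(resize_bilinear(img, 6, 6).round(2))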


Generally bicubic interpolation does a better job of preserving fine detail than the

bilinear technique Bicubic interpolation is the standard used in commercial image editing

programs such as Adobe Photoshop and Corel Photopaint

Figure 2 (a) is the same as Fig 1 (d) which was obtained by reducing the resolution of

the 1250 dpi in Fig 1(a) to 72 dpi (the size shrank from 3692 times 2812 to 213 times 162) and

then zooming the reduced image back to its original size To generate Fig 1(d) nearest

neighbor interpolation was used (both for shrinking and zooming)

Figures 2(b) and (c) were generated using the same steps, but using bilinear and bicubic interpolation, respectively. Figures 2(d)–(f) were obtained by reducing the resolution from 1250 dpi to 150 dpi (instead of 72 dpi).


Fig. 2 – Interpolation examples for zooming and shrinking (nearest neighbor, bilinear, bicubic)


Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors:

    (x+1, y), (x−1, y), (x, y+1), (x, y−1)

This set of pixels, called the 4-neighbors of p, is denoted by N4(p).

The 4 diagonal neighbors of p have coordinates

    (x+1, y+1), (x+1, y−1), (x−1, y+1), (x−1, y−1)

and are denoted ND(p)

The horizontal vertical and diagonal neighbors are called the 8-neighbors of p denoted

N8 (p)

If (xy) is on the border of the image some of the neighbor locations in ND(p) and N8(p)

fall outside the image


Adjacency Connectivity Regions Boundaries

Denote by V the set of intensity levels used to define adjacency:

- in a binary image, V ⊆ {0, 1} (e.g., V = {1} when referring to pixels with value 1)

- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, 1, …, 255}

We consider 3 types of adjacency:

(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if q ∈ N4(p)

(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if q ∈ N8(p)

(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if

    q ∈ N4(p), or

    q ∈ ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency It is introduced to eliminate the

ambiguities that often arise when 8-adjacency is used Consider the example


binary image, V = {1} (the arrangement is shown three times: the pixels, the ambiguous 8-adjacency with dashed paths, and the m-adjacency):

    0 1 1
    0 1 0
    0 0 1

The three pixels at the top (first line) in the above example show multiple (ambiguous)

8-adjacency as indicated by the dashed lines This ambiguity is removed by using

m-adjacency


A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

    (x_0, y_0), (x_1, y_1), …, (x_n, y_n)

where (x_0, y_0) = (x, y), (x_n, y_n) = (s, t), and the pixels (x_i, y_i) and (x_{i−1}, y_{i−1}) are adjacent for i = 1, 2, …, n.

The length of the path is n. If (x_0, y_0) = (x_n, y_n), the path is closed.

Depending on the type of adjacency considered, the paths are 4-, 8-, or m-paths.

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image R is a region of the image if R is a connected set

Two regions R1 and R2 are said to be adjacent if R1 ∪ R2 forms a connected set. Regions that are not adjacent are said to be disjoint. When referring to regions, only 4- and 8-adjacency are considered.


Suppose that an image contains K disjoint regions R_k, k = 1, …, K, none of which touches the image border. Let R_u denote the union of all the K regions, R_u = ∪_{k=1}^{K} R_k, and let (R_u)^c denote its complement.

We call all the points in R_u the foreground of the image, and the points in (R_u)^c the background of the image.

The boundary (border or contour) of a region R is the set of points that are adjacent to points in the complement of R, (R)^c. The border of an image is the set of pixels in the region that have at least one background neighbor. This definition is referred to as the inner border, to distinguish it from the notion of outer border, which is the corresponding border in the background.


Distance measures

For pixels p, q, and z, with coordinates (x, y), (s, t), and (v, w), respectively, D is a distance function or metric if:

(a) D(p, q) ≥ 0 (D(p, q) = 0 iff p = q)

(b) D(p, q) = D(q, p)

(c) D(p, z) ≤ D(p, q) + D(q, z)

The Euclidean distance between p and q is defined as

    D_e(p, q) = [(x − s)² + (y − t)²]^{1/2}

The pixels q for which D_e(p, q) ≤ r are the points contained in a disk of radius r centered at (x, y).


The D4 distance (also called city-block distance) between p and q is defined as

    D4(p, q) = |x − s| + |y − t|

The pixels q for which D4(p, q) ≤ r form a diamond centered at (x, y). For example, the pixels with D4 ≤ 2:

            2
          2 1 2
        2 1 0 1 2
          2 1 2
            2

The pixels with D4 = 1 are the 4-neighbors of (x y)

The D8 distance (called the chessboard distance) between p and q is defined as

    D8(p, q) = max(|x − s|, |y − t|)

The pixels q for which D8(p, q) ≤ r form a square centered at (x, y).


For example, the pixels with D8 ≤ 2:

    2 2 2 2 2
    2 1 1 1 2
    2 1 0 1 2
    2 1 1 1 2
    2 2 2 2 2

The pixels with D8 = 1 are the 8-neighbors of (x, y).

D4 and D8 distances are independent of any paths that might exist between p and q

because these distances involve only the coordinates of the point
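The three metrics, as a small Python sketch (illustrative helper functions; pixels are given as coordinate tuples):

def d_euclidean(p, q):
    # D_e(p, q) = [(x - s)^2 + (y - t)^2]^(1/2)
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def d4(p, q):
    # city-block distance
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):
    # chessboard distance
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (2, 1)
print(d_euclidean(p, q), d4(p, q), d8(p, q))   # 2.236..., 3, 2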


Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

Consider two 2×2 images (matrices):

    A = [ a11  a12 ]      B = [ b11  b12 ]
        [ a21  a22 ]          [ b21  b22 ]

Array product:

    A .* B = [ a11·b11   a12·b12 ]
             [ a21·b21   a22·b22 ]

Matrix product:

    A × B = [ a11·b11 + a12·b21    a11·b12 + a12·b22 ]
            [ a21·b11 + a22·b21    a21·b12 + a22·b22 ]

We assume array operations unless stated otherwise
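In NumPy, for example, the two products are written differently (a small illustration: * is the elementwise array product, @ the matrix product):

import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A * B)    # array product: [[ 5 12], [21 32]]
print(A @ B)    # matrix product: [[19 22], [43 50]]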


Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether a method is linear or nonlinear.

    H[f(x, y)] = g(x, y)

H is said to be a linear operator if

    H[a1·f1(x, y) + a2·f2(x, y)] = a1·H[f1(x, y)] + a2·H[f2(x, y)]

for any two images f1, f2 and any two scalars a1, a2.

Example of a nonlinear operator: the maximum value of the pixels of an image,

    H[f] = max{ f(x, y) }

Consider

    f1 = [ 0  2 ]     f2 = [ 6  5 ]     a = 1,  b = −1
         [ 2  3 ]          [ 4  7 ]


    max{ a·f1 + b·f2 } = max{ [ −6  −3 ] } = −2
                              [ −2  −4 ]

while

    a·max{ f1 } + b·max{ f2 } = 3 + (−1)·7 = −4 ≠ −2

so the max operator is nonlinear.
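The same check, as a short NumPy sketch (illustrative):

import numpy as np

f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 5], [4, 7]])
a, b = 1, -1

lhs = np.max(a * f1 + b * f2)             # H[a f1 + b f2] = -2
rhs = a * np.max(f1) + b * np.max(f2)     # a H[f1] + b H[f2] = -4
print(lhs, rhs, lhs == rhs)               # -2 -4 False -> max is nonlinear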

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise:

    g(x, y) = f(x, y) + η(x, y)

f(x, y) – the noiseless image; η(x, y) – the noise, uncorrelated and with zero average value.

For a random variable z with mean m E[(z-m)2] is the variance (E( ∙ ) is the expected

value) The covariance of two random variables z1 and z2 is defined as E[(z1-m1) (z2-m2)]

The two random variables are uncorrelated when their covariance is 0


Objective: reduce noise by adding a set of noisy images g_i(x, y) (a technique frequently used in image enhancement):

    ḡ(x, y) = (1/K) · Σ_{i=1}^{K} g_i(x, y)

If the noise satisfies the properties stated above, we have

    E{ḡ(x, y)} = f(x, y)   and   σ²_{ḡ(x,y)} = (1/K) · σ²_{η(x,y)}

where E{ḡ(x, y)} is the expected value of ḡ, and σ²_{ḡ(x,y)} and σ²_{η(x,y)} are the variances of ḡ and η, respectively. The standard deviation (square root of the variance) at any point in the average image is

    σ_{ḡ(x,y)} = (1/√K) · σ_{η(x,y)}


As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because E{ḡ(x, y)} = f(x, y), this means that ḡ(x, y) approaches f(x, y) as the number of noisy images used in the averaging process increases.

An important application of image averaging is in the field of astronomy, where imaging under very low light levels frequently causes sensor noise to render single images virtually useless for analysis. Figure 3(a) shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels. Figures 3(b)-(f) show the result of averaging 5, 10, 20, 50, and 100 images, respectively.
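A minimal simulation of this effect (an illustrative sketch; the constant test image, the seed, and the sizes are ours, chosen to mirror the sigma = 64 example):

import numpy as np

rng = np.random.default_rng(0)
f = np.full((256, 256), 100.0)            # noiseless image f(x, y)

def averaged(K):
    # average K images corrupted by zero-mean Gaussian noise (sigma = 64)
    g = [f + rng.normal(0, 64, f.shape) for _ in range(K)]
    return np.mean(g, axis=0)

for K in (1, 5, 10, 20, 50, 100):
    print(K, round(averaged(K).std(), 2))  # std decreases roughly as 64/sqrt(K)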


Fig. 3 – Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (upper left); results of averaging 5, 10, 20, 50, and 100 noisy images.


A frequent application of image subtraction is in the enhancement of differences between

images

Fig. 4 – (a) Infrared image of the Washington, DC area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images.

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between images (a) and (b); black (0) values in Figure 4(c) indicate locations where there is no difference between images (a) and (b).


Mask mode radiography: g(x, y) = f(x, y) − h(x, y)

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images, called live images (denoted f(x, y)), of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium.

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed


Fig. 5 – Angiography subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c), enhanced.


An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form g(x, y) = f(x, y) · h(x, y)

f(x, y) – the "perfect image"; h(x, y) – the shading function

When the shading function is known:

    f(x, y) = g(x, y) / h(x, y)

If h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.


Fig. 6 – Shading correction: (a) shaded image of a tungsten filament, magnified 130 times; (b) shading pattern; (c) corrected image.


Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

Fig. 7 – (a) digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b).
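Both multiplication-based operations, as a short sketch on synthetic data (the shading pattern and the ROI below are invented for illustration):

import numpy as np

rng = np.random.default_rng(1)
f = rng.uniform(50, 200, (128, 128))      # hypothetical "perfect" image

# Shading correction: the sensor produces g = f * h; dividing by the
# (known or estimated) shading function h recovers f.
x = np.linspace(0.5, 1.0, 128)
h = np.outer(x, x)                        # synthetic shading pattern
g = f * h
f_corrected = g / h
print(np.allclose(f_corrected, f))        # True

# ROI masking: multiply by a mask image with 1's in the ROI, 0's elsewhere.
mask = np.zeros_like(f)
mask[32:96, 32:96] = 1.0                  # rectangular ROI
roi = g * mask                            # keeps the ROI, zeroes the rest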


In practice, most images are displayed using 8 bits, so the image values are in the range [0, 255]. For TIFF and JPEG images, conversion to this range is automatic; the conversion depends on the system used. The difference of two images can produce an image with values in the range [−255, 255], and the addition of two images gives values in the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure: compute

    f_m = f − min(f)

    f_s = K · f_m / max(f_m)

which yields values in the full range [0, K] (K = 255 for 8-bit images).
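As a sketch (illustrative; to_full_range is our name):

import numpy as np

def to_full_range(f, K=255):
    # shift to zero, then scale so the values span [0, K]
    fm = f - f.min()
    return (K * fm / fm.max()).astype(np.uint8)

diff = np.array([[-200, 0], [55, 255]], dtype=float)   # e.g. an image difference
print(to_full_range(diff))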


Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels:

    s = T(z)

where z is the intensity of a pixel in the original image and s is the intensity of the corresponding pixel in the processed image.


Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

    g(x, y) = (1 / (m·n)) · Σ_{(r,c) ∈ S_xy} f(r, c)

The net effect is to perform local blurring in the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
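A direct sketch of this neighborhood averaging (illustrative; zero padding at the borders is one possible choice):

import numpy as np

def local_average(img, m=3, n=3):
    # replace each pixel by the mean of its m x n neighborhood S_xy
    out = np.zeros(img.shape, dtype=float)
    pm, pn = m // 2, n // 2
    padded = np.pad(img.astype(float), ((pm, pm), (pn, pn)))  # zero padding
    for dr in range(m):
        for dc in range(n):
            out += padded[dr:dr + img.shape[0], dc:dc + img.shape[1]]
    return out / (m * n)

img = np.eye(5) * 9
print(local_average(img))    # the sharp diagonal is blurred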


Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules)

A geometric transformation consists of 2 basic operations

1. a spatial transformation of coordinates

2. intensity interpolation that assigns intensity values to the spatially transformed pixels

The coordinate system transformation: (x, y) = T[(v, w)]

(v, w) – pixel coordinates in the original image

(x, y) – pixel coordinates in the transformed image


T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform:

    [x y 1] = [v w 1] · T = [v w 1] · [ t11  t12  0 ]
                                      [ t21  t22  0 ]        (AT)
                                      [ t31  t32  1 ]

that is, x = t11·v + t21·w + t31 and y = t12·v + t22·w + t32.

This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3×3 matrix equal to the matrix product of the scaling, rotation, and translation matrices from Table 1.


Affine transformations


The preceding transformations relocate pixels of an image to new locations. To complete the process, we have to assign intensity values to those locations. This task is done using intensity interpolation (nearest neighbor, bilinear, or bicubic interpolation).

In practice we can use equation (AT) in two basic ways

Forward mapping: scan the pixels of the input image (v, w); compute the new spatial location (x, y) of the corresponding pixel in the new image using (AT) directly.

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)


Inverse mapping: scans the output pixel locations and, at each location (x, y), computes the corresponding location in the input image:

    (v, w) = T^{-1}[(x, y)]

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in numerous commercial implementations of spatial transformations (MATLAB, for example).
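A compact sketch of inverse mapping, with the row-vector convention [x y 1] = [v w 1]·T used above (illustrative; nearest neighbor interpolation is used for simplicity, and the rotation matrix is assumed to follow the usual affine form):

import numpy as np

def affine_inverse_map(img, T):
    # scan output locations (x, y), map them back with T^{-1},
    # and take the nearest input pixel
    h, w = img.shape
    Tinv = np.linalg.inv(T)
    out = np.zeros_like(img)
    for x in range(h):
        for y in range(w):
            v, u, _ = np.array([x, y, 1.0]) @ Tinv
            vi, ui = int(round(v)), int(round(u))
            if 0 <= vi < h and 0 <= ui < w:   # skip points outside the input
                out[x, y] = img[vi, ui]
    return out

t = np.deg2rad(15)                                  # 15-degree rotation
T = np.array([[ np.cos(t), np.sin(t), 0],
              [-np.sin(t), np.cos(t), 0],
              [ 0,         0,         1]])
img = np.zeros((64, 64)); img[24:40, 24:40] = 255   # white square
rotated = affine_inverse_map(img, T)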


Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more images taken at approximately the same time, but using different imaging systems (e.g., an MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image


How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points, both on the input image and on the reference image. A simple model, based on a bilinear approximation, is given by

    x = c1·v + c2·w + c3·v·w + c4
    y = c5·v + c6·w + c7·v·w + c8

where (v, w) and (x, y) are the coordinates of the tie points (we get an 8×8 linear system for the coefficients c_i).


When 4 tie points are insufficient to obtain satisfactory registration, an approach used frequently is to select a larger number of tie points and, using this new set of tie points, subdivide the image into rectangular regions marked by groups of 4 tie points. On the subregions marked by 4 tie points, we apply the transformation model described above.

The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometric distortion.
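A sketch of estimating the eight coefficients from 4 tie points (illustrative; the tie-point values below are invented):

import numpy as np

def fit_bilinear_warp(src_pts, dst_pts):
    # x = c1 v + c2 w + c3 v w + c4,  y = c5 v + c6 w + c7 v w + c8:
    # build and solve the 8x8 linear system
    A, b = [], []
    for (v, w), (x, y) in zip(src_pts, dst_pts):
        A.append([v, w, v * w, 1, 0, 0, 0, 0]); b.append(x)
        A.append([0, 0, 0, 0, v, w, v * w, 1]); b.append(y)
    return np.linalg.solve(np.array(A, float), np.array(b, float))

src = [(0, 0), (0, 100), (100, 0), (100, 100)]   # tie points, input image
dst = [(2, 3), (1, 105), (103, -1), (101, 99)]   # same points, reference image
c = fit_bilinear_warp(src, dst)
v, w = 50, 50
print(c[0]*v + c[1]*w + c[2]*v*w + c[3],
      c[4]*v + c[5]*w + c[6]*v*w + c[7])         # where (50, 50) maps to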


(a) – reference image; (b) – geometrically distorted image; (c) – registered image; (d) – difference between (a) and (c)


Probabilistic Methods

z_i = the values of all possible intensities in an M×N digital image, i = 0, 1, …, L−1

p(z_k) = the probability that the intensity level z_k occurs in the given image:

    p(z_k) = n_k / (M·N)

where n_k is the number of times that intensity z_k occurs in the image (M·N is the total number of pixels in the image). Note that

    Σ_{k=0}^{L−1} p(z_k) = 1

The mean (average) intensity of an image is given by

    m = Σ_{k=0}^{L−1} z_k · p(z_k)


The variance of the intensities is

    σ² = Σ_{k=0}^{L−1} (z_k − m)² · p(z_k)

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

    μ_n(z) = Σ_{k=0}^{L−1} (z_k − m)^n · p(z_k)

(μ_0(z) = 1, μ_1(z) = 0, μ_2(z) = σ²)

μ_3(z) > 0: the intensities are biased to values higher than the mean

μ_3(z) < 0: the intensities are biased to values lower than the mean


μ_3(z) = 0: the intensities are distributed approximately equally on both sides of the mean

Fig. 1 – (a) low contrast; (b) medium contrast; (c) high contrast

Figure 1(a) – standard deviation 14.3 (variance = 204.5)

Figure 1(b) – standard deviation 31.6 (variance = 998.6)

Figure 1(c) – standard deviation 49.2 (variance = 2420.6)
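A histogram-based sketch of these statistics (illustrative; the test image is synthetic):

import numpy as np

def intensity_stats(img, L=256):
    # histogram-based mean, variance and third central moment
    z = np.arange(L, dtype=float)
    n_k = np.bincount(img.ravel(), minlength=L)
    p = n_k / img.size                  # p(z_k) = n_k / (M*N), sums to 1
    m = np.sum(z * p)                   # mean intensity
    mu2 = np.sum((z - m) ** 2 * p)      # variance (a contrast measure)
    mu3 = np.sum((z - m) ** 3 * p)      # skew of the intensity distribution
    return m, mu2, mu3

rng = np.random.default_rng(2)
img = rng.integers(80, 176, (64, 64), dtype=np.uint8)   # mid-contrast test image
m, var, mu3 = intensity_stats(img)
print(round(m, 1), round(np.sqrt(var), 1), round(mu3, 1))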


Intensity Transformations and Spatial Filtering

    g(x, y) = T[f(x, y)]

f(x, y) – input image; g(x, y) – output image; T – an operator on f, defined over a neighborhood of (x, y).

- the neighborhood of the point (x, y), S_xy, usually is rectangular, centered on (x, y), and much smaller in size than the image


- spatial filtering: the operator T (the neighborhood and the operation applied on it) is called a spatial filter (spatial mask, kernel, template, or window)

When S_xy has size 1×1 (a single pixel), T becomes an intensity (gray-level or mapping) transformation function:

    s = T(r)

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2 (left): T produces an output image of higher contrast than the original, by darkening the intensity levels below a value k and brightening the levels above k – this technique is called contrast stretching.

Fig. 2 – Intensity transformation functions: left – contrast stretching; right – thresholding function


Figure 2 (right): T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L−1] is obtained using the function s = T(r) = L − 1 − r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image


Original image and its negative
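As a sketch (illustrative):

import numpy as np

def negative(img, L=256):
    # s = L - 1 - r, applied to every pixel
    return (L - 1 - img.astype(int)).astype(np.uint8)

img = np.array([[0, 64], [128, 255]], dtype=np.uint8)
print(negative(img))    # [[255 191], [127 0]]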


Log Transformations:

    s = T(r) = c · log(1 + r),   c – a positive constant, r ≥ 0

Some basic intensity transformation functions


This transformation maps a narrow range of low intensity values in the input into a wider range of output values. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values. The opposite is true of the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a) – intensity values in the range 0 to 1.5 × 10⁶.

Figure 4(b) – log transformation of Figure 4(a), with c = 1 – range 0 to 6.2.

Fig. 4 – (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.
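A sketch of the log transformation, with c chosen so the output spans [0, 255] (one common choice; illustrative):

import numpy as np

def log_transform(img, L=256):
    # s = c log(1 + r), c scaled so that the maximum maps to L - 1
    r = img.astype(float)
    c = (L - 1) / np.log(1 + r.max())
    return (c * np.log1p(r)).astype(np.uint8)

spectrum = np.array([[0, 10], [1000, 1500000]], dtype=float)
print(log_transform(spectrum))   # a large dynamic range compressed to [0, 255]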


Power-Law (Gamma) Transformations

    s = T(r) = c · r^γ,   c, γ – positive constants

(sometimes written as s = c · (r + ε)^γ, to account for an offset when the input is zero)

Plots of the gamma transformation for different values of γ (c = 1)


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. The curves with γ > 1 have the opposite effect of those generated with values of γ < 1.

c = γ = 1: identity transformation.

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
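A sketch of the power-law transformation, on intensities normalized to [0, 1] (a common convention; illustrative):

import numpy as np

def gamma_transform(img, gamma, c=1.0, L=256):
    # s = c * r^gamma, computed on r normalized to [0, 1]
    r = img.astype(float) / (L - 1)
    s = c * np.power(r, gamma)
    return np.clip(s * (L - 1), 0, L - 1).astype(np.uint8)

img = np.array([[16, 64], [128, 240]], dtype=np.uint8)
print(gamma_transform(img, 0.4))   # gamma < 1 brightens dark regions
print(gamma_transform(img, 3.0))   # gamma > 1 darkens the image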


(a) – aerial image; (b)–(d) – results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively


Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

Fig. 5


    T(r) = (s1/r1)·r                                    for r ∈ [0, r1]
    T(r) = s1 + ((s2 − s1)/(r2 − r1))·(r − r1)          for r ∈ [r1, r2]
    T(r) = s2 + ((L − 1 − s2)/(L − 1 − r2))·(r − r2)    for r ∈ [r2, L − 1]


r1 = s1 and r2 = s2: identity transformation (no change)

r1 = r2, s1 = 0, s2 = L − 1: thresholding function

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c) – contrast stretching, obtained by setting the parameters (r1, s1) = (r_min, 0) and (r2, s2) = (r_max, L − 1), where r_min and r_max denote the minimum and maximum gray levels in the image, respectively. The transformation function stretched the levels linearly from their original range to the full range [0, L − 1].

Figure 5(d) – the thresholding function was used, with (r1, s1) = (m, 0) and (r2, s2) = (m, L − 1), where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times
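A sketch of the piecewise-linear stretch above (illustrative; assumes 0 < r1 < r2 < L − 1):

import numpy as np

def contrast_stretch(img, r1, s1, r2, s2, L=256):
    # piecewise-linear mapping through (0,0), (r1,s1), (r2,s2), (L-1,L-1)
    r = img.astype(float)
    lo = (s1 / r1) * r
    mid = s1 + (s2 - s1) / (r2 - r1) * (r - r1)
    hi = s2 + (L - 1 - s2) / (L - 1 - r2) * (r - r2)
    s = np.where(r < r1, lo, np.where(r <= r2, mid, hi))
    return s.astype(np.uint8)

img = np.array([[90, 110], [130, 150]], dtype=np.uint8)    # low-contrast image
print(contrast_stretch(img, r1=90, s1=0, r2=150, s2=255))  # full-range stretch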


Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing (a code sketch follows below):

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities (Figure 3.11(a));

2. brighten (or darken) the desired range of intensities, but leave all other intensities in the image unchanged (Figure 3.11(b)).
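A sketch of both approaches (illustrative; A and B delimit the range of interest):

import numpy as np

def slice_binary(img, A, B, L=256):
    # approach 1: white inside [A, B], black elsewhere (binary output)
    return np.where((img >= A) & (img <= B), L - 1, 0).astype(np.uint8)

def slice_preserve(img, A, B, value=255):
    # approach 2: set [A, B] to a chosen value, leave the rest unchanged
    out = img.copy()
    out[(img >= A) & (img <= B)] = value
    return out

img = np.array([[10, 120], [180, 240]], dtype=np.uint8)
print(slice_binary(img, 150, 255))
print(slice_preserve(img, 150, 255))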


Figure 6 (left) – aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the scale of intensities. This type of enhancement produces a binary image

Highlights intensity range [A, B] and reduces all other intensities to a lower level.

Highlights range [A, B] and preserves all other intensities.


which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example).

In Figure 6 (right), the second technique was used: a band of intensities in the mid-gray range around the mean intensity was set to black; the other intensities remain unchanged.

Fig. 6 – Aortic angiogram and intensity-sliced versions


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2.

This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit, plane 8 – the highest-order bit).


The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all the intensities between 0 and 127 to 0, and all levels between 128 and 255 to 1.

The bit-slicing technique is useful for analyzing the relative importance of each bit in the image – it helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
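A sketch of extracting the planes with bit shifts (illustrative):

import numpy as np

def bit_planes(img):
    # the 8 binary planes of an 8-bit image (plane index 0 = lowest-order bit)
    return [(img >> k) & 1 for k in range(8)]

img = np.array([[0, 127], [128, 255]], dtype=np.uint8)
planes = bit_planes(img)
print(planes[7])                 # highest-order plane: thresholds at 128

# coarse reconstruction from the two highest-order planes
approx = (planes[7] << 7) | (planes[6] << 6)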

  • DIP 1 2017
  • DIP 02 (2017)
Page 17: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Examples of Fields that Use DIP

Images can be classified according to their sources (visual X-ray hellip)

Energy sources for images electromagnetic energy spectrum acoustic ultrasonic electronic computer- generated

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Electromagnetic waves can be thought as propagating sinusoidal

waves of different wavelength or as a stream of massless particles

each moving in a wavelike pattern with the speed of light Each

massless particle contains a certain amount (bundle) of energy Each

bundle of energy is called a photon If spectral bands are grouped

according to energy per photon we obtain the spectrum shown in the

image above ranging from gamma-rays (highest energy) to radio

waves (lowest energy)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Gamma-Ray Imaging

Nuclear medicine astronomical observations

Nuclear medicine

the approach is to inject a patient with a radioactive isotope that emits gamma rays as it decays

Images are produced from the emissions collected by gamma-ray detectors

Images of this sort are used to locate sites of bone pathology (infections tumors)

PET (positron emision tomography) ndash the patient is given a radioactive isotope that emits positrons as it decays

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Examples of gamma-ray imaging

Bone scan PET image

Digital Image ProcessingDigital Image Processing

Week 1Week 1

X-ray imaging

Medical diagnosticindustry astronomy

A X-ray tube is a vacuum tube with a cathode and an anode The cathode is heated causing free electrons to be released The electrons flows at high speed to the positively charged anode When the electrons strike a nucleus energy is released in the form of a X-ray radiation The energy (penetreting power) of the X-rays is controlled by a voltage applied across the anode and by a curent applied to the filament in the cathode

The intensity of the X-rays is modified by absorbtion as they pass through the patient and the resulting energy falling develops it much in the same way that light develops photographic film

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Angiography = contrast-enhancement radiography

Angiograms = images of blood vessels

A catheter is inserted into an artery or vein in the groin The catheter is threaded into the blood vessel and guided to the area to be studied When it reaches the area to be studied a X-ray contrast medium is injected through the catheter This enhances contrast of the blood vessels and enables radiologist to see anyirregularities or blockages

X-rays are used in CAT (computerized axial tomography)

X-rays used in industrial processes (examine circuit boards for flows in manifacturing)

Industrial CAT scans are useful when the parts can be penetreted by X-rays

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Examples of X-ray imaging

Chest X-rayAortic angiogram

Head CT Cygnus LoopCircuit boards

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Ultraviolet Band

Litography industrial inspection microscopy biological imaging astronomical observations

Ultraviolet light is used in fluorescence microscopy

Ultraviolet light is not visible to human eye but when a photon of ultraviolet radiation collides with an electron in an atom of a fluorescent material it elevates the electron to a higher energy level After that the electron relaxes to a lower level and emits light in the form of a lower-energy photon in the visible (red) light regionFluorescence = emission of light by a substance that has absorbed light or

other electromagentic radiation of a different wavelengthFlourescence microscope = uses an excitation light to irradiate a prepared specimen

and then it separates the much weaker radiating fluorescent light from the brighter excitation light

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Visible and Infrared Bands

Light microscopy astronomy remote sensing industry law enforcement

LANDSAT satellite ndash obtained and transmitted images of the Earth from space for purpose of monitoring environmental conditions on the planet

Weather observations and prediction produce major applications of multispectral image from satellites

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Table 1 ndash Thematic bands in NASArsquos LANDSAT satellite

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Satellite images of Washington DC area in spectral bands of the Table 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Examples of light microscopy

Taxol (anticancer agent)magnified 250X

Cholesterol(40X)

Microprocessor(60X)

Nickel oxidethin film(600X)

Surface of audio CD(1750X)

Organicsuperconductor(450X)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Automated visual inspection of manufactured goods

a bc de f

a ndash a circuit board controllerb ndash packaged pillesc ndash bottlesd ndash air bubbles in clear-plastic producte ndash cerealf ndash image of intraocular implant

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Microwave Band

The dominant aplication of imaging in the microwave band ndash radar

bull radar has the ability to collect data over virtually any region at any time regardless of weather or ambient light conditions

bull some radar waves can penetrate clouds under certain conditions canpenetrate vegetation ice dry sand

bull sometimes radar is the only way to explore inaccessible regions of theEarthrsquos surface

An imaging radar works like a flash camera it provides its own illumination(microwave pulses) to light an area on the ground and take a snapshot image Instead of a camera lens a radar uses an antenna and a digital device torecord the images In a radar image one can see only the microwave energythat was reflected back toward the radar antenna

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Spaceborne radar image of mountains in southeast Tibet

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Radio Band

medicine astronomy

MRI = Magnetic Resonance Imaging

This technique places the patient in a powerful magnet and passes short pulses of radio waves through his or her body

Each pulse causes a responding pulse of radio waves to be emited by the patinet tissues

The location from which these signals originate and their strength are determined by a computer which produces a 2D picture of a section of the patient

Digital Image ProcessingDigital Image Processing

Week 1Week 1

MRI images of a human knee (left) and spine (right)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Images of the Crab Pulsar covering the electromagnetic spectrum

Gamma X-ray Optical Infrared Radio

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Other Imaging Modalities

acoustic imaging electron microscopy synthetic (computer-generated) imaging

Imaging using sound geological explorations industry medicine

Mineral and oil exploration

For image acquisition over land one of the main approaches is to use a large truck an a large flat steel plate The plate is pressed on the ground by the truck and the truck is vibratedthrough a frequency spectrum up to 100 Hz The strength and the speed of the returning sound waves are determined by the composition of the Earth below the surface These are analysed by a computer and images are generated from the resulting analysis

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Biometry - iris

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Biometry - fingerprint

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Face detection and recognition

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Gender identification

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image morphing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Fundamental Steps in DIP

methods whose input and output are images

methods whose inputs are images but whose outputs are attributes extracted from those images

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Outputs are images

bull image acquisition

bull image filtering and enhancement

bull image restoration

bull color image processing

bull wavelets and multiresolution processing

bull compression

bull morphological processing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Outputs are attributes

bull morphological processing

bull segmentation

bull representation and description

bull object recognition

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image acquisition - may involve preprocessing such as scaling

Image enhancement

bull manipulating an image so that the result is more suitable than the original for a specific operation

bull enhancement is problem oriented

bull there is no general sbquotheoryrsquo of image enhancement

bull enhancement use subjective methods for image emprovement

bull enhancement is based on human subjective preferences regarding what is a bdquogoodrdquo enhancement result

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image restoration

bull improving the appearance of an image

bull restoration is objective - the techniques for restoration are based on mathematical or probabilistic models of image degradation

Color image processing

bull fundamental concept in color models

bull basic color processing in a digital domain

Wavelets and multiresolution processing

representing images in various degree of resolution

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Compression

reducing the storage required to save an image or the bandwidth required to transmit it

Morphological processing

bull tools for extracting image components that are useful in the representation and description of shape

bull a transition from processes that output images to processes that outputimage attributes

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Segmentation

bull partitioning an image into its constituents parts or objects

bull autonomous segmentation is one of the most difficult tasks of DIP

bull the more accurate the segmentation the more likley recognition is to succeed

Representation and description (almost always follows segmentation)

bull segmentation produces either the boundary of a region or all the poits in the region itself

bull converting the data produced by segmentation to a form suitable for computer processing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

bull boundary representation the focus is on external shape characteristics such as corners or inflections

bull complete region the focus is on internal properties such as texture or skeletal shape

bull description is also called feature extraction ndash extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another

Object recognition

the process of assigning a label (eg bdquovehiclerdquo) to an object based on itsdescriptors

Knowledge database

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Simplified diagramof a cross sectionof the human eye

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Three membranes enclose the eye the cornea and sclera outer cover the choroid the retina

The cornea is a tough transparent tissue that covers the anterior surface of the eye

Continuous with the cornea the sclera is an opaque membrane that enclosesthe remainder of the optic globe

The choroid lies directly below the sclera This membrane contains a network of blood vessels (major nutrition of the eye) The choroid is pigmented and help reduce the amount of light entering the eyeThe choroid is divided (at its anterior extreme) into the ciliary body and the irisThe iris contracts and expands to control the amount of light

Digital Image ProcessingDigital Image Processing

Week 1Week 1

The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to ciliary body (60-70 water 6 fat protein) The lens is colored in slightly yellow The lens absorbs approximatively 8 of the visiblelight (infrared and ultraviolet light are absorbed by proteins in lens)

The innermost membrane is the retina When the eye is proper focusedlight from an object outside the eye is imaged on the retinaVision is possible because of the distribution of discrete light receptors on the surface of the retina cones and rods (6-7 milion cones 75-150 milion rods)

Cones located in the central part of the retina (fovea) they are sensitive to colors vision of detail each cone is link to its own nervecone vision = photopic or bright-light vision

Fovea = the place where the image of the object of interest falls on

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Rods distributed over al the retina surface several rods are contected toa single nerve not specialized in detail vision serve to give a general overall picture of the filed of viewnot involved in color visionsensitive to low level of illumination

Blind spot region without receptors

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image formation in the eye

Ordinary photographic camera the lens has fixed focal length focusing atvarious distances is done by modifying the distance between the lens and the image plane (were the film or imaging chip are located)

Human eye the distance between the lens and the retina (the imaging region)is fixed the focal length needed to achieve proper focus is obtained by varyingthe shape of the lens (the fibers in the ciliary body accomplish this flattening or thickening the lens for distant or near objects respectively

distance between lens and retina along visual axix = 17 mm

range of focal length = 14 mm to 17 mm

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Illustration of Mach band effectPercieved intensityis not a simple function of actual intensity

Digital Image ProcessingDigital Image Processing

Week 1Week 1

All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Optical illusions

Digital Image ProcessingDigital Image Processing

Week 1Week 1

ALBASTRU BLUEVERDE GREENGALBEN YELLOW ROSU REDPORTOCALIU ORANGEGALBEN YELLOWVISINIU BURGUNDYALB WHITE

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Light
• achromatic or monochromatic light - light that is void of color; the attribute of such light is its intensity, or amount
• gray level is used to describe monochromatic intensity, because it ranges from black, to grays, and finally to white
• chromatic light spans the electromagnetic energy spectrum from approximately 0.43 to 0.79 μm

Quantities that describe the quality of a chromatic light source:
• radiance: the total amount of energy that flows from the light source; it is usually measured in watts (W)
• luminance: measured in lumens (lm); gives a measure of the amount of energy an observer perceives from a light source

For example, light emitted from a source operating in the far infrared region of the spectrum could have significant energy (radiance), but an observer would hardly perceive it; its luminance would be almost zero.

• brightness: a subjective descriptor of light perception that is practically impossible to measure; it embodies the achromatic notion of intensity and is one of the key factors in describing color sensation

f(x, y), f : D → ℝ - the physical meaning of f is determined by the source of the image.

Image generated from a physical process: f(x, y) is proportional to the energy radiated by the physical source.

f(x, y) - characterized by two components:

i(x, y) = illumination component: the amount of source illumination incident on the scene being viewed

r(x, y) = reflectance component: the amount of illumination reflected by the objects in the scene

f(x, y) = i(x, y) · r(x, y)

0 < i(x, y) < ∞, 0 < r(x, y) < 1

r(x, y) = 0 - total absorption; r(x, y) = 1 - total reflectance

i(x, y) - determined by the illumination source

r(x, y) - determined by the characteristics of the imaged objects

The interval [Lmin, Lmax] is called the gray (or intensity) scale.

In practice:

Lmin ≤ l = f(x, y) ≤ Lmax, where Lmin = imin · rmin and Lmax = imax · rmax

indoor values without additional illumination: Lmin ≈ 10, Lmax ≈ 1000

Common practice: shift the interval [Lmin, Lmax] to [0, L-1]; l = 0 - black, l = L-1 - white

Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed

scene

converting a continuous image f to digital form:

- digitizing the coordinate values (x, y) is called sampling

- digitizing the amplitude values f(x, y) is called quantization

Continuous image projected onto a sensor array Result of image sampling and quantization

Representing Digital Images

(x, y), x = 0, 1, …, M-1, y = 0, 1, …, N-1 - spatial variables or spatial coordinates

f(x, y) =
[ f(0,0)     f(0,1)     …  f(0,N-1)
  f(1,0)     f(1,1)     …  f(1,N-1)
  …
  f(M-1,0)   f(M-1,1)   …  f(M-1,N-1) ]

Each element of this array is an image element (pixel). In matrix notation:

A =
[ a_{0,0}     a_{0,1}     …  a_{0,N-1}
  a_{1,0}     a_{1,1}     …  a_{1,N-1}
  …
  a_{M-1,0}   a_{M-1,1}   …  a_{M-1,N-1} ],   where a_{i,j} = f(x = i, y = j) = f(i, j)

f(0,0) - the upper left corner of the image

M, N > 0; L = 2^k; a_{i,j} ∈ [0, L-1]

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system. Upper limit - determined by saturation; lower limit - determined by noise.

Number of bits required to store a digitized image:

b = M · N · k;  for M = N,  b = N² · k

When an image can have 2^k intensity levels, the image is referred to as a k-bit image (an image with 256 discrete intensity values is an 8-bit image).

Spatial and Intensity Resolution

Spatial resolution - the smallest discernible detail in an image.

Measures: line pairs per unit distance; dots (pixels) per unit distance.

Image resolution = the largest number of discernible line pairs per unit distance

(eg 100 line pairs per mm)

Dots per unit distance are commonly used in printing and publishing

In the US, the measure is expressed in dots per inch (dpi)

(newspapers are printed at 75 dpi, glossy brochures at 175 dpi)

Intensity resolution - the smallest discernible change in intensity level.

The number of intensity levels (L) is determined by hardware considerations

L = 2^k; most common: k = 8

Intensity resolution in practice is given by k (number of bits used to quantize intensity)

Fig. 1 Reducing spatial resolution: 1250 dpi (upper left), 300 dpi (upper right), 150 dpi (lower left), 72 dpi (lower right)

Reducing the number of gray levels: 256, 128, 64, 32

Reducing the number of gray levels: 16, 8, 4, 2

Image Interpolation - used in zooming, shrinking, rotating, and geometric corrections

Shrinking, zooming - image resizing - image resampling methods

Interpolation is the process of using known data to estimate values at unknown locations

Suppose we have an image of size 500 × 500 pixels that has to be enlarged 1.5 times, to

750 × 750 pixels. One way to do this is to create an imaginary 750 × 750 grid with the

same pixel spacing as the original and then shrink it so that it fits exactly over the original

image. The pixel spacing in the 750 × 750 grid will be less than in the original image.

Problem: assignment of intensity levels in the new 750 × 750 grid.

Nearest neighbor interpolation: assign to every point in the new grid (750 × 750) the

intensity of the closest pixel (nearest neighbor) from the old/original grid (500 × 500).

This technique has the tendency to produce undesirable effects like severe distortion of

straight edges
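As a minimal sketch of this rule (not part of the original slides; the function name and test image are made up), assuming a grayscale image stored as a 2-D numpy array:

```python
import numpy as np

def nearest_neighbor_resize(img, new_h, new_w):
    """Nearest neighbor interpolation: each new-grid point takes the
    intensity of the closest pixel in the original grid."""
    h, w = img.shape
    rows = np.clip(np.round(np.arange(new_h) * h / new_h).astype(int), 0, h - 1)
    cols = np.clip(np.round(np.arange(new_w) * w / new_w).astype(int), 0, w - 1)
    return img[np.ix_(rows, cols)]

# Example: enlarge a 500 x 500 image 1.5 times, to 750 x 750.
img = np.random.randint(0, 256, (500, 500), dtype=np.uint8)
enlarged = nearest_neighbor_resize(img, 750, 750)
```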

Bilinear interpolation - assign to the new (x, y) location the intensity v(x, y) = a·x + b·y + c·x·y + d,

where the four coefficients are determined from the 4 equations in 4 unknowns that can

be written using the 4 nearest neighbors of point (x y)

Bilinear interpolation gives much better results than nearest neighbor interpolation with a

modest increase in computational effort
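A small illustration of that computation (a hypothetical helper, not from the slides; it assumes (x, y) lies strictly inside a numpy grayscale image, so the 4 neighbors are distinct):

```python
import numpy as np

def bilinear_at(img, x, y):
    # v(x, y) = a*x + b*y + c*x*y + d, with (a, b, c, d) obtained from the
    # 4 equations written for the 4 nearest neighbors of (x, y).
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = x0 + 1, y0 + 1  # assumes an interior point
    A = np.array([[x0, y0, x0 * y0, 1],
                  [x0, y1, x0 * y1, 1],
                  [x1, y0, x1 * y0, 1],
                  [x1, y1, x1 * y1, 1]], dtype=float)
    v = np.array([img[x0, y0], img[x0, y1],
                  img[x1, y0], img[x1, y1]], dtype=float)
    a, b, c, d = np.linalg.solve(A, v)
    return a * x + b * y + c * x * y + d
```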

Bicubic interpolation - assign to the new (x, y) location an intensity that involves the 16 nearest neighbors of the point:

v(x, y) = Σ_{i=0}^{3} Σ_{j=0}^{3} c_{ij} · x^i · y^j

The coefficients c_{ij} are obtained by solving a 16×16 linear system whose right-hand side contains the intensity levels of the 16 nearest neighbors of (x, y).

Generally, bicubic interpolation does a better job of preserving fine detail than the

bilinear technique. Bicubic interpolation is the standard used in commercial image editing

programs such as Adobe Photoshop and Corel Photopaint

Figure 2(a) is the same as Fig. 1(d), which was obtained by reducing the resolution of

the 1250 dpi image in Fig. 1(a) to 72 dpi (the size shrank from 3692 × 2812 to 213 × 162) and

then zooming the reduced image back to its original size. To generate Fig. 1(d), nearest

neighbor interpolation was used (both for shrinking and zooming).

Figures 2(b) and (c) were generated using the same steps, but using bilinear and bicubic

interpolation, respectively. Figures 2(d)-(f) were obtained by reducing the resolution

from 1250 dpi to 150 dpi (instead of 72 dpi)

Fig. 2 - Interpolation examples for zooming and shrinking (nearest neighbor, bilinear, bicubic)

Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors: (x+1, y), (x-1, y) (horizontal) and (x, y+1), (x, y-1) (vertical).

This set of pixels, called the 4-neighbors of p, is denoted by N4(p).

The 4 diagonal neighbors of p have coordinates (x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)

and are denoted ND(p)

The horizontal vertical and diagonal neighbors are called the 8-neighbors of p denoted

N8 (p)

If (xy) is on the border of the image some of the neighbor locations in ND(p) and N8(p)

fall outside the image
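A direct transcription of these definitions (sketch only; names are made up):

```python
def n4(p):
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(p):
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def n8(p):
    return n4(p) | nd(p)

def inside(p, M, N):
    # Filters out neighbor locations that fall outside an M x N image.
    x, y = p
    return 0 <= x < M and 0 <= y < N

valid_8_neighbors = [q for q in n8((0, 0)) if inside(q, 5, 5)]
```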

Adjacency, Connectivity, Regions, Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image, V ⊆ {0, 1} (V = {0} or V = {1})

- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, 1, …, 255}

We consider 3 types of adjacency

(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if q ∈ N4(p)

(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if q ∈ N8(p)

(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if

q ∈ N4(p), or

q ∈ ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency. It is introduced to eliminate the ambiguities that often arise when 8-adjacency is used. Consider the example below:

binary image, with V = {1}:

0 1 1
0 1 0
0 0 1

(the arrangement is shown three times in the original figure: as given, with the 8-adjacency paths drawn, and with the m-adjacency path)

The three pixels at the top (first line) in the above example show multiple (ambiguous) 8-adjacency paths, as indicated by the dashed lines in the original figure. This ambiguity is removed by using m-adjacency.

A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

(x0, y0), (x1, y1), …, (xn, yn)

where (x0, y0) = (x, y), (xn, yn) = (s, t), and the pixels (xi, yi) and (xi-1, yi-1) are adjacent for i = 1, 2, …, n.

The length of the path is n. If (x0, y0) = (xn, yn), the path is closed.

Depending on the type of adjacency considered, the paths are 4-, 8-, or m-paths.

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image R is a region of the image if R is a connected set

Two regions R1 and R2 are said to be adjacent if R1 ∪ R2 forms a connected set. Regions

that are not adjacent are said to be disjoint When referring to regions only 4- and

8-adjacency are considered

Suppose that an image contains K disjoint regions Rk, k = 1, …, K, none of which touches the image border.

Let Ru = R1 ∪ R2 ∪ … ∪ RK, and let (Ru)^c denote the complement of Ru.

We call all the points in Ru the foreground of the image, and the points in (Ru)^c the background of the image.

The boundary (border or contour) of a region R is the set of points that are adjacent to points in the complement of R, (R)^c. Equivalently, the border of a region is the set of pixels in the region that have at least one background neighbor. This definition is referred to as the

inner border to distinguish it from the notion of outer border which is the corresponding

border in the background

Distance measures

For pixels p, q, and z, with coordinates (x, y), (s, t), and (v, w), respectively, D is a distance function (or metric) if

(a) D(p, q) ≥ 0; D(p, q) = 0 iff p = q

(b) D(p, q) = D(q, p)

(c) D(p, z) ≤ D(p, q) + D(q, z)

The Euclidean distance between p and q is defined as

De(p, q) = [(x - s)² + (y - t)²]^{1/2}

The pixels q for which De(p, q) ≤ r are the points contained in a disk of radius r

centered at (x y)

The D4 distance (also called city-block distance) between p and q is defined as

D4(p, q) = |x - s| + |y - t|

The pixels q for which D4(p, q) ≤ r form a diamond centered at (x, y); for example, the pixels with D4 ≤ 2:

    2
  2 1 2
2 1 0 1 2
  2 1 2
    2

The pixels with D4 = 1 are the 4-neighbors of (x y)

The D8 distance (called the chessboard distance) between p and q is defined as

D8(p, q) = max(|x - s|, |y - t|)

The pixels q for which D8(p, q) ≤ r form a square centered at (x, y).

For example, the pixels with D8 ≤ 2:

2 2 2 2 2
2 1 1 1 2
2 1 0 1 2
2 1 1 1 2
2 2 2 2 2

The pixels with D8 = 1 are the 8-neighbors of (x y)

D4 and D8 distances are independent of any paths that might exist between p and q

because these distances involve only the coordinates of the points.
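The three metrics translate directly to code (a sketch with made-up names):

```python
def d_e(p, q):   # Euclidean distance
    (x, y), (s, t) = p, q
    return ((x - s) ** 2 + (y - t) ** 2) ** 0.5

def d_4(p, q):   # city-block distance
    (x, y), (s, t) = p, q
    return abs(x - s) + abs(y - t)

def d_8(p, q):   # chessboard distance
    (x, y), (s, t) = p, q
    return max(abs(x - s), abs(y - t))

# Pixels with d_4 <= 2 form the diamond shown above; d_8 <= 2 the square.
```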

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

Consider the 2×2 images (arrays)

[ a11 a12 ]      [ b11 b12 ]
[ a21 a22 ]      [ b21 b22 ]

Array product:

[ a11·b11   a12·b12 ]
[ a21·b21   a22·b22 ]

Matrix product:

[ a11·b11 + a12·b21   a11·b12 + a12·b22 ]
[ a21·b11 + a22·b21   a21·b12 + a22·b22 ]

We assume array operations unless stated otherwise
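In numpy, for instance, the distinction is the elementwise operator versus the matrix-product operator (illustrative values only):

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])

print(a * b)  # array product:  [[ 5 12] [21 32]]
print(a @ b)  # matrix product: [[19 22] [43 50]]
```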

Linear versus Nonlinear Operations

One of the most important classifications of an image-processing method is whether it is linear or nonlinear.

H[f(x, y)] = g(x, y)

H is said to be a linear operator if

H[a·f1(x, y) + b·f2(x, y)] = a·H[f1(x, y)] + b·H[f2(x, y)]

for all scalars a, b and all images f1, f2.

Example of nonlinear operator

the maximum value of the pixels of image f: H[f] = max f(x, y)

f1 = [0 2; 2 3],  f2 = [6 5; 4 7],  a = 1, b = -1

max[a·f1 + b·f2] = max([0 2; 2 3] - [6 5; 4 7]) = max[-6 -3; -2 -4] = -2

a·max[f1] + b·max[f2] = 1·max[0 2; 2 3] + (-1)·max[6 5; 4 7] = 3 - 7 = -4

The two results differ, so the max operator is nonlinear.
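The same computation in numpy (a sketch reproducing the numbers above):

```python
import numpy as np

f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 5], [4, 7]])
a, b = 1, -1

lhs = np.max(a * f1 + b * f2)          # max of the combined image: -2
rhs = a * np.max(f1) + b * np.max(f2)  # combining the maxima: 3 - 7 = -4
print(lhs, rhs)                        # -2 -4  => max is nonlinear
```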

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise: g(x, y) = f(x, y) + η(x, y)

f(x, y) - the noiseless image; η(x, y) - the noise, uncorrelated with the image and with zero average value

For a random variable z with mean m, E[(z - m)²] is the variance (E(·) is the expected value). The covariance of two random variables z1 and z2 is defined as E[(z1 - m1)(z2 - m2)].

The two random variables are uncorrelated when their covariance is 0

Objective: reduce noise by averaging a set of noisy images {g_i(x, y)} (a technique frequently used in image enhancement):

ḡ(x, y) = (1/K) · Σ_{i=1}^{K} g_i(x, y)

If the noise satisfies the properties stated above, we have

E{ḡ(x, y)} = f(x, y),   σ²_ḡ(x,y) = (1/K) · σ²_η(x,y)

where E{ḡ(x, y)} is the expected value of ḡ, and σ²_ḡ(x,y) and σ²_η(x,y) are the variances of ḡ and η, respectively. The standard deviation (square root of the variance) at any point in the average image is

σ_ḡ(x,y) = (1/√K) · σ_η(x,y)

As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because E{ḡ(x, y)} = f(x, y), this means that ḡ(x, y) approaches f(x, y) as the number of noisy images used in the averaging process increases.
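A quick numerical check of the 1/√K law (a simulation sketch; the constant image and noise level are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 100.0)                   # noiseless image

def noisy():
    return f + rng.normal(0.0, 20.0, f.shape)  # g_i = f + eta, sigma_eta = 20

for K in (1, 5, 20, 100):
    g_bar = np.mean([noisy() for _ in range(K)], axis=0)
    # measured std of (g_bar - f) should be close to 20 / sqrt(K)
    print(K, round(np.std(g_bar - f), 2), round(20 / np.sqrt(K), 2))
```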

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis. Figure 3(a) shows an 8-bit image in which corruption

was simulated by adding to it Gaussian noise with zero mean and a standard deviation of

64 intensity levels. Figures 3(b)-(f) show the result of averaging 5, 10, 20, 50, and 100

noisy images, respectively.

Fig. 3 Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (upper left); results of averaging 5, 10, 20, 50, and 100 noisy images

A frequent application of image subtraction is in the enhancement of differences between

images


Fig. 4 (a) infrared image of the Washington DC area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between

images (a) and (b). Black (0) values in Figure 4(c) indicate locations where there is no

difference between images (a) and (b)

Mask mode radiography: g(x, y) = f(x, y) - h(x, y)

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source. The procedure consists of injecting an X-ray contrast medium into the patient's

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x, y) we can find the differences between h and f, as enhanced detail.

The images being captured at TV rates, we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed

Fig. 5 - Angiography - subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c) enhanced

An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form g(x, y) = f(x, y) · h(x, y),

f(x, y) - the "perfect image", h(x, y) - the shading function

When the shading function is known:

f(x, y) = g(x, y) / h(x, y)

If h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.

Fig. 6 Shading correction: (a) shaded image of a tungsten filament, magnified 130 times; (b) shading pattern; (c) corrected image

Another use of image multiplication is in masking, also called region of interest (ROI) operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of the ROI can be arbitrary, although usually it is rectangular.

Fig. 7 (a) digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b)

In practice, most images are displayed using 8 bits, so image values are in the range [0, 255]; for TIFF and JPEG images the conversion to this range is automatic, and depends on the system used. The difference of two images can produce an image with values in the range [-255, 255], and the addition of two images an image in the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure: compute

f_m = f - min(f), then f_s = K · f_m / max(f_m)

which yields values in the full range [0, K] (K = 255 for 8-bit displays).
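A sketch of that scaling procedure (assuming a non-constant input; the function name is made up):

```python
import numpy as np

def scale_to_range(f, K=255):
    # f_m = f - min(f); f_s = K * f_m / max(f_m)  -> values in [0, K]
    fm = f.astype(float) - f.min()
    return np.round(K * fm / fm.max()).astype(np.uint8)

# e.g. the difference of two 8-bit images lies in [-255, 255]:
a = np.random.randint(0, 256, (4, 4))
b = np.random.randint(0, 256, (4, 4))
print(scale_to_range(a - b))
```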

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the intensity values of individual pixels: s = T(z)

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y) based on the values of the intensities of the points in Sxy. For example, if Sxy is a rectangular neighborhood of size m × n centered at (x, y), we can assign the new value of intensity by computing the average value of the pixels in Sxy:

g(x, y) = (1 / (m·n)) · Σ_{(r,c) ∈ Sxy} f(r, c)

The net effect is to perform local blurring in the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
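A direct (unoptimized) sketch of this local averaging, assuming odd m and n and zero padding at the borders:

```python
import numpy as np

def local_average(f, m, n):
    # g(x, y) = mean of f over the m x n neighborhood S_xy
    M, N = f.shape
    padded = np.pad(f.astype(float), ((m // 2, m // 2), (n // 2, n // 2)))
    g = np.empty((M, N))
    for x in range(M):
        for y in range(N):
            g[x, y] = padded[x:x + m, y:y + n].mean()
    return g
```

In practice this is done with a convolution routine rather than explicit loops; the loop form just mirrors the formula.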

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules)

A geometric transformation consists of 2 basic operations

1. a spatial transformation of coordinates

2. intensity interpolation, which assigns intensity values to the spatially transformed pixels

The coordinate system transformation: (x, y) = T[(v, w)]

(v, w) - pixel coordinates in the original image

(x, y) - pixel coordinates in the transformed image

T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform:

[x y 1] = [v w 1] · T = [v w 1] · [ t11 t12 0 ]
                                  [ t21 t22 0 ]
                                  [ t31 t32 1 ]     (AT)

that is, x = t11·v + t21·w + t31 and y = t12·v + t22·w + t32.

This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3×3 matrix equal to the matrix product of the scaling, rotation, and translation matrices from Table 1.

Table 1 - Affine transformations

The preceding transformations relocate pixels of an image to new locations. To complete the process, we have to assign intensity values to those locations. This task is done by using intensity interpolation (nearest neighbor, bilinear, or bicubic interpolation).

In practice we can use equation (AT) in two basic ways

forward mapping: scan the pixels of the input image (v, w) and compute the new spatial location (x, y) of the corresponding pixel in the output image using (AT) directly.

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image, (v, w) = T⁻¹[(x, y)].

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in numerous commercial implementations of spatial transformations (e.g., MATLAB).
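A sketch of inverse mapping with the row-vector convention of equation (AT), using nearest neighbor intensity assignment (the scaling matrix below is an illustrative choice):

```python
import numpy as np

def warp_inverse(src, T, out_shape):
    # Scan the OUTPUT grid; map each (x, y) back through T^{-1} to (v, w).
    Tinv = np.linalg.inv(T)
    out = np.zeros(out_shape, dtype=src.dtype)
    for x in range(out_shape[0]):
        for y in range(out_shape[1]):
            v, w, _ = np.array([x, y, 1.0]) @ Tinv
            vi, wi = int(round(v)), int(round(w))
            if 0 <= vi < src.shape[0] and 0 <= wi < src.shape[1]:
                out[x, y] = src[vi, wi]
    return out

T = np.array([[2.0, 0, 0],   # scale both coordinates by 2
              [0, 2.0, 0],
              [0, 0, 1.0]])
```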

Image registration - align two or more images of the same scene.

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more images taken at approximately the same time but using different imaging systems (e.g., an MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

One solution: use tie points (also called control points) - corresponding points whose locations are known precisely in the input and reference images.

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors; these objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points both on the input image and on the reference image. A simple model based on a bilinear approximation is given by

x = c1·v + c2·w + c3·v·w + c4
y = c5·v + c6·w + c7·v·w + c8

where (v, w) and (x, y) are the coordinates of corresponding tie points (the 4 pairs yield an 8×8 linear system for the coefficients c_i).
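Setting up and solving that 8×8 system is mechanical (sketch; the tie-point coordinates below are invented for illustration):

```python
import numpy as np

vw = np.array([[10, 10], [10, 200], [200, 10], [200, 200]], dtype=float)
xy = np.array([[12, 15], [11, 208], [207, 14], [205, 210]], dtype=float)

A = np.zeros((8, 8))
rhs = np.zeros(8)
for k, ((v, w), (x, y)) in enumerate(zip(vw, xy)):
    A[2 * k, :4] = [v, w, v * w, 1]      # x = c1 v + c2 w + c3 vw + c4
    A[2 * k + 1, 4:] = [v, w, v * w, 1]  # y = c5 v + c6 w + c7 vw + c8
    rhs[2 * k], rhs[2 * k + 1] = x, y

c = np.linalg.solve(A, rhs)              # c1..c8 of the bilinear model
```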

When 4 tie points are insufficient to obtain satisfactory registration, a frequently used approach is to select a larger number of tie points and, using this new set, subdivide the image into rectangular subregions marked by groups of 4 tie points; on each subregion the transformation model described above is applied.

The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometrical distortion.

(a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c)

Probabilistic Methods

z_k = the values of all possible intensities in an M×N digital image, k = 0, 1, …, L-1

p(z_k) = the probability that intensity level z_k occurs in the given image:

p(z_k) = n_k / (M·N)

n_k = the number of times that intensity z_k occurs in the image (M·N is the total number of pixels in the image), so

Σ_{k=0}^{L-1} p(z_k) = 1

The mean (average) intensity of an image is given by

m = Σ_{k=0}^{L-1} z_k · p(z_k)

The variance of the intensities is

σ² = Σ_{k=0}^{L-1} (z_k - m)² · p(z_k)

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

μ_n(z) = Σ_{k=0}^{L-1} (z_k - m)^n · p(z_k)

(μ_0(z) = 1, μ_1(z) = 0, μ_2(z) = σ²)

μ_3(z) > 0: the intensities are biased to values higher than the mean

μ_3(z) < 0: the intensities are biased to values lower than the mean

μ_3(z) = 0: the intensities are distributed approximately equally on both sides of the mean

Fig. 1 (a) low contrast; (b) medium contrast; (c) high contrast

Figure 1(a) - standard deviation 14.3 (variance = 204.5)

Figure 1(b) - standard deviation 31.6 (variance = 998.6)

Figure 1(c) - standard deviation 49.2 (variance = 2420.6)
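These statistics come straight from the normalized histogram; a sketch for an 8-bit numpy image (function name is made up):

```python
import numpy as np

def histogram_stats(img, L=256):
    zk = np.arange(L)
    p = np.bincount(img.ravel(), minlength=L) / img.size  # p(z_k) = n_k / (M*N)
    m = np.sum(zk * p)                 # mean intensity
    var = np.sum((zk - m) ** 2 * p)    # variance (mu_2)
    mu3 = np.sum((zk - m) ** 3 * p)    # third moment: sign gives the bias
    return m, var, mu3
```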

Intensity Transformations and Spatial Filtering

g(x, y) = T[f(x, y)]

f(x, y) - input image; g(x, y) - output image; T - an operator on f, defined over a neighborhood of (x, y)

- the neighborhood of the point (x, y), Sxy, usually is rectangular, centered on (x, y), and much smaller in size than the image

- spatial filtering: the operator T (the neighborhood and the operation applied on it) is called a spatial filter (also spatial mask, kernel, template, or window)

- when Sxy = {(x, y)}, T becomes an intensity (gray-level or mapping) transformation function

s = T(r)

where s and r denote, respectively, the intensity of g and f at (x, y)

Figure 2, left: T produces an output image of higher contrast than the original by darkening the intensity levels below k and brightening the levels above k - this technique is called contrast stretching.

Fig. 2 Intensity transformation functions: left - contrast stretching; right - thresholding function

Figure 2, right: T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L-1] is obtained using the function s = T(r) = L - 1 - r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Original image (left); negative image (right)

Log Transformations: s = T(r) = c · log(1 + r), c - constant, r ≥ 0

Some basic intensity transformation functions

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) - intensity values in the range 0 to 1.5 × 10^6

Figure 4(b) - log transformation of Figure 4(a) with c = 1; range 0 to 6.2

Fig. 4 (a) Fourier spectrum; (b) log transformation applied to (a), c = 1

Power-Law (Gamma) Transformations

s = T(r) = c · r^γ, with c and γ positive constants (sometimes written s = c · (r + ε)^γ to account for an offset)

Plots of gamma transformation for different values of γ (c=1)

Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for high input values. The curves with γ > 1 have the opposite effect of those generated with γ < 1.

c = γ = 1: identity transformation

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
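The negative, log, and gamma transformations are essentially one-liners on a numpy image (a sketch; for the power law, intensities are normalized to [0, 1], a common convention not spelled out in the slides):

```python
import numpy as np

def negative(r, L=256):
    return (L - 1) - r                    # s = L - 1 - r

def log_transform(r, c=1.0):
    return c * np.log1p(r.astype(float))  # s = c * log(1 + r)

def gamma_transform(r, c=1.0, gamma=3.0, L=256):
    rn = r.astype(float) / (L - 1)        # normalize to [0, 1]
    return np.round((L - 1) * c * rn ** gamma).astype(np.uint8)
```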

(a) aerial image; (b)-(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

Fig. 5

T(r) =
  (s1 / r1) · r,                                  r ∈ [0, r1]
  s1 + ((s2 - s1) / (r2 - r1)) · (r - r1),        r ∈ [r1, r2]
  s2 + ((L - 1 - s2) / (L - 1 - r2)) · (r - r2),  r ∈ [r2, L-1]

r1 = s1 and r2 = s2: identity transformation (no change)

r1 = r2, s1 = 0, s2 = L-1: thresholding function

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c) - contrast stretching, obtained by setting (r1, s1) = (rmin, 0) and (r2, s2) = (rmax, L-1), where rmin and rmax denote the minimum and maximum gray levels in the image, respectively. Thus the transformation function stretched the levels linearly from their original range to the full range [0, L-1].

Figure 5(d) - the thresholding function was used, with (r1, s1) = (m, 0) and (r2, s2) = (m, L-1), where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times
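A sketch of the piecewise-linear mapping above (it assumes 0 < r1 < r2 < L-1, so the thresholding limit r1 = r2 needs separate handling):

```python
import numpy as np

def contrast_stretch(r, r1, s1, r2, s2, L=256):
    r = r.astype(float)
    lo = (s1 / r1) * r
    mid = s1 + (s2 - s1) / (r2 - r1) * (r - r1)
    hi = s2 + (L - 1 - s2) / (L - 1 - r2) * (r - r2)
    s = np.where(r < r1, lo, np.where(r <= r2, mid, hi))
    return np.round(s).astype(np.uint8)

# Full-range stretch: r1, s1 = r_min, 0 and r2, s2 = r_max, L-1
```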

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing:

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities (Figure 6, middle)

2. brighten (or darken) the desired range of intensities, but leave unchanged all other intensities in the image (Figure 6, right)

Figure 6 (left) - aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the scale of intensities. This type of enhancement produces a binary image.

Transformation that highlights intensity range [A, B] and reduces all other intensities to a lower level

Transformation that highlights range [A, B] and preserves all other intensities

The binary image is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example).

In Figure 6 (right), the second technique was used: a band of intensities in the mid-gray range around the mean intensity was set to black, while the other intensities remain unchanged.

Fig. 6 - Aortic angiogram and intensity-sliced versions

Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2.

This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 - the lowest-order bit, plane 8 - the highest-order bit).

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all the intensities between 0 and 127 to 0 and all levels between 128 and 255 to 1. The bit-slicing technique is useful for analyzing the relative importance of each bit in the image - it helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
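A sketch of bit-plane extraction for an 8-bit numpy image, including the equivalence with thresholding for the 8-th plane:

```python
import numpy as np

def bit_plane(img, k):
    # plane k of an 8-bit image (k = 1: lowest-order bit, k = 8: highest)
    return (img >> (k - 1)) & 1

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
# The 8-th plane maps 0..127 -> 0 and 128..255 -> 1, i.e. a threshold at 128:
assert np.array_equal(bit_plane(img, 8), (img >= 128).astype(np.uint8))
```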

Three membranes enclose the eye the cornea and sclera outer cover the choroid the retina

The cornea is a tough transparent tissue that covers the anterior surface of the eye

Continuous with the cornea the sclera is an opaque membrane that enclosesthe remainder of the optic globe

The choroid lies directly below the sclera This membrane contains a network of blood vessels (major nutrition of the eye) The choroid is pigmented and help reduce the amount of light entering the eyeThe choroid is divided (at its anterior extreme) into the ciliary body and the irisThe iris contracts and expands to control the amount of light

Digital Image ProcessingDigital Image Processing

Week 1Week 1

The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to ciliary body (60-70 water 6 fat protein) The lens is colored in slightly yellow The lens absorbs approximatively 8 of the visiblelight (infrared and ultraviolet light are absorbed by proteins in lens)

The innermost membrane is the retina When the eye is proper focusedlight from an object outside the eye is imaged on the retinaVision is possible because of the distribution of discrete light receptors on the surface of the retina cones and rods (6-7 milion cones 75-150 milion rods)

Cones located in the central part of the retina (fovea) they are sensitive to colors vision of detail each cone is link to its own nervecone vision = photopic or bright-light vision

Fovea = the place where the image of the object of interest falls on

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Rods distributed over al the retina surface several rods are contected toa single nerve not specialized in detail vision serve to give a general overall picture of the filed of viewnot involved in color visionsensitive to low level of illumination

Blind spot region without receptors

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image formation in the eye

Ordinary photographic camera the lens has fixed focal length focusing atvarious distances is done by modifying the distance between the lens and the image plane (were the film or imaging chip are located)

Human eye the distance between the lens and the retina (the imaging region)is fixed the focal length needed to achieve proper focus is obtained by varyingthe shape of the lens (the fibers in the ciliary body accomplish this flattening or thickening the lens for distant or near objects respectively

distance between lens and retina along visual axix = 17 mm

range of focal length = 14 mm to 17 mm

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Illustration of Mach band effectPercieved intensityis not a simple function of actual intensity

Digital Image ProcessingDigital Image Processing

Week 1Week 1

All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Optical illusions

Digital Image ProcessingDigital Image Processing

Week 1Week 1

ALBASTRU BLUEVERDE GREENGALBEN YELLOW ROSU REDPORTOCALIU ORANGEGALBEN YELLOWVISINIU BURGUNDYALB WHITE

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Lightbullachromatic or monochromatic light - light that is void of color the attribute of such light is its intensity or amount

gray level is used to describe monochromatic intensitybecause it ranges from black to grays and to white bull chromatic light spans the electromagnetic energy spectrumfrom approximately 043 to 079 μm

quantities that describe the quality of a chromatic light source radiance

the total amount of energy that flows from the light source and it isusually measured in watts (W) luminance

measured in lumens (lm) gives a measure of the amount of energy an observer perceives from a light source

Digital Image ProcessingDigital Image Processing

Week 1Week 1

For example light emitted from a source operating in the far infraredregion of the spectrum could have significant energy (radiance) butan observer would hardly perceive it its luminance would be almostzero

brightnessa subjective descriptor of light perception that is practicallyimpossible to measure It embodies the achromatic notion ofintensity and is one of the key factors in describing color sensation

Digital Image ProcessingDigital Image Processing

Week 1Week 1

the physical meaning is determined by the source of the image

( )f D f x y

Image generated from a physical process f(xy) ndash proportional to the energy radiated by the physical source

f(xy) ndash characterized by two components

i(xy) = illumination component the amount of source illumination incident on the scene being viewed

r(xy) = reflectance component the amount of illumination reflected by the objects in the scene

( ) ( ) ( )

0 ( ) 0 ( ) 1

f x y i x y r x y

i x y r x y

Digital Image ProcessingDigital Image Processing

Week 1Week 1

r(xy)=0 - total absorption r(xy)=1 - total reflectance

i(xy) ndash determined by the illumination source

r(xy) ndash determined by the characteristics of the imaged objects

is called gray (or intensity) scale

In practice

min 0 0 max min min min max max max( ) L l f x y L L i r L i r

indoor values without additional illuminationmin max10 1000L L

black whitemin max0 1 0 1 0 1L L L L l l L

min maxL L

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image Processing

Week 1

Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed

scene

converting a continuous image f to digital form

- digitizing (x y) is called sampling

- digitizing f(x y) is called quantization

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Continuous image projected onto a sensor array Result of image sampling and quantization

Digital Image Processing

Week 1

Representing Digital Images (xy) x = 01hellipM-1 y = 01hellipN-1 ndash spatial variables or spatial coordinates

(00) (01) (0 1)(10) (11) (1 1)

( )

( 10) ( 11) ( 1 1)

f f f Nf f f N

f x y

f M f M f M N

image element pixel

00 01 0 1

10 11 1 1

10 11 1 1

( ) ( )

N

i jN M N

i j

M M M N

a a aa f x i y j f i ja a a

Aa

a a a

f(00) ndash the upper left corner of the image

Digital Image Processing

Week 1

M N ge 0 L=2k

[0 1]i j i ja a L

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system Upper limit ndash determined by saturation lower limit - noise

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Number of bits required to store a digitized image

for 2 b M N k M N b N k

When an image can have 2k intensity levels the image is referred as a k-bit image 256 discrete intensity values ndash 8-bit image

Digital Image Processing

Week 1

Spatial and Intensity Resolution Spatial resolution ndash the smallest discernible detail in an image

Measures line pairs per unit distance dots (pixels) per unit distance

Image resolution = the largest number of discernible line pairs per unit distance

(eg 100 line pairs per mm)

Dots per unit distance are commonly used in printing and publishing

In US the measure is expressed in dots per inch (dpi)

(newspapers are printed with 75 dpi glossy brochures at 175 dpi)

Intensity resolution ndash the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations

L=2k ndash most common k = 8

Intensity resolution in practice is given by k (number of bits used to quantize intensity)

Digital Image Processing

Week 1

Fig1 Reducing spatial resolution 1250 dpi(upper left) 300 dpi (upper right)

150 dpi (lower left) 72 dpi (lower right)

Digital Image Processing

Week 1

Reducing the number of gray levels 256 128 64 32

Digital Image Processing

Week 1

Reducing the number of gray levels 16 8 4 2

Digital Image Processing

Week 1

Image Interpolation - used in zooming shrinking rotating and geometric corrections

Shrinking zooming ndash image resizing ndash image resampling methods

Interpolation is the process of using known data to estimate values at unknown locations

Suppose we have an image of size 500 times 500 pixels that has to be enlarged 15 times to

750times750 pixels One way to do this is to create an imaginary 750 times 750 grid with the

same spacing as the original and then shrink it so that it fits exactly over the original

image The pixel spacing in the 750 times 750 grid will be less than in the original image

Problem assignment of intensity-level in the new 750 times 750 grid

Nearest neighbor interpolation assign for every point in the new grid (750 times 750) the

intensity of the closest pixel (nearest neighbor) from the oldoriginal grid (500 times 500)

This technique has the tendency to produce undesirable effects like severe distortion of

straight edges

Digital Image Processing

Week 1

Bilinear interpolation ndash assign for the new (x y) location the following intensity ( )v x y a x b y c x y d

where the four coefficients are determined from the 4 equations in 4 unknowns that can

be written using the 4 nearest neighbors of point (x y)

Bilinear interpolation gives much better results than nearest neighbor interpolation with a

modest increase in computational effort

Bicubic interpolation ndash assign for the new (x y) location an intensity that involves the 16

nearest neighbors of the point 3 3

0 0

( ) i ji j

i jv x y c x y

The coefficients cij are obtained solving a 16x16 linear system

intensity levels of the 16 nearest neighbors of 3 3

0 0

( )i ji j

i jc x y x y

Digital Image Processing

Week 1

Generally bicubic interpolation does a better job of preserving fine detail than the

bilinear technique Bicubic interpolation is the standard used in commercial image editing

programs such as Adobe Photoshop and Corel Photopaint

Figure 2 (a) is the same as Fig 1 (d) which was obtained by reducing the resolution of

the 1250 dpi in Fig 1(a) to 72 dpi (the size shrank from 3692 times 2812 to 213 times 162) and

then zooming the reduced image back to its original size To generate Fig 1(d) nearest

neighbor interpolation was used (both for shrinking and zooming)

Figures 2(b) and (c) were generated using the same steps but using bilinear and bicubic

interpolation respectively Figures 2(d)+(e)+(f) were obtained by reducing the resolution

from 1250 dpi to 150 dpi (instead of 72 dpi)

Digital Image Processing

Week 1

Fig 2 ndash Interpolation examples for zooming and shrinking (nearest neighbor linear bicubic)

Digital Image Processing

Week 1

Neighbors of a Pixel

A pixel p at coordinates (x y) has 4 horizontal and vertical neighbors horizontal vertical( 1 ) ( 1 ) ( 1) ( 1)x y x y x y x y

This set of pixels called the 4-neighbors of p denoted by N4 (p)

The 4 diagonal neighbors of p have coordinates ( 1 1) ( 1 1) ( 1 1) ( 1 1)x y x y x y x y

and are denoted ND(p)

The horizontal vertical and diagonal neighbors are called the 8-neighbors of p denoted

N8 (p)

If (xy) is on the border of the image some of the neighbor locations in ND(p) and N8(p)

fall outside the image

Digital Image Processing

Week 1

Adjacency Connectivity Regions Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image V 01 (V=0 V=1)

- in a gray-scale image with 256 possible gray-levels V can be any subset of 0255

We consider 3 types of adjacency

(a) 4-adjacency two pixels p and q with values from V are 4-adjacent if 4( )q N p

(b) 8-adjacency two pixels p and q with values from V are 8-adjacent if 8( )q N p

(c) m-adjacency (mixed adjacency) two pixels p and q with values from V are

m-adjacent if

4( )q N p or

( )Dq N p and the set 4 4( ) ( )N p N q has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency It is introduced to eliminate the

ambiguities that often arise when 8-adjacency is used Consider the example

Digital Image Processing

Week 1

binary image

0 1 1 0 1 1 0 1 1

1 0 1 0 0 1 0 0 1 0

0 0 1 0 0 1 0 0 1

V

The three pixels at the top (first line) in the above example show multiple (ambiguous)

8-adjacency as indicated by the dashed lines This ambiguity is removed by using

m-adjacency

Digital Image Processing

Week 1

A (digital) path (or curve) from pixel p with coordinates (xy) to q with coordinates (st)

is a sequence of distinct pixels with coordinates

and are adjacent 0 0 1 1

1 1

( ) ( ) ( ) ( ) ( )( ) ( ) 12

n n

i i i i

x y x y x y x y s tx y x y i n

The length of the path is n If 0 0( ) ( )n nx y x y the path is closed

Depending on the type of adjacency considered, the paths are 4-, 8-, or m-paths.

Let S denote a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting only of pixels from S. S is a connected set if there is a path in S between any 2 pixels of S.

Let R be a subset of pixels in an image. R is a region of the image if R is a connected set. Two regions R1 and R2 are said to be adjacent if R1 ∪ R2 forms a connected set. Regions that are not adjacent are said to be disjoint. When referring to regions, only 4- and 8-adjacency are considered.
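For instance, the connected set containing a given seed pixel can be grown by the usual breadth-first search; the sketch below (ours, not from the course text) uses 4-adjacency and an intensity set V:

    from collections import deque

    def connected_region(img, seed, V=frozenset({1})):
        # grow the 4-connected set of pixels with values in V that contains `seed`
        M, N = img.shape
        region, frontier = set(), deque([seed])
        while frontier:
            x, y = frontier.popleft()
            if (x, y) in region or not (0 <= x < M and 0 <= y < N):
                continue
            if img[x, y] not in V:
                continue
            region.add((x, y))
            frontier.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])
        return region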


Suppose that an image contains K disjoint regions R_k, k = 1, ..., K, none of which touches the image border. Let R_u denote the union of all K regions, R_u = ∪_{k=1}^{K} R_k, and let (R_u)^c denote its complement. We call all the points in R_u the foreground of the image, and the points in (R_u)^c the background of the image.

The boundary (border, or contour) of a region R is the set of points that are adjacent to points in the complement of R, (R)^c. In other words, the border of a region is the set of pixels in the region that have at least one background neighbor. This definition is referred to as the inner border, to distinguish it from the notion of outer border, which is the corresponding border in the background.


Distance measures

For pixels p, q, and z, with coordinates (x, y), (s, t), and (v, w), respectively, D is a distance function, or metric, if

(a) D(p, q) ≥ 0, and D(p, q) = 0 iff p = q

(b) D(p, q) = D(q, p)

(c) D(p, z) ≤ D(p, q) + D(q, z)

The Euclidean distance between p and q is defined as

D_e(p, q) = [(x − s)² + (y − t)²]^{1/2}

The pixels q for which D_e(p, q) ≤ r are the points contained in a disk of radius r centered at (x, y).


The D4 distance (also called city-block distance) between p and q is defined as

D4(p, q) = |x − s| + |y − t|

The pixels q for which D4(p, q) ≤ r form a diamond centered at (x, y). For example, the pixels with D4 distance ≤ 2 from the center point:

            2
          2 1 2
        2 1 0 1 2
          2 1 2
            2

The pixels with D4 = 1 are the 4-neighbors of (x, y).

The D8 distance (called the chessboard distance) between p and q is defined as

D8(p, q) = max(|x − s|, |y − t|)

The pixels q for which D8(p, q) ≤ r form a square centered at (x, y). For example, the pixels with D8 distance ≤ 2:

        2 2 2 2 2
        2 1 1 1 2
        2 1 0 1 2
        2 1 1 1 2
        2 2 2 2 2

The pixels with D8 = 1 are the 8-neighbors of (x, y).

The D4 and D8 distances are independent of any paths that might exist between p and q, because these distances involve only the coordinates of the points.
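The three metrics, written out as a trivial sketch:

    import numpy as np

    def D_e(p, q):   # Euclidean distance
        return np.hypot(p[0] - q[0], p[1] - q[1])

    def D_4(p, q):   # city-block distance
        return abs(p[0] - q[0]) + abs(p[1] - q[1])

    def D_8(p, q):   # chessboard distance
        return max(abs(p[0] - q[0]), abs(p[1] - q[1]))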


Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

Consider the 2×2 images

A = [ a11 a12 ; a21 a22 ],   B = [ b11 b12 ; b21 b22 ]

Array product:

A .* B = [ a11 b11   a12 b12 ; a21 b21   a22 b22 ]

Matrix product:

A B = [ a11 b11 + a12 b21   a11 b12 + a12 b22 ; a21 b11 + a22 b21   a21 b12 + a22 b22 ]

We assume array operations unless stated otherwise
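In NumPy, for example, the two products correspond to two different operators:

    import numpy as np

    A = np.array([[1, 2], [3, 4]])
    B = np.array([[5, 6], [7, 8]])

    print(A * B)   # array (element-wise) product: [[ 5 12] [21 32]]
    print(A @ B)   # matrix product:               [[19 22] [43 50]]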


Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether the method is linear or nonlinear. Consider an operator H that produces an output image g(x, y) from an input image f(x, y):

H[f(x, y)] = g(x, y)

H is said to be a linear operator if

H[a f_1(x, y) + b f_2(x, y)] = a H[f_1(x, y)] + b H[f_2(x, y)]

for all scalars a, b and all images f_1, f_2.

Example of a nonlinear operator: the maximum value of the pixels of an image f,

H[f] = max_{(x,y)} f(x, y)

Let

f_1 = [ 0 2 ; 2 3 ],   f_2 = [ 6 5 ; 4 7 ],   a = 1,   b = −1

Then

max(a f_1 + b f_2) = max [ −6 −3 ; −2 −4 ] = −2

while

a · max(f_1) + b · max(f_2) = 1 · 3 + (−1) · 7 = −4 ≠ −2

so H is not linear.
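The counterexample can be checked numerically (same values as above):

    import numpy as np

    f1 = np.array([[0, 2], [2, 3]])
    f2 = np.array([[6, 5], [4, 7]])
    a, b = 1, -1

    print(np.max(a * f1 + b * f2))            # -2
    print(a * np.max(f1) + b * np.max(f2))    # -4: max is not linear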

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise:

g(x, y) = f(x, y) + η(x, y)

where f(x, y) is the noiseless image and η(x, y) is the noise, assumed uncorrelated and with 0 average value.

For a random variable z with mean m, E[(z − m)²] is the variance (E(·) is the expected value). The covariance of two random variables z_1 and z_2 is defined as E[(z_1 − m_1)(z_2 − m_2)]. The two random variables are uncorrelated when their covariance is 0.

Objective: reduce the noise by averaging a set of noisy images g_i(x, y) (a technique frequently used in image enhancement):

ḡ(x, y) = (1/K) Σ_{i=1}^{K} g_i(x, y)

If the noise satisfies the properties stated above, we have

E[ḡ(x, y)] = f(x, y),   σ²_ḡ(x, y) = (1/K) σ²_η(x, y)

where E[ḡ(x, y)] is the expected value of ḡ, and σ²_ḡ(x, y) and σ²_η(x, y) are the variances of ḡ and η, respectively. The standard deviation (square root of the variance) at any point in the average image is

σ_ḡ(x, y) = (1/√K) σ_η(x, y)

As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because E[ḡ(x, y)] = f(x, y), this means that ḡ(x, y) approaches f(x, y) as the number of noisy images used in the averaging process increases.
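A quick simulation of this effect (our sketch; a constant synthetic image stands in for f, with σ = 64 matching the astronomy example that follows):

    import numpy as np

    rng = np.random.default_rng(0)
    f = np.full((64, 64), 100.0)   # noiseless image

    def averaged(K):
        # average K copies of f corrupted by zero-mean Gaussian noise
        g = f + rng.normal(0.0, 64.0, size=(K,) + f.shape)
        return g.mean(axis=0)

    for K in (1, 5, 20, 100):
        print(K, averaged(K).std())   # decreases roughly as 64 / sqrt(K)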

An important application of image averaging is in the field of astronomy, where imaging under very low light levels frequently causes sensor noise to render single images virtually useless for analysis. Figure 3(a) shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels. Figures 3(b)-(f) show the result of averaging 5, 10, 20, 50, and 100 images, respectively.

Fig. 3 - Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner); results of averaging 5, 10, 20, 50, and 100 noisy images.


A frequent application of image subtraction is in the enhancement of differences between

images

Fig. 4 - (a) Infrared image of the Washington, DC area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images.

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between images (a) and (b). Black (0) values in Figure 4(c) indicate locations where there is no difference between images (a) and (b).


Mask mode radiography: g(x, y) = f(x, y) − h(x, y)

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images, called live images (denoted f(x, y)), of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium.

In g(x, y) the differences between h and f appear as enhanced detail. Since images are captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries in the area being observed.


Fig. 5 - Angiography subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c), enhanced.


An important application of image multiplication (and division) is shading correction. Suppose that an imaging sensor produces images in the form

g(x, y) = f(x, y) h(x, y)

where f(x, y) is the "perfect image" and h(x, y) is the shading function. When the shading function is known,

f(x, y) = g(x, y) / h(x, y)

When h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.
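A sketch of the division step (the epsilon guard and the smoothing-based estimate are our assumptions, not part of the course text):

    import numpy as np

    def correct_shading(g, h, eps=1e-6):
        # divide out a known (or estimated) shading pattern h
        return g / np.maximum(h, eps)   # guard against division by zero

    # when h is unknown, a common rough estimate is a heavily blurred
    # version of g itself (e.g., a wide Gaussian low-pass filter)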

Fig. 6 - Shading correction: (a) shaded image of a tungsten filament, magnified 130 times; (b) shading pattern; (c) corrected image.


Another use of image multiplication is in masking, also called region of interest (ROI), operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of an ROI can be arbitrary, although it is usually rectangular.

Fig. 7 - (a) Digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b).


In practice, most images are displayed using 8 bits, so image values are expected to be in the range [0, 255]. For TIFF and JPEG images the conversion to this range is automatic, and depends on the system used. However, the difference of two images can produce values in the range [−255, 255], and the addition of two images, values in the range [0, 510]. Many software packages simply set all negative values to 0 and all values greater than 255 to 255. A more appropriate procedure is to compute

f_m = f − min(f)
f_s = K · f_m / max(f_m)

which yields values in the full range [0, K] (K = 255 for 8-bit displays).
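The procedure in code (assuming a non-constant input image):

    import numpy as np

    def scale_to_range(f, K=255):
        # f_m = f - min(f); f_s = K * f_m / max(f_m)  ->  values in [0, K]
        fm = f - f.min()
        return (K * fm / fm.max()).astype(np.uint8)

    diff = np.random.randint(-255, 256, size=(4, 4))   # e.g., a difference image
    print(scale_to_range(diff))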


Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the intensity values of individual pixels: s = T(z),

where z is the intensity of a pixel in the original image and s is the intensity of the corresponding pixel in the processed image.


Neighborhood operations

Let S_xy denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y), based on the values of the intensities of the points in S_xy. For example, if S_xy is a rectangular neighborhood of size m × n centered at (x, y), we can assign the new value of intensity by computing the average value of the pixels in S_xy:

g(x, y) = (1 / (m n)) Σ_{(r,c) ∈ S_xy} f(r, c)

The net effect is to perform local blurring in the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
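A direct (unoptimized) implementation of the neighborhood average; handling border pixels by edge padding is our choice, one among several:

    import numpy as np

    def mean_filter(f, m=3, n=3):
        # g(x, y) = average of the m x n neighborhood S_xy
        padded = np.pad(f.astype(float), ((m // 2,), (n // 2,)), mode="edge")
        g = np.zeros(f.shape)
        for x in range(f.shape[0]):
            for y in range(f.shape[1]):
                g[x, y] = padded[x:x + m, y:y + n].mean()
        return g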


Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules)

A geometric transformation consists of 2 basic operations:

1. a spatial transformation of coordinates;

2. an intensity interpolation that assigns intensity values to the spatially transformed pixels.

The coordinate system transformation: (x, y) = T[(v, w)]

(v, w) - pixel coordinates in the original image

(x, y) - pixel coordinates in the transformed image


For example, T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform:

[x y 1] = [v w 1] T = [v w 1] · [ t11 t12 0 ; t21 t22 0 ; t31 t32 1 ]

that is,

x = t11 v + t21 w + t31
y = t12 v + t22 w + t32          (AT)

This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3×3 matrix equal to the matrix product of the scaling, rotation, and translation matrices from Table 1.
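For example, with the row-vector convention [x y 1] = [v w 1] T used above, the composite matrix can be formed as below (this uses one common sign convention for the rotation; the convention in Table 1 may differ):

    import numpy as np

    def scaling(cx, cy):
        return np.array([[cx, 0, 0], [0, cy, 0], [0, 0, 1.0]])

    def rotation(theta):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1.0]])

    def translation(tx, ty):
        return np.array([[1, 0, 0], [0, 1, 0], [tx, ty, 1.0]])

    # resize, then rotate, then move: one combined matrix
    T = scaling(1.5, 1.5) @ rotation(np.pi / 4) @ translation(100, 50)
    print(np.array([30, 40, 1]) @ T)   # transformed coordinates [x y 1]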


Table 1 - Affine transformations


The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor, bilinear, or bicubic interpolation).

In practice we can use equation (AT) in two basic ways

forward mapping: scan the pixels of the input image (v, w) and compute the new spatial location (x, y) of the corresponding pixel in the new image, using (AT) directly.

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)


inverse mapping: scans the output pixel locations and, at each location (x, y), computes the corresponding location in the input image: (v, w) = T^{−1}[(x, y)].

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (in MATLAB, for example).
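A minimal inverse-mapping warp (our sketch; nearest-neighbor interpolation for brevity — bilinear or bicubic would replace the rounding step):

    import numpy as np

    def warp(img, T, out_shape):
        # for each output pixel (x, y), look up the source pixel (v, w)
        T_inv = np.linalg.inv(T)
        out = np.zeros(out_shape, dtype=img.dtype)
        for x in range(out_shape[0]):
            for y in range(out_shape[1]):
                v, w, _ = np.array([x, y, 1.0]) @ T_inv
                v, w = int(round(v)), int(round(w))
                if 0 <= v < img.shape[0] and 0 <= w < img.shape[1]:
                    out[x, y] = img[v, w]
        return out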


Image registration - align two or more images of the same scene.

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more images taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

The problem is solved using tie points (also called control points): corresponding points whose locations are known precisely in the input and reference images.


How to select tie points?

- by selecting them interactively;

- by using algorithms that try to detect these points automatically;

- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors; these objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points.

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points both in the input image and in the reference image. A simple model based on a bilinear approximation is given by

x = c1 v + c2 w + c3 v w + c4
y = c5 v + c6 w + c7 v w + c8

where (v, w) and (x, y) are the coordinates of the tie points (we get an 8×8 linear system for the c_i).
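Since x and y share the same design matrix, the 8×8 system splits into two 4×4 solves; a sketch with made-up tie-point coordinates:

    import numpy as np

    # 4 tie points in the input (v, w) and reference (x, y) images (made-up values)
    vw = np.array([[10, 10], [10, 200], [200, 10], [200, 200]], dtype=float)
    xy = np.array([[12, 14], [15, 205], [204, 13], [210, 212]], dtype=float)

    # each row [v, w, v*w, 1] gives x = c1 v + c2 w + c3 v w + c4 (same for y)
    A = np.column_stack([vw[:, 0], vw[:, 1], vw[:, 0] * vw[:, 1], np.ones(4)])
    c_x = np.linalg.solve(A, xy[:, 0])   # c1..c4
    c_y = np.linalg.solve(A, xy[:, 1])   # c5..c8
    print(c_x, c_y)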


When 4 tie points are insufficient to obtain satisfactory registration, an approach used frequently is to select a larger number of tie points and, using this new set, subdivide the image into rectangular regions marked by groups of 4 tie points. On each subregion marked by 4 tie points we apply the transformation model described above. The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometric distortion.


(a) Reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c).


Probabilistic Methods

z_i = the values of all possible intensities in an M×N digital image, i = 0, 1, ..., L−1

p(z_k) = the probability that the intensity level z_k occurs in the given image:

p(z_k) = n_k / (M N)

where n_k is the number of times that intensity z_k occurs in the image (MN is the total number of pixels in the image), and

Σ_{k=0}^{L−1} p(z_k) = 1

The mean (average) intensity of an image is given by

m = Σ_{k=0}^{L−1} z_k p(z_k)

The variance of the intensities is

σ² = Σ_{k=0}^{L−1} (z_k − m)² p(z_k)

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ, the square root of the variance) is used.

The n-th moment of a random variable z about the mean is defined as

μ_n(z) = Σ_{k=0}^{L−1} (z_k − m)^n p(z_k)

(Note that μ_0(z) = 1, μ_1(z) = 0, and μ_2(z) = σ².)

μ_3(z) > 0: the intensities are biased to values higher than the mean;
μ_3(z) < 0: the intensities are biased to values lower than the mean;
μ_3(z) = 0: the intensities are distributed approximately equally on both sides of the mean.
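These statistics follow directly from the normalized histogram; a sketch for an integer-valued image:

    import numpy as np

    def intensity_stats(img, L=256):
        # mean, variance and third central moment from p(z_k) = n_k / (M N)
        z = np.arange(L)
        p = np.bincount(img.ravel(), minlength=L) / img.size
        m = (z * p).sum()
        var = ((z - m) ** 2 * p).sum()
        mu3 = ((z - m) ** 3 * p).sum()
        return m, var, mu3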

Fig. 1 - (a) Low contrast; (b) medium contrast; (c) high contrast.

Figure 1(a): standard deviation 14.3 (variance = 204.5)
Figure 1(b): standard deviation 31.6 (variance = 998.6)
Figure 1(c): standard deviation 49.2 (variance = 2420.6)


Intensity Transformations and Spatial Filtering

g(x, y) = T[f(x, y)]

f(x, y) is the input image, g(x, y) the output image, and T an operator on f, defined over a neighborhood of (x, y).

- The neighborhood of the point (x, y), S_xy, usually is rectangular, centered on (x, y), and much smaller in size than the image.


- spatial filtering: the operator T (the neighborhood together with the operation applied on it) is called a spatial filter (also: spatial mask, kernel, template, or window).

If S_xy = {(x, y)}, T becomes an intensity (gray-level, or mapping) transformation function,

s = T(r)

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2 (left): T produces an output image of higher contrast than the original, by darkening the intensity levels below k and brightening the levels above k. This technique is called contrast stretching.

Fig. 2 - Intensity transformation functions: left, contrast stretching; right, thresholding function.


Figure 2 (right): T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L − 1] is obtained using the function s = T(r) = L − 1 − r.

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image


(Left: original image; right: negative image.)


Log Transformations: s = T(r) = c log(1 + r), c a constant, r ≥ 0.

Some basic intensity transformation functions


This transformation maps a narrow range of low intensity values in the input into a wider range. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values. The opposite is true for the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a): intensity values in the range 0 to 1.5 × 10^6.
Figure 4(b): log transformation of Figure 4(a) with c = 1; the range becomes 0 to 6.2.

Fig. 4 - (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.


Power-Law (Gamma) Transformations

s = T(r) = c r^γ, with c and γ positive constants (sometimes written s = c (r + ε)^γ, to account for a measurable offset when the input is zero).

(Plots of the gamma transformation for different values of γ, with c = 1.)


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. The curves with γ > 1 have the opposite effect of those generated with γ < 1. For c = γ = 1, the transformation is the identity.

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
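Sketches of the three point transformations seen so far (our code; normalizing to [0, 1] before applying the power is a common convention, not the only one):

    import numpy as np

    def negative(r, L=256):
        return L - 1 - r

    def log_transform(r, c=1.0):
        return c * np.log1p(r.astype(float))          # c * log(1 + r)

    def gamma_transform(r, gamma, c=1.0, L=256):
        r_norm = r.astype(float) / (L - 1)            # work in [0, 1]
        return (L - 1) * c * r_norm ** gamma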


(a) Aerial image; (b)-(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively.


Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

Fig. 5 (a)-(d)


The contrast-stretching transformation, with control points (r1, s1) and (r2, s2):

T(r) = (s1 / r1) · r                                    for r ∈ [0, r1]
T(r) = s1 + ((s2 − s1) / (r2 − r1)) · (r − r1)          for r ∈ [r1, r2]
T(r) = s2 + ((L − 1 − s2) / (L − 1 − r2)) · (r − r2)    for r ∈ [r2, L − 1]

r1 = s1 and r2 = s2: identity transformation (no change);
r1 = r2, s1 = 0, s2 = L − 1: thresholding function.

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c): contrast stretching, obtained by setting (r1, s1) = (r_min, 0) and (r2, s2) = (r_max, L − 1), where r_min and r_max denote the minimum and maximum gray levels in the image, respectively. Thus, the transformation function stretched the levels linearly from their original range to the full range [0, L − 1].

Figure 5(d): the thresholding function was used, with (r1, s1) = (m, 0) and (r2, s2) = (m, L − 1), where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.
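The Figure 5(c) setting reduces to a one-line stretch (assuming r_max > r_min):

    import numpy as np

    def stretch_full_range(f, L=256):
        # (r1, s1) = (r_min, 0), (r2, s2) = (r_max, L - 1)
        r_min, r_max = f.min(), f.max()
        return ((f - r_min) * (L - 1) / (r_max - r_min)).astype(np.uint8)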


Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches to intensity-level slicing:

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities (Figure 3.11(a));

2. brighten (or darken) the desired range of intensities, but leave all other intensities in the image unchanged (Figure 3.11(b)).


Figure 6 (left): aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique, for a band near the top of the scale of intensities. This type of enhancement produces a binary image, which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example).

(The two slicing transformations: one highlights intensity range [A, B] and reduces all other intensities to a lower level; the other highlights range [A, B] and preserves all other intensities.)

In Figure 6 (right), the second technique was used: a band of intensities in the mid-gray range around the mean intensity was set to black; the other intensities remain unchanged.

Fig. 6 - Aortic angiogram and intensity-sliced versions.


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2. This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 contains the lowest-order bit, plane 8 the highest-order bit).


The binary image for the 8th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all intensities between 0 and 127 to 0, and all levels between 128 and 255 to 1.

The bit-slicing technique is useful for analyzing the relative importance of each bit in the image, and so helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
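Bit planes are obtained by shifting and masking; for an 8-bit image:

    import numpy as np

    def bit_plane(img, k):
        # plane k of an 8-bit image (k = 1: lowest-order bit, k = 8: highest)
        return (img >> (k - 1)) & 1

    img = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
    print(bit_plane(img, 8))   # equivalent to thresholding at 128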



Gamma-Ray Imaging

Nuclear medicine, astronomical observations

Nuclear medicine

the approach is to inject a patient with a radioactive isotope that emits gamma rays as it decays

Images are produced from the emissions collected by gamma-ray detectors

Images of this sort are used to locate sites of bone pathology (infections tumors)

PET (positron emission tomography) - the patient is given a radioactive isotope that emits positrons as it decays.


Examples of gamma-ray imaging

Bone scan; PET image


X-ray imaging

Medical diagnosis, industry, astronomy

An X-ray tube is a vacuum tube with a cathode and an anode. The cathode is heated, causing free electrons to be released. The electrons flow at high speed to the positively charged anode. When the electrons strike a nucleus, energy is released in the form of X-ray radiation. The energy (penetrating power) of the X-rays is controlled by a voltage applied across the anode and by a current applied to the filament in the cathode.

The intensity of the X-rays is modified by absorption as they pass through the patient, and the resulting energy falling on the film develops it, much in the same way that light develops photographic film.


Angiography = contrast-enhancement radiography

Angiograms = images of blood vessels

A catheter is inserted into an artery or vein in the groin. The catheter is threaded into the blood vessel and guided to the area to be studied. When it reaches that area, an X-ray contrast medium is injected through the catheter. This enhances the contrast of the blood vessels and enables the radiologist to see any irregularities or blockages.

X-rays are used in CAT (computerized axial tomography).

X-rays are also used in industrial processes (e.g., examining circuit boards for flaws in manufacturing). Industrial CAT scans are useful when the parts can be penetrated by X-rays.


Examples of X-ray imaging

Chest X-ray; aortic angiogram; head CT; the Cygnus Loop; circuit boards


Imaging in the Ultraviolet Band

Lithography, industrial inspection, microscopy, biological imaging, astronomical observations

Ultraviolet light is used in fluorescence microscopy. Ultraviolet light is not visible to the human eye, but when a photon of ultraviolet radiation collides with an electron in an atom of a fluorescent material, it elevates the electron to a higher energy level. After that, the electron relaxes to a lower level and emits light in the form of a lower-energy photon, in the visible (red) light region.

Fluorescence = emission of light by a substance that has absorbed light or other electromagnetic radiation of a different wavelength.

Fluorescence microscope = uses an excitation light to irradiate a prepared specimen and then separates the much weaker radiating fluorescent light from the brighter excitation light.


Imaging in the Visible and Infrared Bands

Light microscopy, astronomy, remote sensing, industry, law enforcement

The LANDSAT satellite obtained and transmitted images of the Earth from space, for the purpose of monitoring environmental conditions on the planet. Weather observation and prediction are also major applications of multispectral imaging from satellites.


Table 1 - Thematic bands in NASA's LANDSAT satellite


Satellite images of the Washington, DC area in the spectral bands of Table 1


Examples of light microscopy:
- taxol (anticancer agent), magnified 250×
- cholesterol (40×)
- microprocessor (60×)
- nickel oxide thin film (600×)
- surface of an audio CD (1750×)
- organic superconductor (450×)


Automated visual inspection of manufactured goods:
(a) a circuit board controller; (b) packaged pills; (c) bottles; (d) air bubbles in a clear-plastic product; (e) cereal; (f) image of an intraocular implant


Imaging in the Microwave Band

The dominant application of imaging in the microwave band is radar:

• radar has the ability to collect data over virtually any region at any time, regardless of weather or ambient light conditions;

• some radar waves can penetrate clouds, and under certain conditions can penetrate vegetation, ice, and dry sand;

• sometimes radar is the only way to explore inaccessible regions of the Earth's surface.

An imaging radar works like a flash camera: it provides its own illumination (microwave pulses) to light an area on the ground and take a snapshot image. Instead of a camera lens, a radar uses an antenna and a digital device to record the images. In a radar image, one can see only the microwave energy that was reflected back toward the radar antenna.


Spaceborne radar image of mountains in southeast Tibet


Imaging in the Radio Band

Medicine, astronomy

MRI = Magnetic Resonance Imaging

This technique places the patient in a powerful magnet and passes short pulses of radio waves through his or her body. Each pulse causes a responding pulse of radio waves to be emitted by the patient's tissues. The location from which these signals originate and their strength are determined by a computer, which produces a 2D picture of a section of the patient.


MRI images of a human knee (left) and spine (right)


Images of the Crab Pulsar covering the electromagnetic spectrum

Gamma, X-ray, optical, infrared, radio


Other Imaging Modalities

Acoustic imaging, electron microscopy, synthetic (computer-generated) imaging

Imaging using sound: geological exploration, industry, medicine.

Mineral and oil exploration:

For image acquisition over land, one of the main approaches is to use a large truck and a large flat steel plate. The plate is pressed on the ground by the truck, and the truck is vibrated through a frequency spectrum up to 100 Hz. The strength and the speed of the returning sound waves are determined by the composition of the Earth below the surface. These are analyzed by a computer, and images are generated from the resulting analysis.


Biometry - iris


Biometry - fingerprint


Face detection and recognition


Gender identification


Image morphing


Fundamental Steps in DIP

methods whose input and output are images

methods whose inputs are images but whose outputs are attributes extracted from those images


Outputs are images:

• image acquisition
• image filtering and enhancement
• image restoration
• color image processing
• wavelets and multiresolution processing
• compression
• morphological processing


Outputs are attributes:

• morphological processing
• segmentation
• representation and description
• object recognition


Image acquisition - may involve preprocessing, such as scaling

Image enhancement

• manipulating an image so that the result is more suitable than the original for a specific operation
• enhancement is problem oriented
• there is no general 'theory' of image enhancement
• enhancement uses subjective methods for image improvement
• enhancement is based on human subjective preferences regarding what is a "good" enhancement result


Image restoration

• improving the appearance of an image
• restoration is objective - the techniques for restoration are based on mathematical or probabilistic models of image degradation

Color image processing

• fundamental concepts in color models
• basic color processing in a digital domain

Wavelets and multiresolution processing

• representing images in various degrees of resolution


Compression

• reducing the storage required to save an image, or the bandwidth required to transmit it

Morphological processing

• tools for extracting image components that are useful in the representation and description of shape
• a transition from processes that output images to processes that output image attributes


Segmentation

• partitioning an image into its constituent parts or objects
• autonomous segmentation is one of the most difficult tasks of DIP
• the more accurate the segmentation, the more likely recognition is to succeed

Representation and description (almost always follows segmentation)

• segmentation produces either the boundary of a region or all the points in the region itself
• converting the data produced by segmentation to a form suitable for computer processing


• boundary representation: the focus is on external shape characteristics, such as corners or inflections
• complete region: the focus is on internal properties, such as texture or skeletal shape
• description is also called feature extraction - extracting attributes that result in some quantitative information of interest, or that are basic for differentiating one class of objects from another

Object recognition

the process of assigning a label (e.g., "vehicle") to an object based on its descriptors

Knowledge database


Simplified diagram of a cross section of the human eye


Three membranes enclose the eye: the cornea and sclera outer cover, the choroid, and the retina.

The cornea is a tough, transparent tissue that covers the anterior surface of the eye. Continuous with the cornea, the sclera is an opaque membrane that encloses the remainder of the optic globe.

The choroid lies directly below the sclera. This membrane contains a network of blood vessels (the major source of nutrition of the eye). The choroid is pigmented and helps reduce the amount of light entering the eye. At its anterior extreme, the choroid is divided into the ciliary body and the iris. The iris contracts and expands to control the amount of light.


The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to the ciliary body (it is composed of 60-70% water, about 6% fat, and protein). The lens is slightly yellow and absorbs approximately 8% of the visible light (infrared and ultraviolet light are absorbed by proteins in the lens).

The innermost membrane is the retina. When the eye is properly focused, light from an object outside the eye is imaged on the retina. Vision is possible because of the distribution of discrete light receptors on the surface of the retina: cones and rods (6-7 million cones, 75-150 million rods).

Cones: located in the central part of the retina (the fovea); sensitive to color; provide vision of detail; each cone is linked to its own nerve. Cone vision = photopic, or bright-light, vision.

Fovea = the place where the image of the object of interest falls.


Rods: distributed over all the retina surface; several rods are connected to a single nerve; not specialized in detail vision; serve to give a general, overall picture of the field of view; not involved in color vision; sensitive to low levels of illumination.

Blind spot: region of the retina without receptors.


Image formation in the eye

Ordinary photographic camera: the lens has a fixed focal length; focusing at various distances is done by modifying the distance between the lens and the image plane (where the film or imaging chip is located).

Human eye: the distance between the lens and the retina (the imaging region) is fixed; the focal length needed to achieve proper focus is obtained by varying the shape of the lens (the fibers in the ciliary body accomplish this, flattening or thickening the lens for distant or near objects, respectively).

Distance between the lens and the retina along the visual axis: 17 mm.

Range of focal lengths: 14 mm to 17 mm.


Illustration of the Mach band effect: perceived intensity is not a simple function of actual intensity.


All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter


Optical illusions


(Figure: color-word illusion - the words BLUE, GREEN, YELLOW, RED, ORANGE, YELLOW, BURGUNDY, and WHITE printed in mismatched ink colors.)


Light

• achromatic or monochromatic light - light that is void of color; the only attribute of such light is its intensity, or amount. The term gray level is used to describe monochromatic intensity, because it ranges from black, to grays, and finally to white.

• chromatic light spans the electromagnetic energy spectrum from approximately 0.43 to 0.79 μm.

Quantities that describe the quality of a chromatic light source:

• radiance: the total amount of energy that flows from the light source; it is usually measured in watts (W).

• luminance: measured in lumens (lm); gives a measure of the amount of energy an observer perceives from a light source.


For example, light emitted from a source operating in the far infrared region of the spectrum could have significant energy (radiance), but an observer would hardly perceive it; its luminance would be almost zero.

• brightness: a subjective descriptor of light perception that is practically impossible to measure. It embodies the achromatic notion of intensity and is one of the key factors in describing color sensation.


The physical meaning of f(x, y), (x, y) ∈ D, is determined by the source of the image. For an image generated from a physical process, f(x, y) is proportional to the energy radiated by the physical source.

f(x, y) is characterized by two components:

i(x, y) = the illumination component - the amount of source illumination incident on the scene being viewed;

r(x, y) = the reflectance component - the amount of illumination reflected by the objects in the scene.

f(x, y) = i(x, y) r(x, y),   0 < i(x, y) < ∞,   0 < r(x, y) < 1


r(x, y) = 0 means total absorption; r(x, y) = 1, total reflectance. i(x, y) is determined by the illumination source; r(x, y) is determined by the characteristics of the imaged objects.

In practice,

L_min ≤ l = f(x, y) ≤ L_max,   where L_min = i_min r_min and L_max = i_max r_max

(typical indoor values, without additional illumination: L_min ≈ 10, L_max ≈ 1000). The interval [L_min, L_max] is called the gray (or intensity) scale. In practice it is shifted to [0, L − 1], where l = 0 is black and l = L − 1 is white, and intermediate values are shades of gray.



be written using the 4 nearest neighbors of point (x y)

Bilinear interpolation gives much better results than nearest neighbor interpolation with a

modest increase in computational effort

Bicubic interpolation ndash assign for the new (x y) location an intensity that involves the 16

nearest neighbors of the point 3 3

0 0

( ) i ji j

i jv x y c x y

The coefficients cij are obtained solving a 16x16 linear system

intensity levels of the 16 nearest neighbors of 3 3

0 0

( )i ji j

i jc x y x y

Digital Image Processing

Week 1

Generally bicubic interpolation does a better job of preserving fine detail than the

bilinear technique Bicubic interpolation is the standard used in commercial image editing

programs such as Adobe Photoshop and Corel Photopaint

Figure 2 (a) is the same as Fig 1 (d) which was obtained by reducing the resolution of

the 1250 dpi in Fig 1(a) to 72 dpi (the size shrank from 3692 times 2812 to 213 times 162) and

then zooming the reduced image back to its original size To generate Fig 1(d) nearest

neighbor interpolation was used (both for shrinking and zooming)

Figures 2(b) and (c) were generated using the same steps but using bilinear and bicubic

interpolation respectively Figures 2(d)+(e)+(f) were obtained by reducing the resolution

from 1250 dpi to 150 dpi (instead of 72 dpi)

Digital Image Processing

Week 1

Fig 2 ndash Interpolation examples for zooming and shrinking (nearest neighbor linear bicubic)

Digital Image Processing

Week 1

Neighbors of a Pixel

A pixel p at coordinates (x y) has 4 horizontal and vertical neighbors horizontal vertical( 1 ) ( 1 ) ( 1) ( 1)x y x y x y x y

This set of pixels called the 4-neighbors of p denoted by N4 (p)

The 4 diagonal neighbors of p have coordinates ( 1 1) ( 1 1) ( 1 1) ( 1 1)x y x y x y x y

and are denoted ND(p)

The horizontal vertical and diagonal neighbors are called the 8-neighbors of p denoted

N8 (p)

If (xy) is on the border of the image some of the neighbor locations in ND(p) and N8(p)

fall outside the image

Digital Image Processing

Week 1

Adjacency Connectivity Regions Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image V 01 (V=0 V=1)

- in a gray-scale image with 256 possible gray-levels V can be any subset of 0255

We consider 3 types of adjacency

(a) 4-adjacency two pixels p and q with values from V are 4-adjacent if 4( )q N p

(b) 8-adjacency two pixels p and q with values from V are 8-adjacent if 8( )q N p

(c) m-adjacency (mixed adjacency) two pixels p and q with values from V are

m-adjacent if

4( )q N p or

( )Dq N p and the set 4 4( ) ( )N p N q has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency It is introduced to eliminate the

ambiguities that often arise when 8-adjacency is used Consider the example

Digital Image Processing

Week 1

binary image

0 1 1 0 1 1 0 1 1

1 0 1 0 0 1 0 0 1 0

0 0 1 0 0 1 0 0 1

V

The three pixels at the top (first line) in the above example show multiple (ambiguous)

8-adjacency as indicated by the dashed lines This ambiguity is removed by using

m-adjacency

Digital Image Processing

Week 1

A (digital) path (or curve) from pixel p with coordinates (xy) to q with coordinates (st)

is a sequence of distinct pixels with coordinates

and are adjacent 0 0 1 1

1 1

( ) ( ) ( ) ( ) ( )( ) ( ) 12

n n

i i i i

x y x y x y x y s tx y x y i n

The length of the path is n If 0 0( ) ( )n nx y x y the path is closed

Depending on the type of adjacency considered the paths are 4- 8- or m-paths

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image R is a region of the image if R is a connected set

Two regions R1 and R2 are said to be adjacent if 1 2R R form a connected set Regions

that are not adjacent are said to be disjoint When referring to regions only 4- and

8-adjacency are considered

Digital Image Processing

Week 1

Suppose that an image contains K disjoint regions 1 kR k K none of which

touches the image border

the complement of 1

( )K

cu k u u

k

R R R R

We call all the points in Ru the foreground of the image and the points in ( )cuR the

background of the image

The boundary (border or contour) of a region R is the set of points that are adjacent to

points in the complement of R (R)c The border of an image is the set of pixels in the

region that have at least one background neighbor This definition is referred to as the

inner border to distinguish it from the notion of outer border which is the corresponding

border in the background

Digital Image Processing

Week 1

Distance measures

For pixels p q and z with coordinates (xy) (st) and (vw) respectively D is a distance

function or metric if

(a) D(p q) ge 0 D(p q) = 0 iff p=q

(b) D(p q) = D(q p)

(c) D(p z) le D(p q) + D(q z)

The Euclidean distance between p and q is defined as 1

2 2 2 22( ) ( ) ( ) ( ) ( )eD p q x s y t x s y t

The pixels q for which ( )eD p q r are the points contained in a disk of radius r

centered at (x y)

Digital Image Processing

Week 1

The D4 distance (also called city-block distance) between p and q is defined as

4( ) | | | |D p q x s y t

The pixels q for which 4( )D p q r form a diamond centered at (xy)

4

22 1 2

2 2 1 0 1 22 1 2

2

D

The pixels with D4 = 1 are the 4-neighbors of (x y)

The D8 distance (called the chessboard distance) between p and q is defined as

8( ) max| | | |D p q x s y t

The pixels q for which 8( )D p q r form a square centered at (x y)

Digital Image Processing

Week 1

8

2 2 2 2 22 1 1 1 2

2 2 1 0 1 22 1 1 1 22 2 2 2 2

D

The pixels with D8 = 1 are the 8-neighbors of (x y)

D4 and D8 distances are independent of any paths that might exist between p and q

because these distances involve only the coordinates of the point

Digital Image Processing

Week 1

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

11 12 11 12

21 22 21 22

a a b ba a b b

Array product

11 12 11 12 11 11 12 12

21 22 21 22 21 21 22 21

a a b b a b a ba a b b a b a b

Matrix product

11 12 11 12 11 11 12 21 11 12 12 21

21 22 21 22 21 11 22 21 21 12 22 22

a a b b a b a b a b a ba a b b a b a b a b a b

We assume array operations unless stated otherwise

Digital Image Processing

Week 1

Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether it is

linear or nonlinear

( ) ( )H f x y g x y

H is said to be a linear operator if

images1 2 1 2

1 2

( ) ( ) ( ) ( )

H a f x y b f x y a H f x y b H f x y

a b f f

Example of nonlinear operator

the maximum value of the pixels of image max ( )H f f x y f

1 2

0 2 6 5 1 1

2 3 4 7f f a b

Digital Image Processing

Week 1

1 2

0 2 6 5 6 3max max 1 ( 1) max 2

2 3 4 7 2 4a f b f

0 2 6 51 max ( 1) max 3 ( 1)7 4

2 3 4 7

Arithmetic Operations in Image Processing

Let g(xy) denote a corrupted image formed by the addition of noise ( ) ( ) ( )g x y f x y x y

f(xy) ndash the noiseless image η(xy) the noise uncorrelated and has 0 average value

For a random variable z with mean m E[(z-m)2] is the variance (E( ∙ ) is the expected

value) The covariance of two random variables z1 and z2 is defined as E[(z1-m1) (z2-m2)]

The two random variables are uncorrelated when their covariance is 0

Digital Image Processing

Week 1

Objective reduce noise by adding a set of noisy images ( )ig x y (technique frequently

used in image enhancement)

1

1( ) ( )K

ii

g x y g x yK

If the noise satisfies the properties stated above we have

2 2( ) ( )

1( ) ( ) g x y x yE g x y f x yK

( ( ))E g x y is the expected value of g and and 2 2( ) ( )g x y x y are the variances of

and g respectively The standard deviation (square root of the variance) at any point in

the average image is

( ) ( )1

g x y x yK

Digital Image Processing

Week 1

As K increases the variability (as measured by the variance or the standard deviation) of

the pixel values at each location (x y) decreases Because ( ) ( )E g x y f x y this

means that ( )g x y approaches f(x y) as the number of noisy images used in the

averaging process increases

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis Figure 226(a) shows an 8-bit image in which corruption

was simulated by adding to it Gaussian noise with zero mean and a standard deviation of

64 intensity levels Figures 226(b)-(f) show the result of averaging 5 10 20 50 and 100

images respectively

Digital Image Processing

Week 1

Fig 3 Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner) Results of averaging 5 10 20 50

100 noisy images

a b c d e f

Digital Image Processing

Week 1

A frequent application of image subtraction is in the enhancement of differences between

images

(a) (b) (c)

Fig 4 (a) Infrared image of Washington DC area (b) Image obtained from (a) by setting to zero the least

significant bit of each pixel (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a) The two images seem almost the same Figure 4(c) is the difference between

images (a) and (b) Black (0) values in Figure (c) indicate locations where there is no

difference between images (a) and (b)

Digital Image Processing

Week 1

Mask mode radiography ( ) ( ) ( )g x y f x y h x y

h(x y) the mask is an X-ray image of a region of a patientrsquos body captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source The procedure consists of injecting an X-ray contrast medium into the patientrsquos

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed

Digital Image Processing

Week 1

a b c d Fig 5 ndash Angiography ndash subtraction example (a) ndash mask image (b) ndash live image (c) ndash difference between (a) and (b) (d) - image (c) enhanced

Digital Image Processing

Week 1

An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form ( ) ( ) ( )g x y f x y h x y

f(x y) ndash the ldquoperfect imagerdquo h(x y) ndash the shading function

When the shading function is known

( )( )( )

g x yf x yh x y

h(x y) is unknown but we have access to the imaging system we can obtain an

approximation to the shading function by imaging a target of constant intensity When the

sensor is not available often the shading pattern can be estimated from the image

Digital Image Processing

Week 1

(a) (b) (c)

Fig 6 Shading correction (a) ndash Shaded image of a tungsten filament magnified 130 times (b) - shading pattern (c) corrected image

Digital Image Processing

Week 1

Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1rsquos (white) in the ROI and 0rsquos elsewhere There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

(a) (b) (c)

Fig 7 (a) ndash digital dental X-ray image (b) - ROI mask for teeth with fillings (c) product of (a) and (b)

Digital Image Processing

Week 1

In practice most images are displayed using 8 bits the image values are in the range [0255] TIFF JPEG images ndash conversion to this range is automatic The conversion depends on the system used Difference of two images can produce image with values in the range [-255255] Addition of two images ndash range [0510] Many software packages simply set the negative values to 0 and set to 255 all values greater than 255 A more appropriate procedure compute

min( )mf f f

0 ( 255)max( )

ms

m

ff K K K

f

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Digital Image Processing

Week 1

Neighborhood operations

Let S_xy denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y) based on the intensities of the points in S_xy. For example, if S_xy is a rectangular neighborhood of size m x n centered at (x, y), we can assign the new value of intensity by computing the average value of the pixels in S_xy:

g(x, y) = (1 / mn) Σ_{(r,c) ∈ S_xy} f(r, c)

The net effect is a local blurring of the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
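A sketch of this neighborhood averaging using SciPy's uniform filter, which is equivalent (up to border handling) to the m x n local mean above:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_average(f, m=5, n=5):
        # g(x, y) = mean of f over the m x n neighborhood S_xy centered at (x, y)
        return uniform_filter(f.astype(np.float64), size=(m, n))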

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules)

A geometric transformation consists of 2 basic operations:

1. a spatial transformation of coordinates;

2. an intensity interpolation that assigns intensity values to the spatially transformed pixels.

The coordinate system transformation: (x, y) = T[(v, w)], where

(v, w) are the pixel coordinates in the original image, and

(x, y) are the pixel coordinates in the transformed image.

For example, T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform

[x  y  1] = [v  w  1] T = [v  w  1] | t11  t12  0 |
                                    | t21  t22  0 |
                                    | t31  t32  1 |

or, written out,

x = t11 v + t21 w + t31,   y = t12 v + t22 w + t32        (AT)

This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3x3 matrix equal to the product of the scaling, rotation, and translation matrices from Table 1.

Table 1 - Affine transformations

The preceding transformations relocate pixels of an image to new locations. To complete the process, we have to assign intensity values to those locations; this task is done using intensity interpolation (nearest neighbor, bilinear, or bicubic interpolation).

In practice, we can use equation (AT) in two basic ways:

- forward mapping: scan the pixels of the input image and, for each (v, w), compute the spatial location (x, y) of the corresponding pixel in the output image using (AT) directly.

Problems:

- intensity assignment when 2 or more pixels of the input image are transformed to the same location in the output image;

- some output locations have no correspondent in the input image (no intensity assignment).

- inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image, (v, w) = T^{-1}[(x, y)], then interpolate among the nearest input pixels to determine the intensity of the output pixel.

Inverse mappings are more efficient to implement than forward mappings and are used in numerous commercial implementations of spatial transformations (MATLAB, for example).
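scipy.ndimage.affine_transform implements exactly this inverse-mapping scheme: for each output location it computes the corresponding input location and interpolates there. A sketch for rotation (the helper name and angle convention are illustrative assumptions):

    import numpy as np
    from scipy.ndimage import affine_transform

    def rotate(f, theta):
        # For each output pixel (x, y), affine_transform looks up the input
        # location T^{-1}[(x, y)] given by the inverse rotation matrix and
        # interpolates bilinearly (order=1). The rotation is about the array
        # origin (top-left corner).
        inv = np.array([[ np.cos(theta), np.sin(theta)],
                        [-np.sin(theta), np.cos(theta)]])
        return affine_transform(f, inv, order=1)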

Image registration - aligning two or more images of the same scene.

In image registration we have available the input and output images, but the specific transformation that produced the output image from the input is generally unknown. The problem is to estimate the transformation function and then use it to register the two images:

- it may be of interest to align (register) two or more images taken at approximately the same time but with different imaging systems (an MRI scanner and a PET scanner, for example);

- or to align images of a given location taken by the same instrument at different moments in time (satellite images).

The problem can be solved using tie points (also called control points), which are corresponding points whose locations are known precisely in the input and reference images.

How to select tie points:

- by selecting them interactively;

- by using algorithms that try to detect these points automatically;

- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors; these objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points.

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points both on the input image and on the reference image. A simple model based on a bilinear approximation is given by

x = c1 v + c2 w + c3 v w + c4

y = c5 v + c6 w + c7 v w + c8

where (v, w) and (x, y) are the coordinates of corresponding tie points (the 4 pairs of tie points yield an 8x8 linear system for the coefficients c_i).
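Although the 8 coefficients form an 8x8 system, it decouples into two 4x4 systems, one for x and one for y. A NumPy sketch (the array layout is an assumption: src holds the 4 (v, w) tie points, dst the matching (x, y) points):

    import numpy as np

    def fit_bilinear(src, dst):
        v, w = src[:, 0], src[:, 1]
        A = np.column_stack([v, w, v * w, np.ones(len(v))])  # rows: [v, w, vw, 1]
        cx = np.linalg.solve(A, dst[:, 0])   # c1, c2, c3, c4
        cy = np.linalg.solve(A, dst[:, 1])   # c5, c6, c7, c8
        return cx, cy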

When 4 tie points are insufficient to obtain a satisfactory registration, a frequently used approach is to select a larger number of tie points and to subdivide the image into rectangular subregions, each marked by a group of 4 tie points; the transformation model described above is then applied to each subregion. The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometric distortion.

(a) Reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c).

Probabilistic Methods

z_i, i = 0, 1, ..., L-1 = the values of all possible intensities in an M x N digital image

p(z_k) = the probability that the intensity level z_k occurs in the given image:

p(z_k) = n_k / (M N)

where n_k is the number of times that intensity z_k occurs in the image (M N is the total number of pixels in the image). Consequently,

Σ_{k=0}^{L-1} p(z_k) = 1

The mean (average) intensity of an image is given by

m = Σ_{k=0}^{L-1} z_k p(z_k)

The variance of the intensities is

σ² = Σ_{k=0}^{L-1} (z_k - m)² p(z_k)

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

μ_n(z) = Σ_{k=0}^{L-1} (z_k - m)^n p(z_k)

(note that μ_0(z) = 1, μ_1(z) = 0, and μ_2(z) = σ²)

μ_3(z) > 0: the intensities are biased to values higher than the mean;

μ_3(z) < 0: the intensities are biased to values lower than the mean;

μ_3(z) ≈ 0: the intensities are distributed approximately equally on both sides of the mean.
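These quantities follow directly from the normalized histogram. A sketch for an 8-bit image (L = 256; the function name is illustrative):

    import numpy as np

    def intensity_stats(img, L=256):
        p = np.bincount(img.ravel(), minlength=L) / img.size  # p(z_k) = n_k / MN
        z = np.arange(L)
        m = np.sum(z * p)                  # mean intensity
        var = np.sum((z - m) ** 2 * p)     # variance (contrast measure)
        mu3 = np.sum((z - m) ** 3 * p)     # third moment (sign gives the bias)
        return m, var, mu3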

Fig. 1 - (a) Low contrast; (b) medium contrast; (c) high contrast.

Figure 1(a): standard deviation 14.3 (variance = 204.5)

Figure 1(b): standard deviation 31.6 (variance = 998.6)

Figure 1(c): standard deviation 49.2 (variance = 2420.6)

Intensity Transformations and Spatial Filtering

g(x, y) = T[f(x, y)]

f(x, y) - the input image; g(x, y) - the output image; T - an operator on f, defined over a neighborhood of (x, y).

- the neighborhood of the point (x, y), denoted S_xy, usually is rectangular, centered on (x, y), and much smaller in size than the image

- spatial filtering: the operator T (the neighborhood together with the operation applied on it) is called a spatial filter (also: spatial mask, kernel, template, or window)

- when S_xy reduces to the single point {(x, y)}, T becomes an intensity (gray-level, or mapping) transformation function,

s = T(r)

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2, left: T produces an output image of higher contrast than the original, by darkening the intensity levels below k and brightening the levels above k; this technique is called contrast stretching.

Fig. 2 - Intensity transformation functions: left, contrast stretching; right, thresholding function.

Figure 2, right: T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L-1] is obtained using the function

s = T(r) = L - 1 - r

- the equivalent of a photographic negative

- a technique suited for enhancing white or gray detail embedded in dark regions of an image

Left: original image; right: negative image.

Log Transformations

s = T(r) = c log(1 + r), with c a positive constant and r ≥ 0

Some basic intensity transformation functions

This transformation maps a narrow range of low intensity values in the input into a wider range of output values. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values; the opposite is true for the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a): intensity values in the range 0 to 1.5 x 10^6.

Figure 4(b): log transformation of Figure 4(a) with c = 1; range 0 to 6.2.

Fig. 4 - (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.

Power-Law (Gamma) Transformations

s = T(r) = c r^γ, with c and γ positive constants (sometimes written as s = c (r + ε)^γ, to account for an offset when the input is 0)

Plots of the gamma transformation for different values of γ (c = 1)

Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. Curves with γ > 1 have the opposite effect of those generated with γ < 1; c = γ = 1 gives the identity transformation.

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
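A sketch of the three point transforms discussed so far (negative, log, gamma) for 8-bit images; the rescaling conventions at the end of each function are assumptions made for display purposes:

    import numpy as np

    def negative(img, L=256):
        return (L - 1 - img).astype(np.uint8)           # s = L - 1 - r

    def log_transform(img, c=1.0):
        s = c * np.log1p(img.astype(np.float64))        # s = c log(1 + r)
        return (255 * s / s.max()).astype(np.uint8)     # rescale to [0, 255]

    def gamma_transform(img, gamma, c=1.0):
        r = img.astype(np.float64) / 255.0              # normalize to [0, 1]
        s = c * r ** gamma                              # s = c r^gamma
        return (255 * np.clip(s, 0, 1)).astype(np.uint8)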

(a) Aerial image; (b)-(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively.

Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image, so that it spans the full intensity range of the recording tool or display device

Fig. 5 - (a) Piecewise-linear transformation function; (b) low-contrast 8-bit image; (c) result of contrast stretching; (d) result of thresholding.

The transformation function is defined by the two control points (r1, s1) and (r2, s2):

T(r) = (s1 / r1) r,                                   for r in [0, r1]

T(r) = s1 + [(s2 - s1) / (r2 - r1)] (r - r1),         for r in (r1, r2]

T(r) = s2 + [(L - 1 - s2) / (L - 1 - r2)] (r - r2),   for r in (r2, L-1]

r1 = s1 and r2 = s2: identity transformation (no change);

r1 = r2, s1 = 0 and s2 = L - 1: thresholding function.

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c): contrast stretching, obtained by setting r1 = r_min, s1 = 0, r2 = r_max, s2 = L - 1, where r_min and r_max denote the minimum and maximum gray levels in the image, respectively. Thus, the transformation function stretched the levels linearly from their original range to the full range [0, L-1].

Figure 5(d): the thresholding function was used, with r1 = r2 = m, s1 = 0, s2 = L - 1, where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.
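A sketch of the Figure 5(c) setting (r1 = r_min, s1 = 0, r2 = r_max, s2 = L - 1), which reduces to a linear min-max stretch:

    import numpy as np

    def contrast_stretch(img, L=256):
        r = img.astype(np.float64)
        rmin, rmax = r.min(), r.max()
        return ((L - 1) * (r - rmin) / (rmax - rmin)).astype(np.uint8)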

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches to intensity-level slicing (both are sketched in the code below):

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities;

2. brighten (or darken) the desired range of intensities, but leave all other intensities in the image unchanged.
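A sketch of both slicing approaches for a range [A, B] in an 8-bit image (function names are illustrative):

    import numpy as np

    def slice_binary(img, A, B):
        # approach 1: white inside [A, B], black elsewhere (binary output)
        return np.where((img >= A) & (img <= B), 255, 0).astype(np.uint8)

    def slice_preserve(img, A, B, value=255):
        # approach 2: set [A, B] to `value`, leave other intensities unchanged
        out = img.copy()
        out[(img >= A) & (img <= B)] = value
        return out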

Figure 6 (left): aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the intensity scale. This type of enhancement produces a binary image, which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example).

In Figure 6 (right) the second technique was used: a band of intensities in the mid-gray range around the mean intensity was set to black, while the other intensities remain unchanged.

(The corresponding transformation functions: one highlights the intensity range [A, B] and reduces all other intensities to a lower level; the other highlights the range [A, B] and preserves all other intensities.)

Fig. 6 - Aortic angiogram and intensity-sliced versions.

Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2. This technique highlights the contribution made to the whole image appearance by each of the bits: an 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 holds the lowest-order bit, plane 8 the highest-order bit).

The binary image for the 8th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all intensities between 0 and 127 to 0, and all levels between 128 and 255 to 1. Bit-plane slicing is useful for analyzing the relative importance of each bit in the image, which helps in determining the proper number of bits to use when quantizing the image; the technique is also useful for image compression.
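A sketch of bit-plane extraction; plane 8 coincides with thresholding at 128, as described above:

    import numpy as np

    def bit_plane(img, k):
        # plane k of an 8-bit image (k = 1: lowest-order bit, k = 8: highest)
        return ((img >> (k - 1)) & 1).astype(np.uint8)

    # e.g. bit_plane(img, 8) equals (img >= 128).astype(np.uint8)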


Gamma-Ray Imaging

Nuclear medicine, astronomical observations.

Nuclear medicine: the approach is to inject a patient with a radioactive isotope that emits gamma rays as it decays. Images are produced from the emissions collected by gamma-ray detectors. Images of this sort are used to locate sites of bone pathology (infections, tumors).

PET (positron emission tomography): the patient is given a radioactive isotope that emits positrons as it decays.

Examples of gamma-ray imaging: bone scan; PET image.

X-ray imaging

Medical diagnostics, industry, astronomy.

An X-ray tube is a vacuum tube with a cathode and an anode. The cathode is heated, causing free electrons to be released. The electrons flow at high speed to the positively charged anode. When the electrons strike a nucleus, energy is released in the form of X-ray radiation. The energy (penetrating power) of the X-rays is controlled by a voltage applied across the anode and by a current applied to the filament in the cathode.

The intensity of the X-rays is modified by absorption as they pass through the patient, and the resulting energy falling onto the film develops it, much in the same way that light develops photographic film.

Angiography = contrast-enhancement radiography

Angiograms = images of blood vessels

A catheter is inserted into an artery or vein in the groin. The catheter is threaded into the blood vessel and guided to the area to be studied; when it reaches that area, an X-ray contrast medium is injected through the catheter. This enhances the contrast of the blood vessels and enables the radiologist to see any irregularities or blockages.

X-rays are used in CAT (computerized axial tomography).

X-rays are also used in industrial processes (e.g., examining circuit boards for flaws in manufacturing). Industrial CAT scans are useful when the parts can be penetrated by X-rays.

Examples of X-ray imaging: chest X-ray; aortic angiogram; head CT; the Cygnus Loop; circuit boards.

Imaging in the Ultraviolet Band

Lithography, industrial inspection, microscopy, biological imaging, astronomical observations.

Ultraviolet light is used in fluorescence microscopy. Ultraviolet light is not visible to the human eye, but when a photon of ultraviolet radiation collides with an electron in an atom of a fluorescent material, it elevates the electron to a higher energy level; the electron then relaxes to a lower level and emits light in the form of a lower-energy photon, in the visible (red) light region.

Fluorescence = emission of light by a substance that has absorbed light or other electromagnetic radiation of a different wavelength.

Fluorescence microscope = uses an excitation light to irradiate a prepared specimen, and then separates the much weaker radiating fluorescent light from the brighter excitation light.

Imaging in the Visible and Infrared Bands

Light microscopy, astronomy, remote sensing, industry, law enforcement.

The LANDSAT satellite obtained and transmitted images of the Earth from space, for purposes of monitoring environmental conditions on the planet.

Weather observation and prediction are major applications of multispectral imaging from satellites.

Table 1 - Thematic bands in NASA's LANDSAT satellite

Satellite images of the Washington, DC area in the spectral bands of Table 1

Examples of light microscopy: taxol (anticancer agent), magnified 250X; cholesterol (40X); microprocessor (60X); nickel oxide thin film (600X); surface of an audio CD (1750X); organic superconductor (450X).

Automated visual inspection of manufactured goods:

(a) a circuit board controller; (b) packaged pills; (c) bottles; (d) air bubbles in a clear-plastic product; (e) cereal; (f) image of an intraocular implant.

Imaging in the Microwave Band

The dominant application of imaging in the microwave band is radar:

• radar has the ability to collect data over virtually any region at any time, regardless of weather or ambient light conditions;

• some radar waves can penetrate clouds and, under certain conditions, can also penetrate vegetation, ice, and dry sand;

• sometimes radar is the only way to explore inaccessible regions of the Earth's surface.

An imaging radar works like a flash camera: it provides its own illumination (microwave pulses) to light an area on the ground and take a snapshot image. Instead of a camera lens, a radar uses an antenna and a digital device to record the images. In a radar image, one can see only the microwave energy that was reflected back toward the radar antenna.

Spaceborne radar image of mountains in southeast Tibet.

Imaging in the Radio Band

Medicine, astronomy.

MRI = Magnetic Resonance Imaging

This technique places the patient in a powerful magnet and passes short pulses of radio waves through his or her body. Each pulse causes a responding pulse of radio waves to be emitted by the patient's tissues. The location from which these signals originate and their strength are determined by a computer, which produces a 2D picture of a section of the patient.

MRI images of a human knee (left) and spine (right)


Images of the Crab Pulsar covering the electromagnetic spectrum: gamma, X-ray, optical, infrared, radio.

Other Imaging Modalities

Acoustic imaging, electron microscopy, synthetic (computer-generated) imaging.

Imaging using sound: geological exploration, industry, medicine.

Mineral and oil exploration: for image acquisition over land, one of the main approaches is to use a large truck and a large flat steel plate. The plate is pressed on the ground by the truck, and the truck is vibrated through a frequency spectrum up to 100 Hz. The strength and the speed of the returning sound waves are determined by the composition of the Earth below the surface; these are analyzed by a computer, and images are generated from the resulting analysis.

Biometry - iris


Biometry - fingerprint


Face detection and recognition


Gender identification


Image morphing


Fundamental Steps in DIP

- methods whose input and output are images

- methods whose inputs are images but whose outputs are attributes extracted from those images

Outputs are images:

• image acquisition

• image filtering and enhancement

• image restoration

• color image processing

• wavelets and multiresolution processing

• compression

• morphological processing

Outputs are attributes:

• morphological processing

• segmentation

• representation and description

• object recognition

Image acquisition - may involve preprocessing, such as scaling.

Image enhancement:

• manipulating an image so that the result is more suitable than the original for a specific operation

• enhancement is problem oriented

• there is no general 'theory' of image enhancement

• enhancement uses subjective methods for image improvement

• enhancement is based on human subjective preferences regarding what is a "good" enhancement result

Image restoration:

• improving the appearance of an image

• restoration is objective: the techniques for restoration are based on mathematical or probabilistic models of image degradation

Color image processing:

• fundamental concepts of color models

• basic color processing in a digital domain

Wavelets and multiresolution processing:

• representing images in various degrees of resolution

Compression:

• reducing the storage required to save an image, or the bandwidth required to transmit it

Morphological processing:

• tools for extracting image components that are useful in the representation and description of shape

• a transition from processes that output images to processes that output image attributes

Segmentation:

• partitioning an image into its constituent parts or objects

• autonomous segmentation is one of the most difficult tasks of DIP

• the more accurate the segmentation, the more likely recognition is to succeed

Representation and description (almost always follows segmentation):

• segmentation produces either the boundary of a region or all the points in the region itself

• converting the data produced by segmentation to a form suitable for computer processing

• boundary representation: the focus is on external shape characteristics, such as corners or inflections

• complete region representation: the focus is on internal properties, such as texture or skeletal shape

• description, also called feature extraction: extracting attributes that result in some quantitative information of interest, or that are basic for differentiating one class of objects from another

Object recognition:

the process of assigning a label (e.g., "vehicle") to an object based on its descriptors

Knowledge database

Simplified diagram of a cross section of the human eye

Three membranes enclose the eye: the cornea and sclera outer cover, the choroid, and the retina.

The cornea is a tough, transparent tissue that covers the anterior surface of the eye. Continuous with the cornea, the sclera is an opaque membrane that encloses the remainder of the optic globe.

The choroid lies directly below the sclera. This membrane contains a network of blood vessels (the major source of nutrition for the eye). The choroid is pigmented and helps reduce the amount of light entering the eye. At its anterior extreme, the choroid is divided into the ciliary body and the iris; the iris contracts and expands to control the amount of light.

The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to the ciliary body (60-70% water, 6% fat, protein). The lens is colored slightly yellow; it absorbs approximately 8% of the visible light (infrared and ultraviolet light are absorbed by proteins in the lens).

The innermost membrane is the retina. When the eye is properly focused, light from an object outside the eye is imaged on the retina. Vision is possible because of the distribution of discrete light receptors on the surface of the retina: cones and rods (6-7 million cones, 75-150 million rods).

Cones: located in the central part of the retina (the fovea); sensitive to colors; provide vision of detail; each cone is linked to its own nerve; cone vision = photopic, or bright-light, vision.

Fovea = the place where the image of the object of interest falls.

Rods: distributed over the whole retina surface; several rods are connected to a single nerve; not specialized in detail vision; serve to give a general, overall picture of the field of view; not involved in color vision; sensitive to low levels of illumination.

Blind spot: a region without receptors.

Image formation in the eye

In an ordinary photographic camera, the lens has a fixed focal length; focusing at various distances is done by modifying the distance between the lens and the image plane (where the film or the imaging chip is located).

In the human eye, the distance between the lens and the retina (the imaging region) is fixed; the focal length needed to achieve proper focus is obtained by varying the shape of the lens (the fibers in the ciliary body accomplish this, flattening or thickening the lens for distant or near objects, respectively).

- distance between the lens and the retina along the visual axis: 17 mm

- range of focal lengths: approximately 14 mm to 17 mm

Illustration of the Mach band effect: perceived intensity is not a simple function of actual intensity.

All the inner squares have the same intensity, but they appear progressively darker as the background becomes lighter.

Optical illusions


(Color-word slide, Romanian with English equivalents: ALBASTRU = BLUE, VERDE = GREEN, GALBEN = YELLOW, ROSU = RED, PORTOCALIU = ORANGE, GALBEN = YELLOW, VISINIU = BURGUNDY, ALB = WHITE.)

Light

• achromatic, or monochromatic, light: light that is void of color; the only attribute of such light is its intensity, or amount. The term gray level is used to describe monochromatic intensity, because it ranges from black, through grays, to white.

• chromatic light spans the electromagnetic energy spectrum from approximately 0.43 to 0.79 μm.

Quantities that describe the quality of a chromatic light source:

• radiance: the total amount of energy that flows from the light source, usually measured in watts (W);

• luminance: measured in lumens (lm); gives a measure of the amount of energy an observer perceives from a light source.

For example, light emitted from a source operating in the far infrared region of the spectrum could have significant energy (radiance), but an observer would hardly perceive it: its luminance would be almost zero.

• brightness: a subjective descriptor of light perception that is practically impossible to measure; it embodies the achromatic notion of intensity and is one of the key factors in describing color sensation.

The physical meaning of the values f(x, y) is determined by the source of the image.

For an image generated from a physical process, f(x, y) is proportional to the energy radiated by the physical source, and f(x, y) is characterized by two components:

i(x, y) = the illumination component: the amount of source illumination incident on the scene being viewed;

r(x, y) = the reflectance component: the amount of illumination reflected by the objects in the scene;

f(x, y) = i(x, y) r(x, y), with 0 < i(x, y) < ∞ and 0 < r(x, y) < 1.

r(x, y) = 0 means total absorption, and r(x, y) = 1 total reflectance.

i(x, y) is determined by the illumination source; r(x, y) is determined by the characteristics of the imaged objects.

The intensity of a monochrome image at a point (x0, y0), l = f(x0, y0), satisfies L_min ≤ l ≤ L_max, where L_min = i_min r_min and L_max = i_max r_max. The interval [L_min, L_max] is called the gray (or intensity) scale.

In practice (typical indoor values, without additional illumination): L_min ≈ 10, L_max ≈ 1000.

It is common to shift the interval [L_min, L_max] to [0, L-1], where l = 0 is considered black and l = L-1 is considered white on the gray scale.

Image Sampling and Quantization

The output of most sensors is a continuous voltage waveform related to the sensed scene. Converting a continuous image f to digital form requires:

- digitizing the coordinates (x, y), which is called sampling;

- digitizing the amplitude f(x, y), which is called quantization.

Continuous image projected onto a sensor array; result of image sampling and quantization.

Representing Digital Images

(x, y), x = 0, 1, ..., M-1, y = 0, 1, ..., N-1 - spatial variables, or spatial coordinates. The image is represented as an M x N matrix:

f(x, y) = | f(0, 0)    f(0, 1)    ...  f(0, N-1)   |
          | f(1, 0)    f(1, 1)    ...  f(1, N-1)   |
          | ...                                    |
          | f(M-1, 0)  f(M-1, 1)  ...  f(M-1, N-1) |

Each element of the matrix is called an image element, picture element, or pixel. In standard matrix notation,

A = [a_ij],  a_ij = f(x = i, y = j) = f(i, j),  0 ≤ i ≤ M-1, 0 ≤ j ≤ N-1

f(0, 0) is the upper left corner of the image.

M, N > 0; the number of intensity levels is typically an integer power of 2, L = 2^k, and a_ij ∈ [0, L-1].

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system. The upper limit is determined by saturation, the lower limit by noise.

Number of bits required to store a digitized image:

b = M x N x k  (for M = N, b = N² k)

When an image can have 2^k intensity levels, the image is referred to as a k-bit image; an image with 256 discrete intensity values is an 8-bit image.
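For example, a 1024 x 1024 image with k = 8 bits per pixel requires b = 1024 x 1024 x 8 = 8,388,608 bits, that is, 1,048,576 bytes (1 MB) of storage.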

Spatial and Intensity Resolution

Spatial resolution - the smallest discernible detail in an image.

Measures: line pairs per unit distance; dots (pixels) per unit distance.

Image resolution = the largest number of discernible line pairs per unit distance (e.g., 100 line pairs per mm).

Dots per unit distance is a measure commonly used in printing and publishing; in the US, this measure is expressed in dots per inch (dpi). (Newspapers are printed with 75 dpi, glossy brochures at 175 dpi.)

Intensity resolution - the smallest discernible change in intensity level.

The number of intensity levels (L) is determined by hardware considerations: L = 2^k, most commonly with k = 8. In practice, intensity resolution is given by k, the number of bits used to quantize intensity.

Fig. 1 - Reducing spatial resolution: 1250 dpi (upper left), 300 dpi (upper right), 150 dpi (lower left), 72 dpi (lower right).

Reducing the number of gray levels: 256, 128, 64, 32.

Reducing the number of gray levels: 16, 8, 4, 2.

Image Interpolation

- used in zooming, shrinking, rotating, and geometric corrections

Shrinking and zooming (image resizing) are image resampling methods. Interpolation is the process of using known data to estimate values at unknown locations.

Suppose we have an image of size 500 x 500 pixels that has to be enlarged 1.5 times, to 750 x 750 pixels. One way to do this is to create an imaginary 750 x 750 grid with the same pixel spacing as the original, and then shrink it so that it fits exactly over the original image; the pixel spacing in the shrunken 750 x 750 grid will be less than in the original image.

Problem: assignment of intensity levels to the points of the new 750 x 750 grid.

Nearest neighbor interpolation: assign to every point of the new grid (750 x 750) the intensity of the closest pixel (nearest neighbor) in the original grid (500 x 500). This technique has the tendency to produce undesirable effects, like severe distortion of straight edges.

Bilinear interpolation: assign to the new (x, y) location the intensity

v(x, y) = a x + b y + c x y + d

where the four coefficients are determined from the 4 equations in 4 unknowns that can be written using the 4 nearest neighbors of point (x, y). Bilinear interpolation gives much better results than nearest neighbor interpolation, with a modest increase in computational effort.

Bicubic interpolation: assign to the new (x, y) location an intensity that involves the 16 nearest neighbors of the point:

v(x, y) = Σ_{i=0}^{3} Σ_{j=0}^{3} c_ij x^i y^j

The 16 coefficients c_ij are obtained by solving a 16 x 16 linear system written using the intensity levels of the 16 nearest neighbors of (x, y).

Generally, bicubic interpolation does a better job of preserving fine detail than the bilinear technique. Bicubic interpolation is the standard used in commercial image editing programs, such as Adobe Photoshop and Corel Photopaint.
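A compact NumPy sketch of bilinear resizing (a simplified illustration, without the edge conventions of production resamplers):

    import numpy as np

    def resize_bilinear(f, out_h, out_w):
        in_h, in_w = f.shape
        y = np.linspace(0, in_h - 1, out_h)[:, None]  # output row -> input row
        x = np.linspace(0, in_w - 1, out_w)[None, :]  # output col -> input col
        y0, x0 = np.floor(y).astype(int), np.floor(x).astype(int)
        y1 = np.minimum(y0 + 1, in_h - 1)
        x1 = np.minimum(x0 + 1, in_w - 1)
        dy, dx = y - y0, x - x0
        top = f[y0, x0] * (1 - dx) + f[y0, x1] * dx   # interpolate along x
        bot = f[y1, x0] * (1 - dx) + f[y1, x1] * dx
        return top * (1 - dy) + bot * dy              # then along y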

Figure 2(a) is the same as Fig. 1(d), which was obtained by reducing the resolution of the 1250 dpi image in Fig. 1(a) to 72 dpi (the size shrank from 3692 x 2812 to 213 x 162 pixels) and then zooming the reduced image back to its original size; nearest neighbor interpolation was used both for shrinking and for zooming.

Figures 2(b) and (c) were generated using the same steps, but with bilinear and bicubic interpolation, respectively. Figures 2(d), (e), and (f) were obtained by reducing the resolution from 1250 dpi to 150 dpi (instead of 72 dpi).

Fig. 2 - Interpolation examples for zooming and shrinking (nearest neighbor, bilinear, bicubic).

Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors:

(x+1, y), (x-1, y), (x, y+1), (x, y-1)

This set of pixels, called the 4-neighbors of p, is denoted by N_4(p).

The 4 diagonal neighbors of p have coordinates

(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)

and are denoted N_D(p).

The horizontal, vertical, and diagonal neighbors together are called the 8-neighbors of p, denoted N_8(p).

If (x, y) is on the border of the image, some of the neighbor locations in N_D(p) and N_8(p) fall outside the image.

Adjacency, Connectivity, Regions, Boundaries

Denote by V the set of intensity values used to define adjacency:

- in a binary image, V ⊆ {0, 1} (V = {0} or V = {1});

- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, ..., 255}.

We consider 3 types of adjacency:

(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if q ∈ N_4(p);

(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if q ∈ N_8(p);

(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if

- q ∈ N_4(p), or

- q ∈ N_D(p) and the set N_4(p) ∩ N_4(q) has no pixels whose values are from V.

Mixed adjacency is a modification of 8-adjacency; it is introduced to eliminate the ambiguities that often arise when 8-adjacency is used. Consider the example:

V = {1}; binary image (the same arrangement is shown three times in the figure: the pixel values, the 8-adjacency paths, and the m-adjacency paths):

0 1 1
0 1 0
0 0 1

The three pixels at the top (first line) of the example show multiple (ambiguous) 8-adjacency, as indicated by the dashed lines in the figure. This ambiguity is removed by using m-adjacency.

A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

(x_0, y_0), (x_1, y_1), ..., (x_n, y_n)

where (x_0, y_0) = (x, y), (x_n, y_n) = (s, t), and the pixels (x_i, y_i) and (x_{i-1}, y_{i-1}) are adjacent for i = 1, 2, ..., n.

The length of the path is n. If (x_0, y_0) = (x_n, y_n), the path is closed. Depending on the type of adjacency considered, the paths are 4-, 8-, or m-paths.

Let S denote a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting only of pixels from S. S is a connected set if there is a path in S between any 2 pixels of S.

Let R be a subset of pixels in an image. R is a region of the image if R is a connected set. Two regions R_1 and R_2 are said to be adjacent if R_1 ∪ R_2 forms a connected set; regions that are not adjacent are said to be disjoint. When referring to regions, only 4- and 8-adjacency are considered.

Suppose that an image contains K disjoint regions R_k, k = 1, ..., K, none of which touches the image border. Let R_u denote the union of all the K regions, and let (R_u)^c denote its complement:

R_u = ∪_{k=1}^{K} R_k

We call all the points of R_u the foreground of the image, and the points of (R_u)^c the background of the image.

The boundary (border, or contour) of a region R is the set of points of R that are adjacent to points in the complement (R)^c; that is, the border of a region is the set of pixels in the region that have at least one background neighbor. This definition is referred to as the inner border, to distinguish it from the notion of outer border, which is the corresponding border in the background.

Distance measures

For pixels p, q, and z, with coordinates (x, y), (s, t), and (v, w), respectively, D is a distance function, or metric, if:

(a) D(p, q) ≥ 0, and D(p, q) = 0 iff p = q;

(b) D(p, q) = D(q, p);

(c) D(p, z) ≤ D(p, q) + D(q, z).

The Euclidean distance between p and q is defined as

D_e(p, q) = [(x - s)² + (y - t)²]^{1/2}

The pixels q for which D_e(p, q) ≤ r are the points contained in a disk of radius r centered at (x, y).

The D_4 distance (also called city-block distance) between p and q is defined as

D_4(p, q) = |x - s| + |y - t|

The pixels q for which D_4(p, q) ≤ r form a diamond centered at (x, y); for example, the pixels with D_4 ≤ 2:

        2
    2   1   2
2   1   0   1   2
    2   1   2
        2

The pixels with D_4 = 1 are the 4-neighbors of (x, y).

The D_8 distance (also called chessboard distance) between p and q is defined as

D_8(p, q) = max(|x - s|, |y - t|)

The pixels q for which D_8(p, q) ≤ r form a square centered at (x, y);

for example, the pixels with D_8 ≤ 2:

2   2   2   2   2
2   1   1   1   2
2   1   0   1   2
2   1   1   1   2
2   2   2   2   2

The pixels with D_8 = 1 are the 8-neighbors of (x, y).

The D_4 and D_8 distances are independent of any paths that might exist between p and q, because these distances involve only the coordinates of the points.
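The three metrics, as a direct sketch:

    import math

    def d_e(p, q):   # Euclidean distance
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def d_4(p, q):   # city-block distance
        return abs(p[0] - q[0]) + abs(p[1] - q[1])

    def d_8(p, q):   # chessboard distance
        return max(abs(p[0] - q[0]), abs(p[1] - q[1]))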

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis. Consider the 2 x 2 images

A = | a11  a12 |      B = | b11  b12 |
    | a21  a22 |          | b21  b22 |

Array product:

A .* B = | a11 b11   a12 b12 |
         | a21 b21   a22 b22 |

Matrix product:

A B = | a11 b11 + a12 b21   a11 b12 + a12 b22 |
      | a21 b11 + a22 b21   a21 b12 + a22 b22 |

We assume array operations, unless stated otherwise.
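In NumPy the distinction is the * operator (array product) versus @ (matrix product):

    import numpy as np

    a = np.array([[1, 2], [3, 4]])
    b = np.array([[5, 6], [7, 8]])

    print(a * b)  # array (pixel-by-pixel) product: [[ 5 12] [21 32]]
    print(a @ b)  # matrix product:                 [[19 22] [43 50]]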

Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether a method is linear or nonlinear. Consider an operator H that produces an output image g(x, y) from an input image f(x, y):

H[f(x, y)] = g(x, y)

H is said to be a linear operator if

H[a f_1(x, y) + b f_2(x, y)] = a H[f_1(x, y)] + b H[f_2(x, y)]

for any scalars a, b and any images f_1, f_2.

Example of a nonlinear operator: the maximum value of the pixels of an image, H[f] = max f(x, y). Take

f_1 = | 0  2 |    f_2 = | 6  5 |    a = 1,  b = -1
      | 2  3 |          | 4  7 |

Then

max{1 · f_1 + (-1) · f_2} = max | -6  -3 | = -2
                                | -2  -4 |

but

1 · max{f_1} + (-1) · max{f_2} = 3 - 7 = -4 ≠ -2

so the max operator is not linear.

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise η(x, y) to a noiseless image f(x, y):

g(x, y) = f(x, y) + η(x, y)

where the noise is uncorrelated between pixel locations and has zero average value.

For a random variable z with mean m, E[(z - m)²] is the variance (E(·) is the expected value). The covariance of two random variables z_1 and z_2 is defined as E[(z_1 - m_1)(z_2 - m_2)]; the two random variables are uncorrelated when their covariance is 0.

Objective: reduce the noise by averaging a set of K noisy images g_i(x, y) (a technique frequently used in image enhancement):

g̅(x, y) = (1/K) Σ_{i=1}^{K} g_i(x, y)

If the noise satisfies the properties stated above, then

E{g̅(x, y)} = f(x, y)  and  σ²_g̅(x, y) = (1/K) σ²_η(x, y)

where E{g̅(x, y)} is the expected value of g̅, and σ²_g̅(x, y) and σ²_η(x, y) are the variances of g̅ and η, respectively. The standard deviation (square root of the variance) at any point of the average image is

σ_g̅(x, y) = (1/√K) σ_η(x, y)

As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because E{g̅(x, y)} = f(x, y), g̅(x, y) approaches f(x, y) as the number of noisy images used in the averaging process increases.

An important application of image averaging is in the field of astronomy, where imaging under very low light levels frequently causes sensor noise to render single images virtually useless for analysis. Figure 3 (upper left) shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels; the remaining panels show the results of averaging 5, 10, 20, 50, and 100 such noisy images, respectively.
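A quick numerical check of the 1/√K noise reduction on synthetic data (the image size and noise level are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(0)
    f = rng.uniform(0, 255, size=(64, 64))        # "noiseless" image
    K = 100
    g = f + rng.normal(0, 64, size=(K, 64, 64))   # K noisy copies, sigma = 64
    g_bar = g.mean(axis=0)                        # averaged image
    print(np.std(g_bar - f))                      # approx. 64 / sqrt(100) = 6.4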

Fig. 3 - Image of the galaxy pair NGC 3314 corrupted by additive Gaussian noise (upper left); results of averaging 5, 10, 20, 50, and 100 noisy images.

A frequent application of image subtraction is the enhancement of differences between images.

Fig. 4 - (a) Infrared image of the Washington, DC area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images.

Figure 4(b) was obtained by setting to zero the least significant bit of every pixel in Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between images (a) and (b); black (0) values in Figure 4(c) indicate locations where there is no difference between images (a) and (b).

Mask mode radiography ( ) ( ) ( )g x y f x y h x y

h(x y) the mask is an X-ray image of a region of a patientrsquos body captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source The procedure consists of injecting an X-ray contrast medium into the patientrsquos

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed

Digital Image Processing

Week 1

a b c d Fig 5 ndash Angiography ndash subtraction example (a) ndash mask image (b) ndash live image (c) ndash difference between (a) and (b) (d) - image (c) enhanced

Digital Image Processing

Week 1

An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form ( ) ( ) ( )g x y f x y h x y

f(x y) ndash the ldquoperfect imagerdquo h(x y) ndash the shading function

When the shading function is known

( )( )( )

g x yf x yh x y

h(x y) is unknown but we have access to the imaging system we can obtain an

approximation to the shading function by imaging a target of constant intensity When the

sensor is not available often the shading pattern can be estimated from the image

Digital Image Processing

Week 1

(a) (b) (c)

Fig 6 Shading correction (a) ndash Shaded image of a tungsten filament magnified 130 times (b) - shading pattern (c) corrected image

Digital Image Processing

Week 1

Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1rsquos (white) in the ROI and 0rsquos elsewhere There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

(a) (b) (c)

Fig 7 (a) ndash digital dental X-ray image (b) - ROI mask for teeth with fillings (c) product of (a) and (b)

Digital Image Processing

Week 1

In practice most images are displayed using 8 bits the image values are in the range [0255] TIFF JPEG images ndash conversion to this range is automatic The conversion depends on the system used Difference of two images can produce image with values in the range [-255255] Addition of two images ndash range [0510] Many software packages simply set the negative values to 0 and set to 255 all values greater than 255 A more appropriate procedure compute

min( )mf f f

0 ( 255)max( )

ms

m

ff K K K

f

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Digital Image Processing

Week 1

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

( )

1( ) ( )xyr c S

g x y f r cm n

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation ( ) [( )]x y T v w

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image

Digital Image Processing

Week 1

[( )] ( )2 2v wT v w shrinks the original image half its size in both spatial directions

Affine transform

11 1211 21 31

21 2212 22 33

31 32

0[ 1] [ 1] [ 1] 0

1

t tx t v t w t

x y v w T v w t ty t v t w t

t t

(AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

Digital Image Processing

Week 1

inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image (v w) 1( ) ( )v w T x y

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)



Examples of gamma-ray imaging

Bone scan; PET image


X-ray imaging

Medical diagnostics, industry, astronomy

An X-ray tube is a vacuum tube with a cathode and an anode. The cathode is heated, causing free electrons to be released. The electrons flow at high speed to the positively charged anode. When the electrons strike a nucleus, energy is released in the form of X-ray radiation. The energy (penetrating power) of the X-rays is controlled by a voltage applied across the anode, and by a current applied to the filament in the cathode.

The intensity of the X-rays is modified by absorption as they pass through the patient, and the resulting energy falling on the film develops it, much in the same way that light develops photographic film.


Angiography = contrast-enhancement radiography

Angiograms = images of blood vessels

A catheter is inserted into an artery or vein in the groin. The catheter is threaded into the blood vessel and guided to the area to be studied. When it reaches that area, an X-ray contrast medium is injected through the catheter. This enhances the contrast of the blood vessels and enables the radiologist to see any irregularities or blockages.

X-rays are used in CAT (computerized axial tomography)

X-rays are also used in industrial processes (e.g., examining circuit boards for flaws in manufacturing). Industrial CAT scans are useful when the parts can be penetrated by X-rays.


Examples of X-ray imaging

Chest X-ray; aortic angiogram

Head CT; Cygnus Loop; circuit boards


Imaging in the Ultraviolet Band

Lithography, industrial inspection, microscopy, biological imaging, astronomical observations

Ultraviolet light is used in fluorescence microscopy

Ultraviolet light is not visible to the human eye, but when a photon of ultraviolet radiation collides with an electron in an atom of a fluorescent material, it elevates the electron to a higher energy level. After that, the electron relaxes to a lower level and emits light in the form of a lower-energy photon, in the visible (red) light region.

Fluorescence = emission of light by a substance that has absorbed light or other electromagnetic radiation of a different wavelength.

Fluorescence microscope = uses an excitation light to irradiate a prepared specimen, and then separates the much weaker radiating fluorescent light from the brighter excitation light.


Imaging in the Visible and Infrared Bands

Light microscopy, astronomy, remote sensing, industry, law enforcement

LANDSAT satellite - obtains and transmits images of the Earth from space, for the purpose of monitoring environmental conditions on the planet.

Weather observation and prediction are other major applications of multispectral imaging from satellites.


Table 1 ndash Thematic bands in NASArsquos LANDSAT satellite


Satellite images of Washington DC area in spectral bands of the Table 1


Examples of light microscopy

Taxol (anticancer agent), magnified 250X

Cholesterol (40X)

Microprocessor (60X)

Nickel oxide thin film (600X)

Surface of audio CD (1750X)

Organic superconductor (450X)


Automated visual inspection of manufactured goods

(a) circuit board controller; (b) packaged pills; (c) bottles; (d) air bubbles in a clear-plastic product; (e) cereal; (f) image of an intraocular implant


Imaging in the Microwave Band

The dominant application of imaging in the microwave band: radar.

• radar has the ability to collect data over virtually any region at any time, regardless of weather or ambient light conditions

• some radar waves can penetrate clouds; under certain conditions they can also penetrate vegetation, ice, and dry sand

• sometimes radar is the only way to explore inaccessible regions of the Earth's surface

An imaging radar works like a flash camera: it provides its own illumination (microwave pulses) to light an area on the ground and take a snapshot image. Instead of a camera lens, a radar uses an antenna and a digital device to record the images. In a radar image, one can see only the microwave energy that was reflected back toward the radar antenna.


Spaceborne radar image of mountains in southeast Tibet


Imaging in the Radio Band

Medicine, astronomy

MRI = Magnetic Resonance Imaging

This technique places the patient in a powerful magnet and passes short pulses of radio waves through his or her body. Each pulse causes a responding pulse of radio waves to be emitted by the patient's tissues. The location from which these signals originate and their strength are determined by a computer, which produces a 2D picture of a section of the patient.


MRI images of a human knee (left) and spine (right)


Images of the Crab Pulsar covering the electromagnetic spectrum

Gamma X-ray Optical Infrared Radio


Other Imaging Modalities

Acoustic imaging, electron microscopy, synthetic (computer-generated) imaging.

Imaging using sound: geological exploration, industry, medicine.

Mineral and oil exploration:

For image acquisition over land, one of the main approaches is to use a large truck and a large flat steel plate. The plate is pressed on the ground by the truck, and the truck is vibrated through a frequency spectrum up to 100 Hz. The strength and speed of the returning sound waves are determined by the composition of the Earth below the surface; these are analyzed by computer, and images are generated from the resulting analysis.


Biometry - iris


Biometry - fingerprint


Face detection and recognition


Gender identification


Image morphing


Fundamental Steps in DIP

methods whose input and output are images

methods whose inputs are images but whose outputs are attributes extracted from those images


Outputs are images

• image acquisition

• image filtering and enhancement

• image restoration

• color image processing

• wavelets and multiresolution processing

• compression

• morphological processing


Outputs are attributes

• morphological processing

• segmentation

• representation and description

• object recognition


Image acquisition - may involve preprocessing such as scaling

Image enhancement

• manipulating an image so that the result is more suitable than the original for a specific operation

• enhancement is problem oriented

• there is no general 'theory' of image enhancement

• enhancement uses subjective methods for image improvement

• enhancement is based on human subjective preferences regarding what is a "good" enhancement result


Image restoration

• improving the appearance of an image

• restoration is objective - the techniques for restoration are based on mathematical or probabilistic models of image degradation

Color image processing

• fundamental concepts in color models

• basic color processing in a digital domain

Wavelets and multiresolution processing

representing images in various degrees of resolution


Compression

reducing the storage required to save an image or the bandwidth required to transmit it

Morphological processing

• tools for extracting image components that are useful in the representation and description of shape

• a transition from processes that output images to processes that output image attributes


Segmentation

• partitioning an image into its constituent parts or objects

• autonomous segmentation is one of the most difficult tasks of DIP

• the more accurate the segmentation, the more likely recognition is to succeed

Representation and description (almost always follows segmentation)

• segmentation produces either the boundary of a region or all the points in the region itself

• converting the data produced by segmentation to a form suitable for computer processing


• boundary representation: the focus is on external shape characteristics, such as corners or inflections

• complete region: the focus is on internal properties, such as texture or skeletal shape

• description is also called feature extraction - extracting attributes that result in some quantitative information of interest, or that are basic for differentiating one class of objects from another

Object recognition

the process of assigning a label (e.g., "vehicle") to an object, based on its descriptors

Knowledge database


Simplified diagram of a cross section of the human eye


Three membranes enclose the eye: the cornea and sclera (outer cover), the choroid, and the retina.

The cornea is a tough, transparent tissue that covers the anterior surface of the eye. Continuous with the cornea, the sclera is an opaque membrane that encloses the remainder of the optic globe.

The choroid lies directly below the sclera. This membrane contains a network of blood vessels (the major source of nutrition of the eye). The choroid is pigmented and helps reduce the amount of light entering the eye. At its anterior extreme, the choroid is divided into the ciliary body and the iris. The iris contracts and expands to control the amount of light.


The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to the ciliary body (it consists of 60-70% water, about 6% fat, and more protein than any other tissue in the eye). The lens is slightly yellow and absorbs approximately 8% of the visible light (infrared and ultraviolet light are absorbed by proteins in the lens).

The innermost membrane is the retina. When the eye is properly focused, light from an object outside the eye is imaged on the retina. Vision is possible because of the distribution of discrete light receptors on the surface of the retina: cones and rods (6-7 million cones, 75-150 million rods).

Cones: located in the central part of the retina (the fovea); sensitive to colors; provide vision of detail; each cone is linked to its own nerve. Cone vision = photopic or bright-light vision.

Fovea = the place where the image of the object of interest falls.


Rods: distributed over all the retinal surface; several rods are connected to a single nerve; not specialized in detail vision; serve to give a general, overall picture of the field of view; not involved in color vision; sensitive to low levels of illumination.

Blind spot: region without receptors.


Image formation in the eye

Ordinary photographic camera: the lens has a fixed focal length; focusing at various distances is done by modifying the distance between the lens and the image plane (where the film or imaging chip is located).

Human eye: the distance between the lens and the retina (the imaging region) is fixed; the focal length needed to achieve proper focus is obtained by varying the shape of the lens (the fibers in the ciliary body accomplish this, flattening or thickening the lens for distant or near objects, respectively).

Distance between the lens and the retina, along the visual axis: 17 mm.

Range of focal lengths: 14 mm to 17 mm.


Illustration of the Mach band effect: perceived intensity is not a simple function of actual intensity.


All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter


Optical illusions


ALBASTRU (BLUE), VERDE (GREEN), GALBEN (YELLOW), ROSU (RED), PORTOCALIU (ORANGE), GALBEN (YELLOW), VISINIU (BURGUNDY), ALB (WHITE)


Light

• achromatic or monochromatic light - light that is void of color; the only attribute of such light is its intensity, or amount; the term gray level is used to describe monochromatic intensity, because it ranges from black, to grays, and finally to white

• chromatic light spans the electromagnetic energy spectrum from approximately 0.43 to 0.79 μm

Quantities that describe the quality of a chromatic light source:

• radiance: the total amount of energy that flows from the light source; it is usually measured in watts (W)

• luminance: measured in lumens (lm); gives a measure of the amount of energy an observer perceives from a light source


For example, light emitted from a source operating in the far infrared region of the spectrum could have significant energy (radiance), but an observer would hardly perceive it: its luminance would be almost zero.

• brightness: a subjective descriptor of light perception that is practically impossible to measure; it embodies the achromatic notion of intensity and is one of the key factors in describing color sensation


The values of f(x, y) are positive and finite; their physical meaning is determined by the source of the image.

For an image generated from a physical process, f(x, y) is proportional to the energy radiated by the physical source.

f(x, y) is characterized by two components:

i(x, y) = the illumination component: the amount of source illumination incident on the scene being viewed;

r(x, y) = the reflectance component: the amount of illumination reflected by the objects in the scene.

f(x, y) = i(x, y) · r(x, y), with 0 < i(x, y) < ∞ and 0 < r(x, y) < 1


r(x, y) = 0: total absorption; r(x, y) = 1: total reflectance.

i(x, y) is determined by the illumination source; r(x, y) is determined by the characteristics of the imaged objects.

In practice, L_min ≤ l = f(x, y) ≤ L_max, where L_min = i_min · r_min and L_max = i_max · r_max.

Typical indoor values, without additional illumination: L_min ≈ 10, L_max ≈ 1000.

The interval [L_min, L_max] is called the gray (or intensity) scale. In practice the interval is shifted to [0, L-1]: l = 0 is considered black and l = L-1 is considered white, with the intermediate values being shades of gray.


Image Sampling and Quantization

- the output of most sensors is a continuous voltage waveform related to the sensed scene; converting a continuous image f to digital form requires digitizing both the coordinates and the amplitude:

- digitizing the coordinate values (x, y) is called sampling;

- digitizing the amplitude values f(x, y) is called quantization.


Continuous image projected onto a sensor array; result of image sampling and quantization.


Representing Digital Images

f(x, y), with x = 0, 1, …, M-1 and y = 0, 1, …, N-1; x and y are spatial variables or spatial coordinates. The image can be written as an M × N matrix of its values:

f(x, y) = [ f(0,0)     f(0,1)     …  f(0,N-1)
            f(1,0)     f(1,1)     …  f(1,N-1)
            …
            f(M-1,0)   f(M-1,1)   …  f(M-1,N-1) ]

Each element of this matrix is called an image element, or pixel. In matrix notation,

A = [a_ij], with a_ij = f(x = i, y = j) = f(i, j)

so A is an M × N matrix whose entry a_ij is the intensity of the pixel in row i and column j.

f(0, 0) is the upper left corner of the image.


M, N > 0; the number of intensity levels is typically a power of 2, L = 2^k, and a_ij ∈ [0, L-1].

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system. The upper limit is determined by saturation, the lower limit by noise.


Number of bits required to store a digitized image: b = M · N · k; for M = N, b = N² · k. For example, a 1024 × 1024 image with 8 bits per pixel requires b = 1024 · 1024 · 8 ≈ 8.4 × 10^6 bits (1 MB).

When an image can have 2^k intensity levels, the image is referred to as a k-bit image; an image with 256 discrete intensity values is an 8-bit image.


Spatial and Intensity Resolution

Spatial resolution - the smallest discernible detail in an image.

Measures: line pairs per unit distance; dots (pixels) per unit distance.

Image resolution = the largest number of discernible line pairs per unit distance

(e.g., 100 line pairs per mm)

Dots per unit distance are commonly used in printing and publishing

In the US, the measure is expressed in dots per inch (dpi); newspapers are printed at 75 dpi, glossy brochures at 175 dpi.

Intensity resolution - the smallest discernible change in intensity level.

The number of intensity levels (L) is determined by hardware considerations: L = 2^k, most commonly with k = 8.

In practice, intensity resolution is given by k, the number of bits used to quantize intensity.


Fig. 1 Reducing spatial resolution: 1250 dpi (upper left), 300 dpi (upper right), 150 dpi (lower left), 72 dpi (lower right).


Reducing the number of gray levels: 256, 128, 64, 32


Reducing the number of gray levels: 16, 8, 4, 2


Image Interpolation - used in zooming, shrinking, rotating, and geometric corrections.

Shrinking and zooming (image resizing) are image resampling methods.

Interpolation is the process of using known data to estimate values at unknown locations.

Suppose we have an image of size 500 × 500 pixels that has to be enlarged 1.5 times, to 750 × 750 pixels. One way to do this is to create an imaginary 750 × 750 grid with the same spacing as the original, and then shrink it so that it fits exactly over the original image. The pixel spacing in the 750 × 750 grid will be less than in the original image.

Problem: the assignment of intensity levels to the points of the new 750 × 750 grid.

Nearest neighbor interpolation: assign to every point in the new grid (750 × 750) the intensity of the closest pixel (nearest neighbor) from the old/original grid (500 × 500). This technique has the tendency to produce undesirable effects, like severe distortion of straight edges.
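As an illustration, here is a minimal NumPy sketch of nearest neighbor resizing (the function name and the random test image are ours, not from the course notes):

import numpy as np

def nearest_neighbor_resize(img, new_h, new_w):
    # for each output pixel, pick the closest pixel of the overlaid original grid
    h, w = img.shape
    rows = np.arange(new_h) * h // new_h
    cols = np.arange(new_w) * w // new_w
    return img[rows[:, None], cols]

# enlarge a 500 x 500 image 1.5 times, to 750 x 750
img = np.random.randint(0, 256, (500, 500), dtype=np.uint8)
zoomed = nearest_neighbor_resize(img, 750, 750)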


Bilinear interpolation - assign to the new (x, y) location the intensity

v(x, y) = a·x + b·y + c·x·y + d

where the four coefficients are determined from the 4 equations in 4 unknowns that can be written using the 4 nearest neighbors of the point (x, y).

Bilinear interpolation gives much better results than nearest neighbor interpolation, with a modest increase in computational effort.
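A sketch of bilinear interpolation at a fractional location (x, y), under the assumption that x indexes rows and y indexes columns; it evaluates the same v(x, y) = a·x + b·y + c·x·y + d surface fitted to the 4 nearest neighbors:

import numpy as np

def bilinear(img, x, y):
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, img.shape[0] - 1), min(y0 + 1, img.shape[1] - 1)
    dx, dy = x - x0, y - y0
    # interpolate along y on the two neighboring rows, then along x between them
    top = (1 - dy) * img[x0, y0] + dy * img[x0, y1]
    bottom = (1 - dy) * img[x1, y0] + dy * img[x1, y1]
    return (1 - dx) * top + dx * bottom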

Bicubic interpolation - assign to the new (x, y) location an intensity that involves the 16 nearest neighbors of the point:

v(x, y) = Σ_{i=0..3} Σ_{j=0..3} c_ij · x^i · y^j

The 16 coefficients c_ij are obtained by solving a 16 × 16 linear system written using the intensity levels of the 16 nearest neighbors of (x, y).


Generally, bicubic interpolation does a better job of preserving fine detail than the bilinear technique. Bicubic interpolation is the standard used in commercial image-editing programs, such as Adobe Photoshop and Corel Photopaint.

Figure 2(a) is the same as Fig. 1(d), which was obtained by reducing the resolution of the 1250 dpi image in Fig. 1(a) to 72 dpi (the size shrank from 3692 × 2812 to 213 × 162 pixels) and then zooming the reduced image back to its original size. To generate Fig. 1(d), nearest neighbor interpolation was used (both for shrinking and zooming).

Figures 2(b) and (c) were generated using the same steps, but using bilinear and bicubic interpolation, respectively. Figures 2(d)-(f) were obtained by reducing the resolution from 1250 dpi to 150 dpi (instead of 72 dpi).


Fig. 2 - Interpolation examples for zooming and shrinking (nearest neighbor, bilinear, bicubic).


Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors, with coordinates

(x+1, y), (x-1, y), (x, y+1), (x, y-1)

This set of pixels, called the 4-neighbors of p, is denoted by N4(p).

The 4 diagonal neighbors of p have coordinates

(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)

and are denoted by ND(p).

The horizontal, vertical, and diagonal neighbors together are called the 8-neighbors of p, denoted by N8(p).

If (x, y) is on the border of the image, some of the neighbor locations in ND(p) and N8(p) fall outside the image.


Adjacency, Connectivity, Regions, Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image, V is a subset of {0, 1}; typically V = {1};

- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, 1, …, 255}.

We consider 3 types of adjacency

(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if q ∈ N4(p);

(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if q ∈ N8(p);

(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if

q ∈ N4(p), or

q ∈ ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.

Mixed adjacency is a modification of 8-adjacency. It is introduced to eliminate the ambiguities that often arise when 8-adjacency is used. Consider the following example:


Binary image, V = {1}:

0 1 1
0 1 0
0 0 1

(in the original figure this arrangement is shown three times: the pixel values, the 8-adjacency paths, and the m-adjacency paths)

The three pixels at the top (first line) in the above example show multiple (ambiguous) 8-adjacency paths, as indicated by the dashed lines of the figure. This ambiguity is removed by using m-adjacency.


A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

(x0, y0), (x1, y1), …, (xn, yn)

where (x0, y0) = (x, y), (xn, yn) = (s, t), and the pixels (xi, yi) and (xi-1, yi-1) are adjacent for i = 1, 2, …, n.

The length of the path is n. If (x0, y0) = (xn, yn), the path is closed.

Depending on the type of adjacency considered, the paths are 4-, 8-, or m-paths.

Let S denote a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting only of pixels from S.

S is a connected set if there is a path in S between any 2 pixels of S.

Let R be a subset of pixels in an image. R is a region of the image if R is a connected set.

Two regions R1 and R2 are said to be adjacent if R1 ∪ R2 forms a connected set. Regions that are not adjacent are said to be disjoint. When referring to regions, only 4- and 8-adjacency are considered.


Suppose that an image contains K disjoint regions Rk, k = 1, …, K, none of which touches the image border. Let Ru denote the union of all the K regions,

Ru = R1 ∪ R2 ∪ … ∪ RK

and let (Ru)c denote its complement. We call all the points in Ru the foreground of the image, and the points in (Ru)c the background of the image.

The boundary (border or contour) of a region R is the set of points that are adjacent to points in the complement of R, (R)c. The border of an image is the set of pixels in the region that have at least one background neighbor. This definition is referred to as the inner border, to distinguish it from the notion of outer border, which is the corresponding border in the background.


Distance measures

For pixels p, q and z, with coordinates (x, y), (s, t) and (v, w) respectively, D is a distance function, or metric, if

(a) D(p, q) ≥ 0, and D(p, q) = 0 iff p = q;

(b) D(p, q) = D(q, p);

(c) D(p, z) ≤ D(p, q) + D(q, z).

The Euclidean distance between p and q is defined as

De(p, q) = [(x - s)² + (y - t)²]^(1/2)

The pixels q for which De(p, q) ≤ r are the points contained in a disk of radius r centered at (x, y).


The D4 distance (also called city-block distance) between p and q is defined as

D4(p, q) = |x - s| + |y - t|

The pixels q for which D4(p, q) ≤ r form a diamond centered at (x, y); for example, the pixels with D4(p, q) ≤ 2:

        2
      2 1 2
    2 1 0 1 2
      2 1 2
        2

The pixels with D4 = 1 are the 4-neighbors of (x, y).

The D8 distance (called the chessboard distance) between p and q is defined as

D8(p, q) = max(|x - s|, |y - t|)

The pixels q for which D8(p, q) ≤ r form a square centered at (x, y).


For example, the pixels with D8(p, q) ≤ 2:

2 2 2 2 2
2 1 1 1 2
2 1 0 1 2
2 1 1 1 2
2 2 2 2 2

The pixels with D8 = 1 are the 8-neighbors of (x, y).

The D4 and D8 distances are independent of any paths that might exist between p and q, because these distances involve only the coordinates of the points.
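The three metrics are one-liners; a small Python sketch (the test coordinates are made up):

def d_e(p, q):   # Euclidean distance
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def d_4(p, q):   # city-block distance
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d_8(p, q):   # chessboard distance
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (3, 4), (6, 8)
print(d_e(p, q), d_4(p, q), d_8(p, q))   # 5.0 7 4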


Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis. For two 2 × 2 images

A = [ a11 a12        B = [ b11 b12
      a21 a22 ]            b21 b22 ]

the array product is

A .* B = [ a11·b11   a12·b12
           a21·b21   a22·b22 ]

while the matrix product is

A · B = [ a11·b11 + a12·b21   a11·b12 + a12·b22
          a21·b11 + a22·b21   a21·b12 + a22·b22 ]

We assume array operations throughout, unless stated otherwise.


Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether it is

linear or nonlinear

Consider a general operator H that produces an output image g(x, y) from an input image f(x, y):

H[f(x, y)] = g(x, y)

H is said to be a linear operator if

H[a·f1(x, y) + b·f2(x, y)] = a·H[f1(x, y)] + b·H[f2(x, y)]

for all scalars a, b and all images f1, f2.

Example of a nonlinear operator: the maximum value of the pixels of an image, H[f] = max{f(x, y)}.

Take

f1 = [ 0 2        f2 = [ 6 5        a = 1,  b = -1
       2 3 ]             4 7 ]

Then

max(a·f1 + b·f2) = max [ -6 -3   = -2
                         -2 -4 ]

while

a·max(f1) + b·max(f2) = 1·3 + (-1)·7 = -4 ≠ -2

so the max operator is not linear.
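The same computation in NumPy, for checking (the arrays are the ones above):

import numpy as np

f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 5], [4, 7]])
a, b = 1, -1

print(np.max(a * f1 + b * f2))           # -2: max of the combined image
print(a * np.max(f1) + b * np.max(f2))   # -4: combination of the two maxima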

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise to a noiseless image f(x, y):

g(x, y) = f(x, y) + η(x, y)

where the noise η(x, y) is uncorrelated and has zero average value.

For a random variable z with mean m, E[(z - m)²] is the variance (E(·) is the expected value). The covariance of two random variables z1 and z2 is defined as E[(z1 - m1)(z2 - m2)]; the two random variables are uncorrelated when their covariance is 0.


Objective: reduce the noise by averaging a set of K noisy images gi(x, y) (a technique frequently used in image enhancement):

ḡ(x, y) = (1/K) · Σ_{i=1..K} gi(x, y)

If the noise satisfies the properties stated above, we have

E{ḡ(x, y)} = f(x, y)    and    σ²_ḡ(x, y) = (1/K) · σ²_η(x, y)

where E{ḡ(x, y)} is the expected value of ḡ, and σ²_ḡ(x, y) and σ²_η(x, y) are the variances of ḡ and η, respectively. The standard deviation (square root of the variance) at any point of the average image is

σ_ḡ(x, y) = (1/√K) · σ_η(x, y)


As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because E{ḡ(x, y)} = f(x, y), this means that ḡ(x, y) approaches f(x, y) as the number of noisy images used in the averaging process increases.
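A quick numerical check of the 1/√K law (the constant test image and noise level below are arbitrary choices, not from the text):

import numpy as np

rng = np.random.default_rng(0)
f = np.full((256, 256), 100.0)    # noiseless image
sigma = 64.0                      # noise standard deviation

for K in (1, 5, 10, 20, 50, 100):
    g = f + rng.normal(0.0, sigma, (K, 256, 256))   # K noisy images
    g_bar = g.mean(axis=0)                          # their average
    print(K, round(g_bar.std(), 2))                 # approx. sigma / sqrt(K)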

An important application of image averaging is in the field of astronomy, where imaging under very low light levels frequently causes sensor noise to render single images virtually useless for analysis. Figure 3 shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels, together with the results of averaging 5, 10, 20, 50, and 100 such noisy images.


Fig. 3 (a)-(f): Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (a), and the results of averaging 5, 10, 20, 50, and 100 noisy images (b)-(f).


A frequent application of image subtraction is in the enhancement of differences between

images

(a) (b) (c)

Fig 4 (a) Infrared image of Washington DC area (b) Image obtained from (a) by setting to zero the least

significant bit of each pixel (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between images (a) and (b); black (0) values in Figure 4(c) indicate locations where there is no difference between images (a) and (b).


Mask mode radiography: g(x, y) = f(x, y) - h(x, y)

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images, called live images (denoted f(x, y)), of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium.

In g(x, y) we find the differences between h and f, shown as enhanced detail. Since the images are captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries in the area being observed.


Fig. 5 - Angiography subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c), enhanced.


An important application of image multiplication (and division) is shading correction. Suppose that an imaging sensor produces images in the form

g(x, y) = f(x, y) · h(x, y)

where f(x, y) is the "perfect image" and h(x, y) is the shading function. When the shading function is known,

f(x, y) = g(x, y) / h(x, y)

When h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.


Fig. 6 Shading correction: (a) shaded image of a tungsten filament, magnified 130 times; (b) shading pattern; (c) corrected image.


Another use of image multiplication is in masking, also called region of interest (ROI), operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of the ROI can be arbitrary, although usually it is rectangular.

Fig. 7 (a) digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b).


In practice most images are displayed using 8 bits, so the image values are expected to lie in the range [0, 255] (for TIFF and JPEG images the conversion to this range is automatic, and depends on the system used). The difference of two images can produce values in the range [-255, 255], and the addition of two images values in the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure is to compute

f_m = f - min(f)

f_s = K · f_m / max(f_m)

which yields an image f_s whose values span the full range [0, K] (K = 255 for 8-bit displays).
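A sketch of this scaling procedure (the guard against constant images is our addition):

import numpy as np

def rescale(f, K=255):
    fm = f - f.min()                  # f_m = f - min(f): values start at 0
    fs = K * fm / max(fm.max(), 1)    # f_s = K * f_m / max(f_m)
    return fs.astype(np.uint8)

diff = np.random.randint(-255, 256, (4, 4))   # e.g. a difference of two 8-bit images
print(rescale(diff))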


Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels: s = T(z), where z is the intensity of a pixel in the original image and s is the intensity of the corresponding pixel in the processed image.


Neighborhood operations

Let S_xy denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y) based on the values of the intensities of the points in S_xy. For example, if S_xy is a rectangular neighborhood of size m × n centered at (x, y), we can assign the new value of intensity by computing the average value of the pixels in S_xy:

g(x, y) = (1/(m·n)) · Σ_{(r,c) ∈ S_xy} f(r, c)

The net effect is to perform local blurring in the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
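A direct (unoptimized) sketch of this neighborhood-averaging operation; the edge replication at the borders is an implementation choice of ours, not specified in the text:

import numpy as np

def local_average(f, m, n):
    f = f.astype(float)
    padded = np.pad(f, ((m // 2, m // 2), (n // 2, n // 2)), mode='edge')
    g = np.empty_like(f)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            g[x, y] = padded[x:x + m, y:y + n].mean()   # average over S_xy
    return g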


Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image;

- these transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules).

A geometric transformation consists of 2 basic operations:

1. a spatial transformation of coordinates;

2. an intensity interpolation that assigns intensity values to the spatially transformed pixels.

The coordinate system transformation:

(x, y) = T[(v, w)]

(v, w) - pixel coordinates in the original image; (x, y) - pixel coordinates in the transformed image.


Example: T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform:

[x y 1] = [v w 1] · T = [v w 1] · [ t11 t12 0
                                    t21 t22 0
                                    t31 t32 1 ]        (AT)

that is, x = t11·v + t21·w + t31 and y = t12·v + t22·w + t32.

This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3 × 3 matrix equal to the matrix product of the scaling, rotation, and translation matrices from Table 1.


Affine transformations


The preceding transformations relocate pixels on an image to new locations. To complete the process, we have to assign intensity values to those locations, which is done using intensity interpolation (nearest neighbor, bilinear, or bicubic).

In practice, we can use equation (AT) in two basic ways:

forward mapping: scan the pixels of the input image and, for each (v, w), compute the spatial location (x, y) of the corresponding pixel in the output image, using (AT) directly.

Problems:

- intensity assignment when 2 or more pixels of the original image are transformed to the same location in the output image;

- some output locations may have no correspondent in the original image (no intensity assignment).


inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image as (v, w) = T^(-1)[(x, y)]; then interpolate among the nearest input pixels to determine the intensity of the output pixel value.

Inverse mappings are more efficient to implement than forward mappings and are used in numerous commercial implementations of spatial transformations (MATLAB, for example).
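A sketch of inverse mapping for the affine form (AT), using nearest neighbor interpolation (the scaling example matrix is ours):

import numpy as np

def affine_warp(img, T):
    # [x y 1] = [v w 1] @ T, so [v w 1] = [x y 1] @ inv(T)  (inverse mapping)
    Tinv = np.linalg.inv(T)
    h, w = img.shape
    out = np.zeros_like(img)
    for x in range(h):
        for y in range(w):
            v, wc, _ = np.array([x, y, 1.0]) @ Tinv
            v, wc = int(round(v)), int(round(wc))
            if 0 <= v < h and 0 <= wc < w:
                out[x, y] = img[v, wc]      # nearest neighbor interpolation
    return out

T = np.array([[0.5, 0.0, 0.0],   # scale both axes by 0.5
              [0.0, 0.5, 0.0],
              [0.0, 0.0, 1.0]])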


Image registration - align two or more images of the same scene.

In image registration, the input and output images are available, but the specific transformation that produced the output image from the input is generally unknown. The problem is to estimate the transformation function and then use it to register the two images.

- it may be of interest to align (register) two or more images taken at approximately the same time, but using different imaging systems (e.g., an MRI scanner and a PET scanner);

- or to align images of a given location, taken by the same instrument at different moments of time (e.g., satellite images).

The problem can be solved using tie points (also called control points), which are corresponding points whose locations are known precisely in the input and reference images.


How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points, both in the input image and in the reference image. A simple model based on a bilinear approximation is given by

x = c1·v + c2·w + c3·v·w + c4
y = c5·v + c6·w + c7·v·w + c8

where (v, w) and (x, y) are the coordinates of the tie points (stacking the 4 point pairs gives an 8 × 8 linear system for the coefficients c1, …, c8).
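Because x depends only on c1..c4 and y only on c5..c8, the 8 × 8 system splits into two 4 × 4 systems. A sketch with made-up tie points:

import numpy as np

vw = np.array([[10.0, 12.0], [10.0, 80.0], [75.0, 15.0], [70.0, 85.0]])   # input tie points
xy = np.array([[12.0, 10.0], [14.0, 83.0], [78.0, 12.0], [74.0, 88.0]])   # reference tie points

A = np.column_stack([vw[:, 0], vw[:, 1], vw[:, 0] * vw[:, 1], np.ones(4)])
c1_4 = np.linalg.solve(A, xy[:, 0])   # coefficients of x = c1 v + c2 w + c3 v w + c4
c5_8 = np.linalg.solve(A, xy[:, 1])   # coefficients of y = c5 v + c6 w + c7 v w + c8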


When 4 tie points are insufficient to obtain a satisfactory registration, an approach used frequently is to select a larger number of tie points and, using this new set, subdivide the image into rectangular regions marked by groups of 4 tie points; on each subregion marked by 4 tie points, we apply the transformation model described above. The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometrical distortion.


(a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c).


Probabilistic Methods

Let zi, i = 0, 1, …, L-1, denote the values of all possible intensities in an M × N digital image, and let p(zk) denote the probability that the intensity level zk occurs in the given image:

p(zk) = nk / (M·N)

where nk is the number of times that intensity zk occurs in the image (M·N is the total number of pixels in the image). Clearly,

Σ_{k=0..L-1} p(zk) = 1

The mean (average) intensity of an image is given by

m = Σ_{k=0..L-1} zk · p(zk)


The variance of the intensities is

σ² = Σ_{k=0..L-1} (zk - m)² · p(zk)

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

μn(z) = Σ_{k=0..L-1} (zk - m)^n · p(zk)

(so that μ0(z) = 1, μ1(z) = 0 and μ2(z) = σ².)

μ3(z) > 0: the intensities are biased to values higher than the mean;
μ3(z) < 0: the intensities are biased to values lower than the mean;
μ3(z) = 0: the intensities are distributed approximately equally on both sides of the mean.
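These statistics come straight from the normalized histogram. A sketch for an 8-bit image (the random test image is arbitrary):

import numpy as np

def intensity_stats(img, L=256):
    p = np.bincount(img.ravel(), minlength=L) / img.size   # p(z_k) = n_k / (M N)
    z = np.arange(L)
    m = (z * p).sum()                  # mean intensity
    var = ((z - m) ** 2 * p).sum()     # variance (image contrast)
    mu3 = ((z - m) ** 3 * p).sum()     # third moment (bias of the intensities)
    return m, var, mu3

img = np.random.randint(0, 256, (128, 128))
print(intensity_stats(img))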

Fig. 1 (a) Low contrast; (b) medium contrast; (c) high contrast.

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)


Intensity Transformations and Spatial Filtering

g(x, y) = T[f(x, y)]

f(x, y) - input image; g(x, y) - output image; T - an operator on f, defined over a neighborhood of (x, y).

The neighborhood of the point (x, y), S_xy, usually is rectangular, centered on (x, y), and much smaller in size than the image.


- spatial filtering: the operator T (the neighborhood and the operation applied on it) is called a spatial filter (also: spatial mask, kernel, template, or window).

When S_xy has size 1 × 1 (a single pixel), T becomes an intensity (gray-level or mapping) transformation function

s = T(r)

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2 (left): T produces an output image of higher contrast than the original, by darkening the intensity levels below k and brightening the levels above k; this technique is called contrast stretching.

Fig. 2 Intensity transformation functions: left - contrast stretching; right - thresholding function.


Figure 2 (right): T produces a binary output image; a mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L-1] is obtained using the function s = T(r) = L - 1 - r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image


Original image (left); negative image (right)
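For an 8-bit image the negative is one line of NumPy (a sketch; `img` is assumed to be uint8):

import numpy as np

def negative(img):
    return 255 - img   # s = L - 1 - r, with L = 256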


Log Transformations: s = T(r) = c · log(1 + r), where c is a constant and r ≥ 0

Some basic intensity transformation functions


This transformation maps a narrow range of low intensity values in the input into a wider range. An operator of this type is used to expand the values of dark pixels in an image, while compressing the higher-level values. The opposite is true for the inverse log transformation. The log functions also compress the dynamic range of images with large variations in pixel values.

Figure 4(a) - intensity values in the range 0 to 1.5 × 10^6.

Figure 4(b) - log transformation of Figure 4(a), with c = 1; range 0 to 6.2.

Fig. 4 (a) - Fourier spectrum; (b) - log transformation applied to (a), c = 1.
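A sketch of the log transformation, with c chosen so that the output fills [0, 255] (that normalization is our choice, not part of the formula):

import numpy as np

def log_transform(img):
    r = img.astype(float)
    c = 255.0 / np.log1p(max(r.max(), 1.0))      # scale the result into [0, 255]
    return (c * np.log1p(r)).astype(np.uint8)    # s = c * log(1 + r)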


Power-Law (Gamma) Transformations

s = T(r) = c · r^γ, with c and γ positive constants (sometimes written s = c · (r + ε)^γ, to account for an offset when the input is zero)

Plots of gamma transformation for different values of γ (c=1)


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. The curves with γ > 1 have the opposite effect of those generated with γ < 1.

c = γ = 1: identity transformation.

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
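A sketch of the gamma transformation on intensities normalized to [0, 1] (the normalization and clipping are implementation choices of ours):

import numpy as np

def gamma_transform(img, gamma, c=1.0):
    r = img.astype(float) / 255.0    # normalize to [0, 1]
    s = c * r ** gamma               # s = c * r^gamma
    return np.clip(255.0 * s, 0, 255).astype(np.uint8)

ramp = np.arange(0, 256, dtype=np.uint8).reshape(16, 16)
darkened = gamma_transform(ramp, 3.0)   # gamma > 1 darkens, gamma < 1 brightens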


a b c d (a) - aerial image; (b)-(d) - results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively.


Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

Fig. 5 (a)-(d)


The contrast-stretching transformation is piecewise linear through the points (0, 0), (r1, s1), (r2, s2) and (L-1, L-1):

T(r) = (s1/r1) · r,                                     r ∈ [0, r1]
       ((s2 - s1)/(r2 - r1)) · (r - r1) + s1,           r ∈ (r1, r2]
       ((L - 1 - s2)/(L - 1 - r2)) · (r - r2) + s2,     r ∈ (r2, L-1]


r1 = s1 and r2 = s2: identity transformation (no change).

r1 = r2, s1 = 0 and s2 = L - 1: thresholding function.

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching, obtained by setting the parameters (r1, s1) = (rmin, 0) and (r2, s2) = (rmax, L-1), where rmin and rmax denote the minimum and maximum gray levels in the image, respectively. Thus, the transformation function stretched the levels linearly from their original range to the full range [0, L-1].

Figure 5(d) - the thresholding function was used, with (r1, s1) = (m, 0) and (r2, s2) = (m, L-1), where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times
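A sketch of the piecewise-linear stretching above (assumes 0 < r1 < r2 < L-1; the full-range setting from Figure 5(c) is shown in the usage line):

import numpy as np

def contrast_stretch(img, r1, s1, r2, s2, L=256):
    r = img.astype(float)
    g = np.empty_like(r)
    lo, hi = r <= r1, r > r2
    mid = ~lo & ~hi
    g[lo] = s1 / r1 * r[lo]
    g[mid] = (s2 - s1) / (r2 - r1) * (r[mid] - r1) + s1
    g[hi] = (L - 1 - s2) / (L - 1 - r2) * (r[hi] - r2) + s2
    return g.astype(np.uint8)

img = np.random.randint(60, 180, (64, 64), dtype=np.uint8)   # low-contrast test image
stretched = contrast_stretch(img, img.min(), 0, img.max(), 255)   # (r1,s1)=(rmin,0), (r2,s2)=(rmax,L-1)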


Intensity-level slicing

- highlighting a specific range of intensities in an image.

There are two approaches for intensity-level slicing (a sketch of both follows the Fig. 6 discussion below):

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities;

2. brighten (or darken) the desired range of intensities, but leave unchanged all other intensities in the image.


Figure 6 (left) - aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the scale of intensities: it highlights an intensity range [A, B] and reduces all other intensities to a lower level. This type of enhancement produces a binary image, which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example).

In Figure 6 (right) the second technique was used: it highlights a range [A, B] and preserves all other intensities. A band of intensities in the mid-gray range around the mean intensity was set to black; the other intensities remain unchanged.

Fig. 6 - Aortic angiogram and intensity-sliced versions.
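A sketch of the two slicing approaches (the range [A, B] and the highlight value are parameters):

import numpy as np

def slice_binary(img, A, B):
    # approach 1: white inside [A, B], black elsewhere -> binary image
    return np.where((img >= A) & (img <= B), 255, 0).astype(np.uint8)

def slice_preserve(img, A, B, value=255):
    # approach 2: set [A, B] to `value`, leave the other intensities unchanged
    out = img.copy()
    out[(img >= A) & (img <= B)] = value
    return out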


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2.

This technique highlights the contribution made to the total image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 - the lowest-order bit, plane 8 - the highest-order bit).


The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression
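A sketch of bit-plane extraction for an 8-bit image; the assertion checks that plane 8 reproduces exactly the 128-threshold function described above:

import numpy as np

def bit_plane(img, k):
    # k = 1 is the lowest-order bit plane, k = 8 the highest-order one
    return (img >> (k - 1)) & 1

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
assert np.array_equal(bit_plane(img, 8), (img >= 128).astype(np.uint8))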

  • DIP 1 2017
  • DIP 02 (2017)
Page 22: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image ProcessingDigital Image Processing

Week 1Week 1

X-ray imaging

Medical diagnosticindustry astronomy

A X-ray tube is a vacuum tube with a cathode and an anode The cathode is heated causing free electrons to be released The electrons flows at high speed to the positively charged anode When the electrons strike a nucleus energy is released in the form of a X-ray radiation The energy (penetreting power) of the X-rays is controlled by a voltage applied across the anode and by a curent applied to the filament in the cathode

The intensity of the X-rays is modified by absorbtion as they pass through the patient and the resulting energy falling develops it much in the same way that light develops photographic film

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Angiography = contrast-enhancement radiography

Angiograms = images of blood vessels

A catheter is inserted into an artery or vein in the groin The catheter is threaded into the blood vessel and guided to the area to be studied When it reaches the area to be studied a X-ray contrast medium is injected through the catheter This enhances contrast of the blood vessels and enables radiologist to see anyirregularities or blockages

X-rays are used in CAT (computerized axial tomography)

X-rays used in industrial processes (examine circuit boards for flows in manifacturing)

Industrial CAT scans are useful when the parts can be penetreted by X-rays

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Examples of X-ray imaging

Chest X-rayAortic angiogram

Head CT Cygnus LoopCircuit boards

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Ultraviolet Band

Litography industrial inspection microscopy biological imaging astronomical observations

Ultraviolet light is used in fluorescence microscopy

Ultraviolet light is not visible to human eye but when a photon of ultraviolet radiation collides with an electron in an atom of a fluorescent material it elevates the electron to a higher energy level After that the electron relaxes to a lower level and emits light in the form of a lower-energy photon in the visible (red) light regionFluorescence = emission of light by a substance that has absorbed light or

other electromagentic radiation of a different wavelengthFlourescence microscope = uses an excitation light to irradiate a prepared specimen

and then it separates the much weaker radiating fluorescent light from the brighter excitation light

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Visible and Infrared Bands

Light microscopy astronomy remote sensing industry law enforcement

LANDSAT satellite ndash obtained and transmitted images of the Earth from space for purpose of monitoring environmental conditions on the planet

Weather observations and prediction produce major applications of multispectral image from satellites

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Table 1 ndash Thematic bands in NASArsquos LANDSAT satellite

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Satellite images of Washington DC area in spectral bands of the Table 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Examples of light microscopy

Taxol (anticancer agent)magnified 250X

Cholesterol(40X)

Microprocessor(60X)

Nickel oxidethin film(600X)

Surface of audio CD(1750X)

Organicsuperconductor(450X)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Automated visual inspection of manufactured goods

a bc de f

a ndash a circuit board controllerb ndash packaged pillesc ndash bottlesd ndash air bubbles in clear-plastic producte ndash cerealf ndash image of intraocular implant

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Microwave Band

The dominant aplication of imaging in the microwave band ndash radar

bull radar has the ability to collect data over virtually any region at any time regardless of weather or ambient light conditions

bull some radar waves can penetrate clouds under certain conditions canpenetrate vegetation ice dry sand

bull sometimes radar is the only way to explore inaccessible regions of theEarthrsquos surface

An imaging radar works like a flash camera it provides its own illumination(microwave pulses) to light an area on the ground and take a snapshot image Instead of a camera lens a radar uses an antenna and a digital device torecord the images In a radar image one can see only the microwave energythat was reflected back toward the radar antenna

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Spaceborne radar image of mountains in southeast Tibet

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Radio Band

medicine astronomy

MRI = Magnetic Resonance Imaging

This technique places the patient in a powerful magnet and passes short pulses of radio waves through his or her body

Each pulse causes a responding pulse of radio waves to be emited by the patinet tissues

The location from which these signals originate and their strength are determined by a computer which produces a 2D picture of a section of the patient

Digital Image ProcessingDigital Image Processing

Week 1Week 1

MRI images of a human knee (left) and spine (right)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Images of the Crab Pulsar covering the electromagnetic spectrum

Gamma X-ray Optical Infrared Radio

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Other Imaging Modalities

acoustic imaging electron microscopy synthetic (computer-generated) imaging

Imaging using sound geological explorations industry medicine

Mineral and oil exploration

For image acquisition over land one of the main approaches is to use a large truck an a large flat steel plate The plate is pressed on the ground by the truck and the truck is vibratedthrough a frequency spectrum up to 100 Hz The strength and the speed of the returning sound waves are determined by the composition of the Earth below the surface These are analysed by a computer and images are generated from the resulting analysis

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Biometry - iris

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Biometry - fingerprint

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Face detection and recognition

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Gender identification

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image morphing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Fundamental Steps in DIP

methods whose input and output are images

methods whose inputs are images but whose outputs are attributes extracted from those images


Outputs are images

• image acquisition

• image filtering and enhancement

• image restoration

• color image processing

• wavelets and multiresolution processing

• compression

• morphological processing


Outputs are attributes

• morphological processing

• segmentation

• representation and description

• object recognition


Image acquisition - may involve preprocessing such as scaling

Image enhancement

• manipulating an image so that the result is more suitable than the original for a specific operation

• enhancement is problem oriented

• there is no general 'theory' of image enhancement

• enhancement uses subjective methods for image improvement

• enhancement is based on human subjective preferences regarding what a "good" enhancement result is


Image restoration

• improving the appearance of an image

• restoration is objective - the techniques for restoration are based on mathematical or probabilistic models of image degradation

Color image processing

• fundamental concepts of color models

• basic color processing in a digital domain

Wavelets and multiresolution processing

representing images in various degrees of resolution


Compression

reducing the storage required to save an image or the bandwidth required to transmit it

Morphological processing

• tools for extracting image components that are useful in the representation and description of shape

• a transition from processes that output images to processes that output image attributes


Segmentation

• partitioning an image into its constituent parts or objects

• autonomous segmentation is one of the most difficult tasks of DIP

• the more accurate the segmentation, the more likely recognition is to succeed

Representation and description (almost always follows segmentation)

• segmentation produces either the boundary of a region or all the points in the region itself

• converting the data produced by segmentation to a form suitable for computer processing


• boundary representation: the focus is on external shape characteristics, such as corners or inflections

• complete region: the focus is on internal properties, such as texture or skeletal shape

• description, also called feature extraction: extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another

Object recognition

the process of assigning a label (e.g. "vehicle") to an object based on its descriptors

Knowledge database


Simplified diagram of a cross section of the human eye


Three membranes enclose the eye: the cornea and sclera (outer cover), the choroid, and the retina.

The cornea is a tough, transparent tissue that covers the anterior surface of the eye.

Continuous with the cornea, the sclera is an opaque membrane that encloses the remainder of the optic globe.

The choroid lies directly below the sclera. This membrane contains a network of blood vessels (the major source of nutrition of the eye). The choroid is pigmented and helps reduce the amount of light entering the eye. The choroid is divided (at its anterior extreme) into the ciliary body and the iris. The iris contracts and expands to control the amount of light.


The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to the ciliary body (60-70% water, 6% fat, the rest protein). The lens is slightly yellow and absorbs approximately 8% of the visible light (infrared and ultraviolet light are absorbed by proteins in the lens).

The innermost membrane is the retina. When the eye is properly focused, light from an object outside the eye is imaged on the retina. Vision is possible because of the distribution of discrete light receptors on the surface of the retina: cones and rods (6-7 million cones, 75-150 million rods).

Cones: located in the central part of the retina (the fovea); sensitive to colors; vision of detail; each cone is linked to its own nerve. Cone vision = photopic or bright-light vision.

Fovea = the place where the image of the object of interest falls.


Rods: distributed over all the retina surface; several rods are connected to a single nerve; not specialized in detail vision; serve to give a general, overall picture of the field of view; not involved in color vision; sensitive to low levels of illumination.

Blind spot: region without receptors.


Image formation in the eye

Ordinary photographic camera: the lens has a fixed focal length; focusing at various distances is done by modifying the distance between the lens and the image plane (where the film or imaging chip is located).

Human eye: the distance between the lens and the retina (the imaging region) is fixed; the focal length needed to achieve proper focus is obtained by varying the shape of the lens (the fibers in the ciliary body accomplish this, flattening or thickening the lens for distant or near objects, respectively).

Distance between lens and retina along the visual axis = 17 mm.

Range of focal lengths = 14 mm to 17 mm.


Illustration of the Mach band effect: perceived intensity is not a simple function of actual intensity.


All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter


Optical illusions


ALBASTRU (blue), VERDE (green), GALBEN (yellow), ROSU (red), PORTOCALIU (orange), GALBEN (yellow), VISINIU (burgundy), ALB (white)


Light
• achromatic or monochromatic light: light that is void of color; the only attribute of such light is its intensity, or amount. The term gray level is used to describe monochromatic intensity, because it ranges from black, to grays, and finally to white.
• chromatic light: spans the electromagnetic energy spectrum from approximately 0.43 to 0.79 μm.

Quantities that describe the quality of a chromatic light source:
• radiance: the total amount of energy that flows from the light source; usually measured in watts (W)
• luminance: measured in lumens (lm); gives a measure of the amount of energy an observer perceives from a light source


For example, light emitted from a source operating in the far infrared region of the spectrum could have significant energy (radiance), but an observer would hardly perceive it: its luminance would be almost zero.

• brightness: a subjective descriptor of light perception that is practically impossible to measure; it embodies the achromatic notion of intensity and is one of the key factors in describing color sensation.


the physical meaning is determined by the source of the image

0 < f(x, y) < ∞  (f is finite and positive)

For an image generated from a physical process, f(x, y) is proportional to the energy radiated by the physical source.

f(x, y) is characterized by two components:

i(x, y) = the illumination component: the amount of source illumination incident on the scene being viewed

r(x, y) = the reflectance component: the amount of illumination reflected by the objects in the scene

f(x, y) = i(x, y) r(x, y),  with  0 < i(x, y) < ∞  and  0 < r(x, y) < 1


r(x, y) = 0: total absorption; r(x, y) = 1: total reflectance.

i(x, y) is determined by the illumination source.

r(x, y) is determined by the characteristics of the imaged objects.

In practice:

L_min ≤ l = f(x0, y0) ≤ L_max,  where  L_min = i_min r_min  and  L_max = i_max r_max

Typical indoor values, without additional illumination: L_min ≈ 10, L_max ≈ 1000.

The interval [L_min, L_max] is called the gray (or intensity) scale. In practice the interval is shifted to [0, L-1]: l = 0 is considered black and l = L-1 is considered white; all intermediate values are shades of gray.


Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed scene

Converting a continuous image f to digital form:

- digitizing the coordinate values (x, y) is called sampling

- digitizing the amplitude values f(x, y) is called quantization


Continuous image projected onto a sensor array; result of image sampling and quantization.


Representing Digital Images

f(x, y), x = 0, 1, …, M-1, y = 0, 1, …, N-1 – spatial variables or spatial coordinates

              [ f(0,0)     f(0,1)     …   f(0,N-1)   ]
    f(x, y) = [ f(1,0)     f(1,1)     …   f(1,N-1)   ]
              [ …                                    ]
              [ f(M-1,0)   f(M-1,1)   …   f(M-1,N-1) ]

Each element of the matrix is called an image element, picture element, or pixel. In matrix notation:

A = [a_ij],  a_ij = f(x = i, y = j) = f(i, j),  i = 0, …, M-1,  j = 0, …, N-1

f(0,0) – the upper left corner of the image


M and N must be positive integers; the number of intensity levels is typically an integer power of 2, L = 2^k, and a_ij ∈ [0, L-1].

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system. Upper limit: determined by saturation; lower limit: noise.


Number of bits required to store a digitized image

b = M × N × k;  for M = N:  b = N² k

When an image can have 2^k intensity levels, the image is referred to as a k-bit image (an image with 256 discrete intensity values is an 8-bit image).
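For example, a 1024 × 1024 image with 256 intensity levels (k = 8) requires b = 1024 × 1024 × 8 = 8,388,608 bits, i.e. 1,048,576 bytes (1 MB).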


Spatial and Intensity Resolution

Spatial resolution – the smallest discernible detail in an image.

Measures: line pairs per unit distance; dots (pixels) per unit distance.

Image resolution = the largest number of discernible line pairs per unit distance (e.g. 100 line pairs per mm).

Dots per unit distance is the measure commonly used in printing and publishing; in the US it is expressed in dots per inch (dpi). (Newspapers are printed at 75 dpi, glossy brochures at 175 dpi.)

Intensity resolution – the smallest discernible change in intensity level.

The number of intensity levels (L) is determined by hardware considerations: L = 2^k, most commonly with k = 8.

Intensity resolution in practice is given by k (the number of bits used to quantize intensity).


Fig. 1 Reducing spatial resolution: 1250 dpi (upper left), 300 dpi (upper right), 150 dpi (lower left), 72 dpi (lower right).


Reducing the number of gray levels: 256, 128, 64, 32.


Reducing the number of gray levels: 16, 8, 4, 2.


Image Interpolation - used in zooming, shrinking, rotating, and geometric corrections.

Shrinking and zooming (image resizing) are image resampling methods.

Interpolation is the process of using known data to estimate values at unknown locations

Suppose we have an image of size 500 × 500 pixels that has to be enlarged 1.5 times, to 750 × 750 pixels. One way to do this is to create an imaginary 750 × 750 grid with the same spacing as the original and then shrink it so that it fits exactly over the original image. The pixel spacing in the 750 × 750 grid will be less than in the original image.

Problem: assignment of intensity levels in the new 750 × 750 grid.

Nearest neighbor interpolation: assign to every point in the new grid (750 × 750) the intensity of the closest pixel (nearest neighbor) from the old/original grid (500 × 500). This technique has the tendency to produce undesirable effects, like severe distortion of straight edges.
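As an illustration, a minimal NumPy sketch of nearest-neighbor resizing (our code, not from the course; names are ours):

    import numpy as np

    def nearest_neighbor_resize(img, new_h, new_w):
        """Resize a gray-scale image by nearest-neighbor interpolation."""
        h, w = img.shape
        # Map every new-grid index back to an old-grid index (floor of the scaled coordinate).
        rows = np.arange(new_h) * h // new_h
        cols = np.arange(new_w) * w // new_w
        return img[rows[:, None], cols[None, :]]

    small = np.arange(16).reshape(4, 4)
    big = nearest_neighbor_resize(small, 6, 6)   # 4x4 -> 6x6, blocky result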


Bilinear interpolation – assign to the new (x, y) location the intensity

v(x, y) = a x + b y + c x y + d

where the four coefficients are determined from the 4 equations in 4 unknowns that can be written using the 4 nearest neighbors of point (x, y). Bilinear interpolation gives much better results than nearest neighbor interpolation, with a modest increase in computational effort.
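The solution of that 4 × 4 system reduces to a weighted average of the four neighbors; a sketch (ours, assuming a gray-scale NumPy image and an in-range location):

    import numpy as np

    def bilinear_sample(img, x, y):
        """Bilinearly interpolated intensity at the real-valued location (x, y)."""
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        x1 = min(x0 + 1, img.shape[0] - 1)
        y1 = min(y0 + 1, img.shape[1] - 1)
        dx, dy = x - x0, y - y0
        # Weighted average of the 4 nearest neighbors; equivalent to solving
        # the 4x4 system for v(x, y) = a x + b y + c x y + d.
        return ((1 - dx) * (1 - dy) * img[x0, y0] + dx * (1 - dy) * img[x1, y0]
                + (1 - dx) * dy * img[x0, y1] + dx * dy * img[x1, y1])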

Bicubic interpolation – assign to the new (x, y) location an intensity that involves the 16 nearest neighbors of the point:

v(x, y) = Σ_{i=0}^{3} Σ_{j=0}^{3} c_ij x^i y^j

The 16 coefficients c_ij are obtained by solving a 16 × 16 linear system written using the intensity levels of the 16 nearest neighbors of (x, y).


Generally, bicubic interpolation does a better job of preserving fine detail than the bilinear technique. Bicubic interpolation is the standard used in commercial image editing programs, such as Adobe Photoshop and Corel Photopaint.
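In practice these methods are rarely implemented by hand; with the Pillow library, for example (the input file name here is hypothetical), the three interpolations can be compared directly:

    from PIL import Image

    img = Image.open("pollen.tif")                        # hypothetical input file
    nn  = img.resize((750, 750), resample=Image.NEAREST)
    bil = img.resize((750, 750), resample=Image.BILINEAR)
    bic = img.resize((750, 750), resample=Image.BICUBIC)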

Figure 2(a) is the same as Fig. 1(d), which was obtained by reducing the resolution of the 1250 dpi image in Fig. 1(a) to 72 dpi (the size shrank from 3692 × 2812 to 213 × 162 pixels) and then zooming the reduced image back to its original size; to generate Fig. 1(d), nearest neighbor interpolation was used (both for shrinking and zooming). Figures 2(b) and (c) were generated using the same steps but with bilinear and bicubic interpolation, respectively. Figures 2(d), (e), (f) were obtained by reducing the resolution from 1250 dpi to 150 dpi (instead of 72 dpi).


Fig. 2 – Interpolation examples for zooming and shrinking (nearest neighbor, bilinear, bicubic).


Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors:

(x+1, y), (x-1, y), (x, y+1), (x, y-1)

This set of pixels, called the 4-neighbors of p, is denoted by N4(p).

The 4 diagonal neighbors of p have coordinates

(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)

and are denoted ND(p).

The horizontal, vertical, and diagonal neighbors together are called the 8-neighbors of p, denoted N8(p).

If (x, y) is on the border of the image, some of the neighbor locations in ND(p) and N8(p) fall outside the image.
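A small Python sketch of these neighborhoods (ours, for illustration); the inside test handles the border case just mentioned:

    def n4(x, y):
        """4-neighbors of pixel (x, y)."""
        return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

    def nd(x, y):
        """Diagonal neighbors of (x, y)."""
        return [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]

    def n8(x, y):
        """8-neighbors of (x, y)."""
        return n4(x, y) + nd(x, y)

    def inside(x, y, M, N):
        """True if (x, y) falls inside an M x N image."""
        return 0 <= x < M and 0 <= y < N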


Adjacency, Connectivity, Regions, Boundaries

Denote by V the set of intensity levels used to define adjacency:

- in a binary image, V ⊆ {0, 1} (V = {0}, V = {1}, or V = {0, 1})

- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, …, 255}

We consider 3 types of adjacency

(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if q ∈ N4(p)

(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if q ∈ N8(p)

(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if

q ∈ N4(p),  or

q ∈ ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency. It is introduced to eliminate the ambiguities that often arise when 8-adjacency is used. Consider the example:


binary image, V = {1} (the same 3 × 3 arrangement is shown three times in the original figure: plain, with its 8-adjacencies, and with its m-adjacencies):

    0 1 1
    0 1 0
    0 0 1

The three pixels at the top (first line) in the above example show multiple (ambiguous) 8-adjacency, as indicated by the dashed lines in the figure. This ambiguity is removed by using m-adjacency.


A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

(x0, y0), (x1, y1), …, (xn, yn)

where (x0, y0) = (x, y), (xn, yn) = (s, t), and the pixels (xi, yi) and (xi-1, yi-1) are adjacent for i = 1, 2, …, n.

The length of the path is n. If (x0, y0) = (xn, yn), the path is closed.

Depending on the type of adjacency considered, the paths are 4-, 8-, or m-paths.

Let S denote a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting only of pixels from S. S is a connected set if there is a path in S between any 2 pixels of S.

Let R be a subset of pixels in an image. R is a region of the image if R is a connected set. Two regions R1 and R2 are said to be adjacent if R1 ∪ R2 forms a connected set; regions that are not adjacent are said to be disjoint. When referring to regions, only 4- and 8-adjacency are considered.
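Connectivity can be tested with a breadth-first search; a sketch (ours) using 4-adjacency, with S given as a set of (x, y) tuples:

    from collections import deque

    def connected_in_S(S, p, q):
        """True if there is a 4-path between p and q consisting only of pixels of S."""
        if p not in S or q not in S:
            return False
        seen, frontier = {p}, deque([p])
        while frontier:
            x, y = frontier.popleft()
            if (x, y) == q:
                return True
            for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if n in S and n not in seen:
                    seen.add(n)
                    frontier.append(n)
        return False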


Suppose that an image contains K disjoint regions Rk, k = 1, …, K, none of which touches the image border. Let Ru denote the union of all K regions, Ru = ∪k Rk, and let (Ru)^c denote its complement.

We call all the points in Ru the foreground of the image, and the points in (Ru)^c the background of the image.

The boundary (border or contour) of a region R is the set of points that are adjacent to points in the complement of R, (R)^c. In other words, the border of a region is the set of pixels in the region that have at least one background neighbor. This definition is referred to as the inner border, to distinguish it from the notion of outer border, which is the corresponding border in the background.


Distance measures

For pixels p, q, and z with coordinates (x, y), (s, t), and (v, w), respectively, D is a distance function (or metric) if:

(a) D(p, q) ≥ 0, and D(p, q) = 0 iff p = q

(b) D(p, q) = D(q, p)

(c) D(p, z) ≤ D(p, q) + D(q, z)

The Euclidean distance between p and q is defined as

De(p, q) = [(x - s)² + (y - t)²]^(1/2)

The pixels q for which De(p, q) ≤ r are the points contained in a disk of radius r centered at (x, y).


The D4 distance (also called city-block distance) between p and q is defined as

D4(p, q) = |x - s| + |y - t|

The pixels q for which D4(p, q) ≤ r form a diamond centered at (x, y); for example, the pixels with D4 ≤ 2:

            2
          2 1 2
        2 1 0 1 2
          2 1 2
            2

The pixels with D4 = 1 are the 4-neighbors of (x, y).

The D8 distance (called the chessboard distance) between p and q is defined as

D8(p, q) = max(|x - s|, |y - t|)

The pixels q for which D8(p, q) ≤ r form a square centered at (x, y).


For example, the pixels with D8 ≤ 2:

        2 2 2 2 2
        2 1 1 1 2
        2 1 0 1 2
        2 1 1 1 2
        2 2 2 2 2

The pixels with D8 = 1 are the 8-neighbors of (x, y).

The D4 and D8 distances are independent of any paths that might exist between p and q, because these distances involve only the coordinates of the points.
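The three distances in Python (a trivial sketch):

    def de(p, q):
        """Euclidean distance."""
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    def d4(p, q):
        """City-block distance."""
        return abs(p[0] - q[0]) + abs(p[1] - q[1])

    def d8(p, q):
        """Chessboard distance."""
        return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

    # e.g. d4((0, 0), (2, 1)) == 3, while d8((0, 0), (2, 1)) == 2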


Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

Consider two 2 × 2 images (matrices):

    [ a11  a12 ]        [ b11  b12 ]
    [ a21  a22 ]        [ b21  b22 ]

Array product (element by element):

    [ a11 b11   a12 b12 ]
    [ a21 b21   a22 b22 ]

Matrix product:

    [ a11 b11 + a12 b21    a11 b12 + a12 b22 ]
    [ a21 b11 + a22 b21    a21 b12 + a22 b22 ]

We assume array operations unless stated otherwise
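In NumPy the distinction is simply the element-wise operator versus the matrix-multiplication operator:

    import numpy as np

    A = np.array([[1, 2], [3, 4]])
    B = np.array([[5, 6], [7, 8]])

    A * B    # array (element-wise) product: [[ 5, 12], [21, 32]]
    A @ B    # matrix product:               [[19, 22], [43, 50]]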


Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether it is

linear or nonlinear

Consider an operator H that produces an output image g from an input image f:

H[f(x, y)] = g(x, y)

H is said to be a linear operator if

H[a f1(x, y) + b f2(x, y)] = a H[f1(x, y)] + b H[f2(x, y)]

for any two images f1, f2 and any two scalars a, b.

Example of a nonlinear operator: the maximum intensity value of an image, H[f] = max f(x, y).

Let

    f1 = [ 0  2 ]     f2 = [ 6  5 ]     a = 1,  b = -1
         [ 2  3 ]          [ 4  7 ]

Then

    max(a f1 + b f2) = max [ -6  -3 ] = -2
                           [ -2  -4 ]

while

    a max(f1) + b max(f2) = 1·3 + (-1)·7 = -4

The two results differ, so the max operator is nonlinear.
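The same check in NumPy:

    import numpy as np

    f1 = np.array([[0, 2], [2, 3]])
    f2 = np.array([[6, 5], [4, 7]])
    a, b = 1, -1

    lhs = np.max(a * f1 + b * f2)           # -2
    rhs = a * np.max(f1) + b * np.max(f2)   # 3 - 7 = -4
    # lhs != rhs, so the max operator is not linear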

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise:

g(x, y) = f(x, y) + η(x, y)

where f(x, y) is the noiseless image and η(x, y) is the noise, uncorrelated with the image and with zero average value.

For a random variable z with mean m, E[(z - m)²] is the variance (E(·) is the expected value). The covariance of two random variables z1 and z2 is defined as E[(z1 - m1)(z2 - m2)]; the two random variables are uncorrelated when their covariance is 0.


Objective: reduce noise by averaging a set of noisy images g_i(x, y) (a technique frequently used in image enhancement):

ḡ(x, y) = (1/K) Σ_{i=1}^{K} g_i(x, y)

If the noise satisfies the properties stated above, we have

E[ḡ(x, y)] = f(x, y)   and   σ²_ḡ(x, y) = (1/K) σ²_η(x, y)

where E[ḡ(x, y)] is the expected value of ḡ, and σ²_ḡ and σ²_η are the variances of ḡ and η, respectively. The standard deviation (square root of the variance) at any point in the average image is

σ_ḡ(x, y) = (1/√K) σ_η(x, y)


As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because E[ḡ(x, y)] = f(x, y), this means that ḡ(x, y) approaches f(x, y) as the number of noisy images used in the averaging process increases.
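A quick NumPy experiment (ours; the image size and noise level are arbitrary) that reproduces the 1/√K behavior:

    import numpy as np

    rng = np.random.default_rng(0)
    f = np.full((64, 64), 100.0)                             # noiseless image
    for K in (1, 4, 16, 64):
        noisy = f + rng.normal(0.0, 20.0, size=(K, 64, 64))  # K noisy copies
        g_bar = noisy.mean(axis=0)                           # averaged image
        print(K, round(g_bar.std(), 2))                      # std ~ 20 / sqrt(K)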

An important application of image averaging is in the field of astronomy, where imaging under very low light levels frequently causes sensor noise to render single images virtually useless for analysis. Figure 3(a) shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels. Figures 3(b)-(f) show the result of averaging 5, 10, 20, 50, and 100 images, respectively.


Fig. 3 (a) Image of the galaxy pair NGC 3314 corrupted by additive Gaussian noise; (b)-(f) results of averaging 5, 10, 20, 50, and 100 noisy images.


A frequent application of image subtraction is in the enhancement of differences between images.

Fig. 4 (a) Infrared image of the Washington, DC area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images.

Figure 4(b) was obtained by setting to zero the least significant bit of every pixel in Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between images (a) and (b); black (0) values in Figure 4(c) indicate locations where there is no difference between images (a) and (b).
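The operation behind Figure 4 is easy to replicate (a sketch; the image here is a stand-in array):

    import numpy as np

    a = np.arange(256, dtype=np.uint8).reshape(16, 16)   # stand-in 8-bit image
    b = a & 0xFE                                         # least significant bit set to 0
    diff = a.astype(np.int16) - b.astype(np.int16)       # difference image: 0 or 1
    # diff is nonzero exactly where the LSB of the original pixel was 1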


Mask mode radiography:

g(x, y) = f(x, y) - h(x, y)

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images, called live images (denoted f(x, y)), of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium.

In g(x, y) we find the differences between h and f as enhanced detail. With images captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries in the area being observed.


Fig. 5 – Angiography, subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c) enhanced.


An important application of image multiplication (and division) is shading correction. Suppose that an imaging sensor produces images in the form

g(x, y) = f(x, y) h(x, y)

where f(x, y) is the "perfect image" and h(x, y) is the shading function. When the shading function is known,

f(x, y) = g(x, y) / h(x, y)

If h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.
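A sketch of the correction step (the eps term is our guard against division by zero, not part of the model):

    import numpy as np

    def shading_correct(g, h, eps=1e-6):
        """Estimate the 'perfect image' f = g / h, given the shading pattern h."""
        return g / (h + eps)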


Fig. 6 Shading correction: (a) shaded image of a tungsten filament, magnified approximately 130 times; (b) shading pattern; (c) corrected image.


Another use of image multiplication is in masking, also called region of interest (ROI) operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of the ROI can be arbitrary, although usually it is rectangular.

Fig. 7 (a) digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b).
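A masking sketch with a rectangular ROI (the image and coordinates are illustrative):

    import numpy as np

    img = np.random.default_rng(1).integers(0, 256, (128, 128))
    mask = np.zeros_like(img)
    mask[40:80, 20:100] = 1      # 1's inside the ROI, 0's elsewhere
    roi = img * mask             # keeps only the intensities inside the ROI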


In practice, most images are displayed using 8 bits, so the image values are expected to be in the range [0, 255] (for TIFF or JPEG images, conversion to this range is automatic, and the conversion depends on the system used). The difference of two images can produce values in the range [-255, 255]; the addition of two images, values in the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure is to compute

f_m = f - min(f)

f_s = K [f_m / max(f_m)]

which maps the values of f into the full range [0, K] (K = 255 for 8-bit images).
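The procedure as a small function (assuming f is not a constant image):

    import numpy as np

    def full_scale(f, K=255):
        """Map an arbitrary-valued image into [0, K]."""
        fm = f - f.min()
        return (K * fm / fm.max()).astype(np.uint8)   # assumes max(fm) > 0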


Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations:

- single-pixel operations

- neighborhood operations

- geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels:

s = T(z)

where z is the intensity of a pixel in the original image and s is the intensity of the corresponding pixel in the processed image.


Neighborhood operations

Let Sxy denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y) based on the values of the intensities of the points in Sxy. For example, if Sxy is a rectangular neighborhood of size m × n centered at (x, y), we can assign the new value of intensity by computing the average value of the pixels in Sxy:

g(x, y) = (1 / (m n)) Σ_{(r,c) ∈ Sxy} f(r, c)

The net effect is to perform local blurring in the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
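One way to realize this neighborhood average is SciPy's uniform filter (a sketch over a 5 × 5 window; the input is a stand-in image):

    import numpy as np
    from scipy.ndimage import uniform_filter

    img = np.random.default_rng(2).integers(0, 256, (64, 64)).astype(float)
    blurred = uniform_filter(img, size=5)   # mean of each 5 x 5 neighborhood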


Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules)

A geometric transformation consists of 2 basic operations:

1. a spatial transformation of coordinates

2. intensity interpolation, which assigns intensity values to the spatially transformed pixels

The coordinate system transformation:

(x, y) = T[(v, w)]

where (v, w) are pixel coordinates in the original image and (x, y) are pixel coordinates in the transformed image.


For example, T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform:

    [x  y  1] = [v  w  1] T = [v  w  1] [ t11  t12  0 ]
                                        [ t21  t22  0 ]      (AT)
                                        [ t31  t32  1 ]

that is, x = t11 v + t21 w + t31 and y = t12 v + t22 w + t32.

This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3 × 3 matrix equal to the matrix product of the scaling, rotation, and translation matrices from Table 1.


Affine transformations


The preceding transformations relocate pixels on an image to new locations. To complete the process, we have to assign intensity values to those locations; this task is done using intensity interpolation (nearest neighbor, bilinear, or bicubic).

In practice we can use equation (AT) in two basic ways

forward mapping: scan the pixels of the input image and, for each (v, w), compute the new spatial location (x, y) of the corresponding pixel in the new image using (AT) directly.

Problems:

- intensity assignment when 2 or more pixels in the original image are transformed to the same location in the output image

- some output locations may have no correspondent in the original image (no intensity assignment)


inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image, (v, w) = T^(-1)[(x, y)]; then interpolate among the nearest input pixels to determine the intensity of the output pixel value.

Inverse mappings are more efficient to implement than forward mappings and are used in numerous commercial implementations of spatial transformations (MATLAB, for example).
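A sketch of inverse mapping for a rotation about the image center, with nearest-neighbor intensity assignment (our illustration, not a library routine):

    import numpy as np

    def rotate_inverse(img, theta):
        """Rotate img by theta (radians) by scanning the output pixels."""
        M, N = img.shape
        out = np.zeros_like(img)
        cx, cy = M / 2.0, N / 2.0
        c, s = np.cos(theta), np.sin(theta)
        for x in range(M):
            for y in range(N):
                # inverse transform: where does output pixel (x, y) come from?
                v = c * (x - cx) + s * (y - cy) + cx
                w = -s * (x - cx) + c * (y - cy) + cy
                vi, wi = int(round(v)), int(round(w))
                if 0 <= vi < M and 0 <= wi < N:
                    out[x, y] = img[vi, wi]   # nearest-neighbor interpolation
        return out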


Image registration – align two or more images of the same scene. In image registration we have available the input and output images, but the specific transformation that produced the output image from the input is generally unknown. The problem is to estimate the transformation function and then use it to register the two images.

- it may be of interest to align (register) two or more images taken at approximately the same time, but using different imaging systems (an MRI scanner and a PET scanner, for example)

- or to align images of a given location, taken by the same instrument at different moments of time (satellite images)

One approach: solve the problem using tie points (also called control points), which are corresponding points whose locations are known precisely in the input and reference images.


How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points automatically

- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors; these objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points, both on the input image and on the reference image. A simple model, based on a bilinear approximation, is given by

x = c1 v + c2 w + c3 v w + c4

y = c5 v + c6 w + c7 v w + c8

where (v, w) and (x, y) are the coordinates of the tie points (the 4 point pairs yield an 8 × 8 linear system for the coefficients ci).
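Estimating the eight coefficients is a small linear-algebra exercise; a sketch with made-up tie-point coordinates:

    import numpy as np

    vw = np.array([[10, 10], [10, 200], [200, 10], [200, 200]], float)   # input image
    xy = np.array([[12, 15], [14, 210], [205, 12], [208, 215]], float)   # reference image

    rows, rhs = [], []
    for (v, w), (x, y) in zip(vw, xy):
        # x = c1 v + c2 w + c3 v w + c4 ;  y = c5 v + c6 w + c7 v w + c8
        rows.append([v, w, v * w, 1, 0, 0, 0, 0]); rhs.append(x)
        rows.append([0, 0, 0, 0, v, w, v * w, 1]); rhs.append(y)
    c = np.linalg.solve(np.array(rows), np.array(rhs))   # c1 ... c8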


When 4 tie points are insufficient to obtain a satisfactory registration, an approach used frequently is to select a larger number of tie points and, using this new set, subdivide the image into rectangular regions marked by groups of 4 tie points; on each subregion we apply the transformation model described above. The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometrical distortion.


(a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c).


Probabilistic Methods

zi = the values of all possible intensities in an M × N digital image, i = 0, 1, …, L-1

p(zk) = the probability that the intensity level zk occurs in the given image:

p(zk) = nk / (M N)

where nk is the number of times that intensity zk occurs in the image (M N is the total number of pixels in the image). Note that

Σ_{k=0}^{L-1} p(zk) = 1

The mean (average) intensity of an image is given by

m = Σ_{k=0}^{L-1} zk p(zk)


The variance of the intensities is

σ² = Σ_{k=0}^{L-1} (zk - m)² p(zk)

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

μn(z) = Σ_{k=0}^{L-1} (zk - m)^n p(zk)

(note that μ0(z) = 1, μ1(z) = 0, and μ2(z) = σ²)

μ3(z) > 0: the intensities are biased to values higher than the mean

μ3(z) < 0: the intensities are biased to values lower than the mean

μ3(z) = 0: the intensities are distributed approximately equally on both sides of the mean

Fig. 1 (a) Low contrast, (b) medium contrast, (c) high contrast.

Figure 1(a): standard deviation 14.3 (variance = 204.5)

Figure 1(b): standard deviation 31.6 (variance = 998.6)

Figure 1(c): standard deviation 49.2 (variance = 2420.6)
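These statistics follow directly from the normalized histogram; a NumPy sketch on a stand-in image:

    import numpy as np

    img = np.random.default_rng(3).integers(0, 256, (100, 100))
    z = np.arange(256)
    p = np.bincount(img.ravel(), minlength=256) / img.size   # p(z_k)

    m = np.sum(z * p)                      # mean intensity
    var = np.sum((z - m) ** 2 * p)         # variance (contrast measure)
    mu3 = np.sum((z - m) ** 3 * p)         # third moment about the mean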


Intensity Transformations and Spatial Filtering

g(x, y) = T[f(x, y)]

f(x, y) – input image; g(x, y) – output image; T – an operator on f, defined over a neighborhood of (x, y).

The neighborhood of the point (x, y), Sxy, usually is rectangular, centered on (x, y), and much smaller in size than the image.


- spatial filtering: the operator T (the neighborhood and the operation applied on it) is called a spatial filter (spatial mask, kernel, template, or window)

When the neighborhood Sxy shrinks to the single point (x, y), T becomes an intensity (gray-level or mapping) transformation function

s = T(r)

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2 (left): T produces an output image of higher contrast than the original, by darkening the intensity levels below k and brightening the levels above k; this technique is called contrast stretching.

Fig. 2 Intensity transformation functions: left – contrast stretching; right – thresholding function.


Figure 2 (right): T produces a binary output image; a mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L-1] is obtained using the function s = T(r) = L - 1 - r.

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an image


Original image and its negative.


Log Transformations: s = T(r) = c log(1 + r), where c is a positive constant and r ≥ 0.

Some basic intensity transformation functions


This transformation maps a narrow range of low intensity values in the input into a wider range of output values. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values; the opposite is true of the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a) has intensity values in the range 0 to 1.5 × 10^6; Figure 4(b) is the log transformation of Figure 4(a) with c = 1, with values in the range 0 to 6.2.

Fig. 4 (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.


Power-Law (Gamma) Transformations

s = T(r) = c r^γ, where c and γ are positive constants (sometimes written s = c (r + ε)^γ, to account for an offset when the input is 0)

Plots of the gamma transformation for different values of γ (c = 1).


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. The curves with γ > 1 have the opposite effect of those generated with γ < 1; c = γ = 1 gives the identity transformation.

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
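The three basic transformations applied to the full 8-bit intensity scale (a sketch; the constants simply rescale the outputs to [0, 255]):

    import numpy as np

    r = np.arange(256, dtype=float)              # input intensities, L = 256

    negative = 255.0 - r                         # s = L - 1 - r
    log_t = 255.0 / np.log(256.0) * np.log1p(r)  # s = c log(1 + r)
    gamma = 255.0 * (r / 255.0) ** 0.4           # s = c r^gamma, here gamma = 0.4 < 1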


(a) aerial image; (b)-(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively.


Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so that it spans the full intensity range of the recording tool or display device

Fig. 5 (a)-(d)


The contrast-stretching transformation is defined by the pairs (r1, s1) and (r2, s2):

T(r) = (s1 / r1) r                                       for r ∈ [0, r1]

T(r) = s1 + ((s2 - s1) / (r2 - r1)) (r - r1)             for r ∈ [r1, r2]

T(r) = s2 + ((L - 1 - s2) / (L - 1 - r2)) (r - r2)       for r ∈ [r2, L-1]


r1 = s1 and r2 = s2: identity transformation (no change)

r1 = r2, s1 = 0 and s2 = L - 1: thresholding function

Figure 5(b) shows an 8-bit image with low contrast. Figure 5(c) shows the result of contrast stretching, obtained by setting (r1, s1) = (rmin, 0) and (r2, s2) = (rmax, L-1), where rmin and rmax denote the minimum and maximum gray levels in the image, respectively; the transformation function stretched the levels linearly from their original range to the full range [0, L-1]. Figure 5(d) shows the result of using the thresholding function, with (r1, s1) = (m, 0) and (r2, s2) = (m, L-1), where m is the mean gray level in the image. The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.
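The (rmin, 0)-(rmax, L-1) setting reduces to a simple linear rescaling; a sketch (assuming the image is not constant):

    import numpy as np

    def contrast_stretch(img, L=256):
        """Stretch gray levels linearly from [r_min, r_max] to [0, L-1]."""
        r_min, r_max = img.min(), img.max()
        return ((img - r_min) * (L - 1.0) / (r_max - r_min)).astype(np.uint8)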


Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing (a short sketch of both follows the list):

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities (Figure 3.11(a))

2. brighten (or darken) the desired range of intensities, but leave all other intensities in the image unchanged (Figure 3.11(b))
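Both approaches in NumPy (a sketch; [a, b] is the range of interest):

    import numpy as np

    def slice_binary(img, a, b):
        """Approach 1: white inside [a, b], black elsewhere."""
        return np.where((img >= a) & (img <= b), 255, 0).astype(np.uint8)

    def slice_preserve(img, a, b, value=255):
        """Approach 2: brighten [a, b], leave all other intensities unchanged."""
        out = img.copy()
        out[(img >= a) & (img <= b)] = value
        return out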


(The two slicing transformation functions: one highlights the intensity range [A, B] and reduces all other intensities to a lower level; the other highlights the range [A, B] and preserves all other intensities.)

Figure 6 (left): aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the scale of intensities; this type of enhancement produces a binary image.

Digital Image Processing

Week 1

The binary image is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example). In Figure 6 (right), the second technique was used: a band of intensities in the mid-gray range around the mean intensity was set to black, while the other intensities remain unchanged.

Fig. 6 – Aortic angiogram and intensity-sliced versions.


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2.

This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit; plane 8 – the highest-order bit).


The binary image for the 8th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all the intensities between 0 and 127 to 0, and maps all levels between 128 and 255 to 1. The bit-plane slicing technique is useful for analyzing the relative importance of each bit in the image; it helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
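Extracting the planes in NumPy (a sketch; plane k here is the k-th bit of every pixel):

    import numpy as np

    img = np.random.default_rng(4).integers(0, 256, (64, 64), dtype=np.uint8)
    planes = [(img >> k) & 1 for k in range(8)]   # planes[0] = lowest-order bit
    top = planes[7]     # identical to thresholding at 128: (img >= 128) as 0/1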

  • DIP 1 2017
  • DIP 02 (2017)
Page 23: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Angiography = contrast-enhancement radiography

Angiograms = images of blood vessels

A catheter is inserted into an artery or vein in the groin The catheter is threaded into the blood vessel and guided to the area to be studied When it reaches the area to be studied a X-ray contrast medium is injected through the catheter This enhances contrast of the blood vessels and enables radiologist to see anyirregularities or blockages

X-rays are used in CAT (computerized axial tomography)

X-rays used in industrial processes (examine circuit boards for flows in manifacturing)

Industrial CAT scans are useful when the parts can be penetreted by X-rays

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Examples of X-ray imaging

Chest X-rayAortic angiogram

Head CT Cygnus LoopCircuit boards

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Ultraviolet Band

Litography industrial inspection microscopy biological imaging astronomical observations

Ultraviolet light is used in fluorescence microscopy

Ultraviolet light is not visible to human eye but when a photon of ultraviolet radiation collides with an electron in an atom of a fluorescent material it elevates the electron to a higher energy level After that the electron relaxes to a lower level and emits light in the form of a lower-energy photon in the visible (red) light regionFluorescence = emission of light by a substance that has absorbed light or

other electromagentic radiation of a different wavelengthFlourescence microscope = uses an excitation light to irradiate a prepared specimen

and then it separates the much weaker radiating fluorescent light from the brighter excitation light

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Visible and Infrared Bands

Light microscopy astronomy remote sensing industry law enforcement

LANDSAT satellite ndash obtained and transmitted images of the Earth from space for purpose of monitoring environmental conditions on the planet

Weather observations and prediction produce major applications of multispectral image from satellites

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Table 1 ndash Thematic bands in NASArsquos LANDSAT satellite

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Satellite images of Washington DC area in spectral bands of the Table 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Examples of light microscopy

Taxol (anticancer agent)magnified 250X

Cholesterol(40X)

Microprocessor(60X)

Nickel oxidethin film(600X)

Surface of audio CD(1750X)

Organicsuperconductor(450X)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Automated visual inspection of manufactured goods

a bc de f

a ndash a circuit board controllerb ndash packaged pillesc ndash bottlesd ndash air bubbles in clear-plastic producte ndash cerealf ndash image of intraocular implant

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Microwave Band

The dominant aplication of imaging in the microwave band ndash radar

bull radar has the ability to collect data over virtually any region at any time regardless of weather or ambient light conditions

bull some radar waves can penetrate clouds under certain conditions canpenetrate vegetation ice dry sand

bull sometimes radar is the only way to explore inaccessible regions of theEarthrsquos surface

An imaging radar works like a flash camera it provides its own illumination(microwave pulses) to light an area on the ground and take a snapshot image Instead of a camera lens a radar uses an antenna and a digital device torecord the images In a radar image one can see only the microwave energythat was reflected back toward the radar antenna

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Spaceborne radar image of mountains in southeast Tibet

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Radio Band

medicine astronomy

MRI = Magnetic Resonance Imaging

This technique places the patient in a powerful magnet and passes short pulses of radio waves through his or her body

Each pulse causes a responding pulse of radio waves to be emited by the patinet tissues

The location from which these signals originate and their strength are determined by a computer which produces a 2D picture of a section of the patient

Digital Image ProcessingDigital Image Processing

Week 1Week 1

MRI images of a human knee (left) and spine (right)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Images of the Crab Pulsar covering the electromagnetic spectrum

Gamma X-ray Optical Infrared Radio

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Other Imaging Modalities

acoustic imaging electron microscopy synthetic (computer-generated) imaging

Imaging using sound geological explorations industry medicine

Mineral and oil exploration

For image acquisition over land one of the main approaches is to use a large truck an a large flat steel plate The plate is pressed on the ground by the truck and the truck is vibratedthrough a frequency spectrum up to 100 Hz The strength and the speed of the returning sound waves are determined by the composition of the Earth below the surface These are analysed by a computer and images are generated from the resulting analysis

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Biometry - iris

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Biometry - fingerprint

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Face detection and recognition

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Gender identification

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image morphing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Fundamental Steps in DIP

methods whose input and output are images

methods whose inputs are images but whose outputs are attributes extracted from those images

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Outputs are images

bull image acquisition

bull image filtering and enhancement

bull image restoration

bull color image processing

bull wavelets and multiresolution processing

bull compression

bull morphological processing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Outputs are attributes

bull morphological processing

bull segmentation

bull representation and description

bull object recognition

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image acquisition - may involve preprocessing such as scaling

Image enhancement

bull manipulating an image so that the result is more suitable than the original for a specific operation

bull enhancement is problem oriented

bull there is no general sbquotheoryrsquo of image enhancement

bull enhancement use subjective methods for image emprovement

bull enhancement is based on human subjective preferences regarding what is a bdquogoodrdquo enhancement result

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image restoration

bull improving the appearance of an image

bull restoration is objective - the techniques for restoration are based on mathematical or probabilistic models of image degradation

Color image processing

bull fundamental concept in color models

bull basic color processing in a digital domain

Wavelets and multiresolution processing

representing images in various degree of resolution

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Compression

reducing the storage required to save an image or the bandwidth required to transmit it

Morphological processing

bull tools for extracting image components that are useful in the representation and description of shape

bull a transition from processes that output images to processes that outputimage attributes

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Segmentation

bull partitioning an image into its constituents parts or objects

bull autonomous segmentation is one of the most difficult tasks of DIP

bull the more accurate the segmentation the more likley recognition is to succeed

Representation and description (almost always follows segmentation)

bull segmentation produces either the boundary of a region or all the poits in the region itself

bull converting the data produced by segmentation to a form suitable for computer processing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

bull boundary representation the focus is on external shape characteristics such as corners or inflections

bull complete region the focus is on internal properties such as texture or skeletal shape

bull description is also called feature extraction ndash extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another

Object recognition

the process of assigning a label (eg bdquovehiclerdquo) to an object based on itsdescriptors

Knowledge database

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Simplified diagramof a cross sectionof the human eye

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Three membranes enclose the eye the cornea and sclera outer cover the choroid the retina

The cornea is a tough transparent tissue that covers the anterior surface of the eye

Continuous with the cornea the sclera is an opaque membrane that enclosesthe remainder of the optic globe

The choroid lies directly below the sclera This membrane contains a network of blood vessels (major nutrition of the eye) The choroid is pigmented and help reduce the amount of light entering the eyeThe choroid is divided (at its anterior extreme) into the ciliary body and the irisThe iris contracts and expands to control the amount of light

Digital Image ProcessingDigital Image Processing

Week 1Week 1

The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to ciliary body (60-70 water 6 fat protein) The lens is colored in slightly yellow The lens absorbs approximatively 8 of the visiblelight (infrared and ultraviolet light are absorbed by proteins in lens)

The innermost membrane is the retina When the eye is proper focusedlight from an object outside the eye is imaged on the retinaVision is possible because of the distribution of discrete light receptors on the surface of the retina cones and rods (6-7 milion cones 75-150 milion rods)

Cones located in the central part of the retina (fovea) they are sensitive to colors vision of detail each cone is link to its own nervecone vision = photopic or bright-light vision

Fovea = the place where the image of the object of interest falls on

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Rods distributed over al the retina surface several rods are contected toa single nerve not specialized in detail vision serve to give a general overall picture of the filed of viewnot involved in color visionsensitive to low level of illumination

Blind spot region without receptors

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image formation in the eye

Ordinary photographic camera the lens has fixed focal length focusing atvarious distances is done by modifying the distance between the lens and the image plane (were the film or imaging chip are located)

Human eye the distance between the lens and the retina (the imaging region)is fixed the focal length needed to achieve proper focus is obtained by varyingthe shape of the lens (the fibers in the ciliary body accomplish this flattening or thickening the lens for distant or near objects respectively

distance between lens and retina along visual axix = 17 mm

range of focal length = 14 mm to 17 mm

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Illustration of Mach band effectPercieved intensityis not a simple function of actual intensity

Digital Image ProcessingDigital Image Processing

Week 1Week 1

All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Optical illusions

Digital Image ProcessingDigital Image Processing

Week 1Week 1

ALBASTRU BLUEVERDE GREENGALBEN YELLOW ROSU REDPORTOCALIU ORANGEGALBEN YELLOWVISINIU BURGUNDYALB WHITE

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Lightbullachromatic or monochromatic light - light that is void of color the attribute of such light is its intensity or amount

gray level is used to describe monochromatic intensitybecause it ranges from black to grays and to white bull chromatic light spans the electromagnetic energy spectrumfrom approximately 043 to 079 μm

quantities that describe the quality of a chromatic light source radiance

the total amount of energy that flows from the light source and it isusually measured in watts (W) luminance

measured in lumens (lm) gives a measure of the amount of energy an observer perceives from a light source

Digital Image ProcessingDigital Image Processing

Week 1Week 1

For example light emitted from a source operating in the far infraredregion of the spectrum could have significant energy (radiance) butan observer would hardly perceive it its luminance would be almostzero

brightnessa subjective descriptor of light perception that is practicallyimpossible to measure It embodies the achromatic notion ofintensity and is one of the key factors in describing color sensation

Digital Image ProcessingDigital Image Processing

Week 1Week 1

the physical meaning is determined by the source of the image

( )f D f x y

Image generated from a physical process f(xy) ndash proportional to the energy radiated by the physical source

f(xy) ndash characterized by two components

i(xy) = illumination component the amount of source illumination incident on the scene being viewed

r(xy) = reflectance component the amount of illumination reflected by the objects in the scene

( ) ( ) ( )

0 ( ) 0 ( ) 1

f x y i x y r x y

i x y r x y

Digital Image ProcessingDigital Image Processing

Week 1Week 1

r(xy)=0 - total absorption r(xy)=1 - total reflectance

i(xy) ndash determined by the illumination source

r(xy) ndash determined by the characteristics of the imaged objects

is called gray (or intensity) scale

In practice

min 0 0 max min min min max max max( ) L l f x y L L i r L i r

indoor values without additional illuminationmin max10 1000L L

black whitemin max0 1 0 1 0 1L L L L l l L

min maxL L

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image Processing

Week 1

Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed

scene

converting a continuous image f to digital form

- digitizing (x y) is called sampling

- digitizing f(x y) is called quantization

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Continuous image projected onto a sensor array Result of image sampling and quantization

Digital Image Processing

Week 1

Representing Digital Images (xy) x = 01hellipM-1 y = 01hellipN-1 ndash spatial variables or spatial coordinates

(00) (01) (0 1)(10) (11) (1 1)

( )

( 10) ( 11) ( 1 1)

f f f Nf f f N

f x y

f M f M f M N

image element pixel

00 01 0 1

10 11 1 1

10 11 1 1

( ) ( )

N

i jN M N

i j

M M M N

a a aa f x i y j f i ja a a

Aa

a a a

f(00) ndash the upper left corner of the image

Digital Image Processing

Week 1

M N ge 0 L=2k

[0 1]i j i ja a L

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system Upper limit ndash determined by saturation lower limit - noise

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Number of bits required to store a digitized image

for 2 b M N k M N b N k

When an image can have 2k intensity levels the image is referred as a k-bit image 256 discrete intensity values ndash 8-bit image

Digital Image Processing

Week 1

Spatial and Intensity Resolution Spatial resolution ndash the smallest discernible detail in an image

Measures line pairs per unit distance dots (pixels) per unit distance

Image resolution = the largest number of discernible line pairs per unit distance

(eg 100 line pairs per mm)

Dots per unit distance are commonly used in printing and publishing

In US the measure is expressed in dots per inch (dpi)

(newspapers are printed with 75 dpi glossy brochures at 175 dpi)

Intensity resolution ndash the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations

L=2k ndash most common k = 8

Intensity resolution in practice is given by k (number of bits used to quantize intensity)

Digital Image Processing

Week 1

Fig1 Reducing spatial resolution 1250 dpi(upper left) 300 dpi (upper right)

150 dpi (lower left) 72 dpi (lower right)

Digital Image Processing

Week 1

Reducing the number of gray levels 256 128 64 32

Digital Image Processing

Week 1

Reducing the number of gray levels 16 8 4 2

Digital Image Processing

Week 1

Image Interpolation – used in zooming, shrinking, rotating, and geometric corrections

Shrinking and zooming – image resizing – are image resampling methods.

Interpolation is the process of using known data to estimate values at unknown locations

Suppose we have an image of size 500 times 500 pixels that has to be enlarged 15 times to

750times750 pixels One way to do this is to create an imaginary 750 times 750 grid with the

same spacing as the original and then shrink it so that it fits exactly over the original

image The pixel spacing in the 750 times 750 grid will be less than in the original image

Problem assignment of intensity-level in the new 750 times 750 grid

Nearest neighbor interpolation assign for every point in the new grid (750 times 750) the

intensity of the closest pixel (nearest neighbor) from the oldoriginal grid (500 times 500)

This technique has the tendency to produce undesirable effects like severe distortion of

straight edges


Bilinear interpolation – assign to the new (x, y) location the intensity v(x, y) = a·x + b·y + c·x·y + d,

where the four coefficients are determined from the 4 equations in 4 unknowns that can

be written using the 4 nearest neighbors of point (x y)

Bilinear interpolation gives much better results than nearest neighbor interpolation with a

modest increase in computational effort

Bicubic interpolation – assign to the new (x, y) location an intensity that involves the 16 nearest neighbors of the point:

v(x, y) = Σ_{i=0}^{3} Σ_{j=0}^{3} c_{ij} · x^i · y^j

The coefficients c_{ij} are obtained by solving a 16 × 16 linear system: the 16 equations

Σ_{i=0}^{3} Σ_{j=0}^{3} c_{ij} x^i y^j = intensity levels of the 16 nearest neighbors of (x, y)


Generally bicubic interpolation does a better job of preserving fine detail than the

bilinear technique Bicubic interpolation is the standard used in commercial image editing

programs such as Adobe Photoshop and Corel Photopaint
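To make these schemes concrete, here is a minimal sketch of nearest-neighbor and bilinear resizing for a grayscale image stored as a float numpy array (function names and the test image are illustrative, not part of the original material):

import numpy as np

def resize_nearest(img, new_h, new_w):
    # Overlay a new_h x new_w grid on the original image and copy the
    # intensity of the closest pixel in the old grid (nearest neighbor).
    h, w = img.shape
    rows = np.arange(new_h) * h // new_h
    cols = np.arange(new_w) * w // new_w
    return img[rows[:, None], cols[None, :]]

def resize_bilinear(img, new_h, new_w):
    # Neighbor-weighted form equivalent to v(x, y) = a*x + b*y + c*x*y + d
    # on each unit cell of the source grid; assumes new_h, new_w > 1.
    h, w = img.shape
    y = np.arange(new_h) * (h - 1) / (new_h - 1)
    x = np.arange(new_w) * (w - 1) / (new_w - 1)
    y0 = y.astype(int); x0 = x.astype(int)
    y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
    wy = (y - y0)[:, None]; wx = (x - x0)[None, :]
    tl = img[y0[:, None], x0[None, :]]; tr = img[y0[:, None], x1[None, :]]
    bl = img[y1[:, None], x0[None, :]]; br = img[y1[:, None], x1[None, :]]
    return tl*(1 - wy)*(1 - wx) + tr*(1 - wy)*wx + bl*wy*(1 - wx) + br*wy*wx

img = np.arange(25, dtype=float).reshape(5, 5)
print(resize_nearest(img, 8, 8))
print(resize_bilinear(img, 8, 8))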

Figure 2(a) is the same as Fig. 1(d), which was obtained by reducing the resolution of the 1250 dpi image in Fig. 1(a) to 72 dpi (the size shrank from 3692 × 2812 to 213 × 162) and then zooming the reduced image back to its original size. To generate Fig. 1(d), nearest neighbor interpolation was used (both for shrinking and zooming).

Figures 2(b) and (c) were generated using the same steps but using bilinear and bicubic

interpolation respectively Figures 2(d)+(e)+(f) were obtained by reducing the resolution

from 1250 dpi to 150 dpi (instead of 72 dpi)


Fig. 2 – Interpolation examples for zooming and shrinking (nearest neighbor, bilinear, bicubic)


Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors:

(x+1, y), (x−1, y), (x, y+1), (x, y−1)

This set of pixels, called the 4-neighbors of p, is denoted by N4(p).

The 4 diagonal neighbors of p have coordinates

(x+1, y+1), (x+1, y−1), (x−1, y+1), (x−1, y−1)

and are denoted ND(p).

The horizontal vertical and diagonal neighbors are called the 8-neighbors of p denoted

N8 (p)

If (xy) is on the border of the image some of the neighbor locations in ND(p) and N8(p)

fall outside the image
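A small helper, as a sketch, for generating the neighbor sets and discarding locations that fall outside an M × N image (names are illustrative):

def n4(p):
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(p):
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def n8(p):
    return n4(p) | nd(p)

def inside(p, M, N):
    # Keep only locations that fall inside an M x N image.
    x, y = p
    return 0 <= x < M and 0 <= y < N

p = (0, 2)  # border pixel: some of its 8-neighbors are dropped
print(sorted(q for q in n8(p) if inside(q, 4, 4)))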


Adjacency Connectivity Regions Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image, V ⊆ {0, 1} (V = {0} or V = {1})

- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, 1, …, 255}

We consider 3 types of adjacency:

(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if q ∈ N4(p)

(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if q ∈ N8(p)

(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if

q ∈ N4(p), or

q ∈ ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency It is introduced to eliminate the

ambiguities that often arise when 8-adjacency is used Consider the example


binary image, with V = {1}:

0 1 1
0 1 0
0 0 1

The three pixels at the top (first line) in the above example show multiple (ambiguous)

8-adjacency as indicated by the dashed lines This ambiguity is removed by using

m-adjacency


A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

(x₀, y₀), (x₁, y₁), …, (x_n, y_n)

where (x₀, y₀) = (x, y), (x_n, y_n) = (s, t), and pixels (x_i, y_i) and (x_{i−1}, y_{i−1}) are adjacent for i = 1, 2, …, n.

The length of the path is n. If (x₀, y₀) = (x_n, y_n), the path is closed.

Depending on the type of adjacency considered the paths are 4- 8- or m-paths

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image R is a region of the image if R is a connected set

Two regions R1 and R2 are said to be adjacent if R1 ∪ R2 forms a connected set. Regions that are not adjacent are said to be disjoint. When referring to regions, only 4- and 8-adjacency are considered.


Suppose that an image contains K disjoint regions R_k, k = 1, …, K, none of which touches the image border. Let R_u denote the union of all the K regions, R_u = R1 ∪ … ∪ R_K, and let (R_u)^c denote its complement.

We call all the points in R_u the foreground of the image, and the points in (R_u)^c the background of the image.

The boundary (border or contour) of a region R is the set of points that are adjacent to

points in the complement of R (R)c The border of an image is the set of pixels in the

region that have at least one background neighbor This definition is referred to as the

inner border to distinguish it from the notion of outer border which is the corresponding

border in the background


Distance measures

For pixels p q and z with coordinates (xy) (st) and (vw) respectively D is a distance

function or metric if

(a) D(p, q) ≥ 0, with D(p, q) = 0 iff p = q

(b) D(p, q) = D(q, p)

(c) D(p, z) ≤ D(p, q) + D(q, z)

The Euclidean distance between p and q is defined as

D_e(p, q) = [(x − s)² + (y − t)²]^{1/2}

The pixels q for which D_e(p, q) ≤ r are the points contained in a disk of radius r centered at (x, y).


The D4 distance (also called city-block distance) between p and q is defined as

D4(p, q) = |x − s| + |y − t|

The pixels q for which D4(p, q) ≤ r form a diamond centered at (x, y). For r = 2:

        2
    2   1   2
2   1   0   1   2
    2   1   2
        2

The pixels with D4 = 1 are the 4-neighbors of (x, y).

The D8 distance (called the chessboard distance) between p and q is defined as

D8(p, q) = max(|x − s|, |y − t|)

The pixels q for which D8(p, q) ≤ r form a square centered at (x, y).


For r = 2:

2 2 2 2 2
2 1 1 1 2
2 1 0 1 2
2 1 1 1 2
2 2 2 2 2

The pixels with D8 = 1 are the 8-neighbors of (x, y).

D4 and D8 distances are independent of any paths that might exist between p and q

because these distances involve only the coordinates of the point
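The three metrics in code; the loop reproduces the D4 ≤ 2 diamond shown above (a sketch, names illustrative):

def d_e(p, q):  # Euclidean distance
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def d_4(p, q):  # city-block distance
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d_8(p, q):  # chessboard distance
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

for x in range(-2, 3):
    print(' '.join(str(d_4((0, 0), (x, y))) if d_4((0, 0), (x, y)) <= 2 else '.'
                   for y in range(-2, 3)))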


Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

Consider two 2 × 2 images:

A = [ a11  a12 ; a21  a22 ],  B = [ b11  b12 ; b21  b22 ]

Array product:

A .* B = [ a11·b11   a12·b12 ; a21·b21   a22·b22 ]

Matrix product:

A · B = [ a11·b11 + a12·b21   a11·b12 + a12·b22 ; a21·b11 + a22·b21   a21·b12 + a22·b22 ]

We assume array operations unless stated otherwise
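In numpy this is the distinction between the * operator (array product) and the @ operator (matrix product); a quick check on the 2 × 2 pattern above:

import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A * B)  # array (elementwise) product: [[5, 12], [21, 32]]
print(A @ B)  # matrix product: [[19, 22], [43, 50]]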


Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether they are linear or nonlinear. Consider an operator H with

H[f(x, y)] = g(x, y)

H is said to be a linear operator if

H[a·f1(x, y) + b·f2(x, y)] = a·H[f1(x, y)] + b·H[f2(x, y)]

for any scalars a, b and any images f1, f2.

Example of a nonlinear operator: the maximum value of the pixels of image f,

H[f] = max{ f(x, y) }

Take

f1 = [ 0  2 ; 2  3 ],  f2 = [ 6  5 ; 4  7 ],  a = 1, b = −1.

Then

max(a·f1 + b·f2) = max [ −6  −3 ; −2  −4 ] = −2,

while

a·max(f1) + b·max(f2) = 3 + (−1)·7 = −4 ≠ −2,

so the max operator is nonlinear.
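The same check in a few lines of numpy (a sketch):

import numpy as np

f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 5], [4, 7]])
a, b = 1, -1

lhs = (a * f1 + b * f2).max()        # H[a f1 + b f2] = -2
rhs = a * f1.max() + b * f2.max()    # a H[f1] + b H[f2] = -4
print(lhs, rhs, lhs == rhs)          # -2 -4 False -> max is not linear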

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise:

g(x, y) = f(x, y) + η(x, y)

f(x, y) – the noiseless image; η(x, y) – the noise, uncorrelated and with zero average value.

For a random variable z with mean m, E[(z − m)²] is the variance (E(·) is the expected value). The covariance of two random variables z1 and z2 is defined as E[(z1 − m1)(z2 − m2)]. The two random variables are uncorrelated when their covariance is 0.


Objective: reduce noise by averaging a set of noisy images g_i(x, y) (a technique frequently used in image enhancement):

ḡ(x, y) = (1/K) · Σ_{i=1}^{K} g_i(x, y)

If the noise satisfies the properties stated above, we have

E{ḡ(x, y)} = f(x, y),   σ²_ḡ(x, y) = (1/K) · σ²_η(x, y)

where E{ḡ(x, y)} is the expected value of ḡ, and σ²_ḡ(x, y) and σ²_η(x, y) are the variances of ḡ and η, respectively. The standard deviation (square root of the variance) at any point in the average image is

σ_ḡ(x, y) = (1/√K) · σ_η(x, y)


As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because E{ḡ(x, y)} = f(x, y), this means that ḡ(x, y) approaches f(x, y) as the number of noisy images used in the averaging process increases.

An important application of image averaging is in the field of astronomy, where imaging under very low light levels frequently causes sensor noise to render single images virtually useless for analysis. Figure 3(a) shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels. Figures 3(b)–(f) show the result of averaging 5, 10, 20, 50 and 100 images, respectively.
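A quick simulation of the effect, as a sketch (the constant test image is made up; the noise level σ = 64 matches the example above):

import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 100.0)  # noiseless image

def noisy_stack(k):
    # K noisy observations g_i = f + eta_i, with eta ~ N(0, 64^2)
    return f + rng.normal(0.0, 64.0, size=(k,) + f.shape)

for k in (1, 5, 10, 20, 50, 100):
    g_bar = noisy_stack(k).mean(axis=0)       # average of K noisy images
    print(k, round(float(g_bar.std()), 1))    # tends to 64 / sqrt(K)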


Fig. 3 Image of the galaxy pair NGC 3314 corrupted by additive Gaussian noise (left corner); results of averaging 5, 10, 20, 50, and 100 noisy images.


A frequent application of image subtraction is in the enhancement of differences between

images

Fig. 4 (a) Infrared image of the Washington, DC area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images.

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a) The two images seem almost the same Figure 4(c) is the difference between

images (a) and (b) Black (0) values in Figure (c) indicate locations where there is no

difference between images (a) and (b)


Mask mode radiography: g(x, y) = f(x, y) − h(x, y)

h(x y) the mask is an X-ray image of a region of a patientrsquos body captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source The procedure consists of injecting an X-ray contrast medium into the patientrsquos

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x, y) the differences between h and f appear as enhanced detail.

Because images are captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries in the area being observed.


Fig. 5 – Angiography – subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c), enhanced.


An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form g(x, y) = f(x, y) · h(x, y)

f(x, y) – the "perfect image"; h(x, y) – the shading function

When the shading function is known:

f(x, y) = g(x, y) / h(x, y)

When h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.
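A minimal sketch of shading correction under the multiplicative model, with a synthetic shading function (all names and values are illustrative):

import numpy as np

h, w = 128, 128
yy, xx = np.mgrid[0:h, 0:w]
shading = 0.4 + 0.6 * (xx / (w - 1))   # h(x, y): brighter toward the right
perfect = np.full((h, w), 200.0)       # f(x, y), the "perfect image"
g = perfect * shading                  # sensor output g = f * h

flat = 1.0 * shading                   # image of a constant-intensity target
estimate = g / np.maximum(flat, 1e-6)  # f ~= g / h (guard against division by 0)
print(np.allclose(estimate, perfect))  # True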


Fig. 6 Shading correction: (a) shaded image of a tungsten filament, magnified 130 times; (b) shading pattern; (c) corrected image.


Another use of image multiplication is in masking, also called region of interest (ROI), operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of the ROI can be arbitrary, but usually it is rectangular.

Fig. 7 (a) Digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b).


In practice, most images are displayed using 8 bits, so the image values are in the range [0, 255]. For TIFF and JPEG images, conversion to this range is automatic, but the conversion depends on the system used.

The difference of two images can produce an image with values in the range [−255, 255]; the addition of two images produces values in the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure: compute

f_m = f − min(f),  then  f_s = K · f_m / max(f_m)

which yields an image f_s whose values are in the range [0, K] (K = 255 for 8-bit displays).
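The procedure in code, as a sketch (K = 255 for an 8-bit display):

import numpy as np

def rescale(f, K=255):
    # f_m = f - min(f); f_s = K * f_m / max(f_m): values end up in [0, K]
    fm = f - f.min()
    fs = K * fm / max(float(fm.max()), 1e-12)  # avoid 0/0 for constant images
    return fs.astype(np.uint8)

diff = np.array([[-255.0, 0.0], [128.0, 255.0]])  # e.g. a difference image
print(rescale(diff))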


Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels: s = T(z),

where z is the intensity of a pixel in the original image and s is the intensity of the corresponding pixel in the processed image.


Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

g(x, y) = (1/(m·n)) · Σ_{(r,c) ∈ S_xy} f(r, c)

The net effect is to perform local blurring in the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
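A sketch of this neighborhood averaging (edge-padded borders; odd m and n assumed; names illustrative):

import numpy as np

def mean_filter(f, m, n):
    # Average over an m x n neighborhood S_xy centered at each (x, y).
    pad = np.pad(f, ((m // 2,), (n // 2,)), mode='edge')
    out = np.zeros(f.shape, dtype=float)
    for dr in range(m):
        for dc in range(n):
            out += pad[dr:dr + f.shape[0], dc:dc + f.shape[1]]
    return out / (m * n)

print(mean_filter(np.eye(5), 3, 3))  # local blurring of a diagonal line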


Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules)

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation: (x, y) = T[(v, w)]

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image


T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform:

[x  y  1] = [v  w  1] · T = [v  w  1] ·
[ t11  t12  0 ]
[ t21  t22  0 ]
[ t31  t32  1 ]        (AT)

that is, x = t11·v + t21·w + t31 and y = t12·v + t22·w + t32.

This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3 × 3 matrix equal to the matrix product of the scaling, rotation, and translation matrices from Table 1.


Affine transformations


The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)


inverse mapping: scans the output pixel locations and, at each location (x, y), computes the corresponding location in the input image: (v, w) = T⁻¹[(x, y)]

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in numerous commercial implementations of spatial transformations (MATLAB, for example).
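A sketch of inverse mapping for the affine model (AT), using nearest-neighbor interpolation for simplicity (the rotation matrix and test image are illustrative):

import numpy as np

def warp_affine_inverse(img, T):
    # For each output pixel (x, y), compute (v, w, 1) = (x, y, 1) @ inv(T),
    # then copy the nearest input pixel if it lies inside the image.
    Tinv = np.linalg.inv(T)
    M, N = img.shape
    out = np.zeros_like(img)
    xs, ys = np.mgrid[0:M, 0:N]          # (x, y) treated as (row, column)
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(M * N)], axis=1)
    vw = coords @ Tinv
    v = np.rint(vw[:, 0]).astype(int)
    w = np.rint(vw[:, 1]).astype(int)
    ok = (v >= 0) & (v < M) & (w >= 0) & (w < N)
    out[xs.ravel()[ok], ys.ravel()[ok]] = img[v[ok], w[ok]]
    return out

theta = np.pi / 6                        # rotation by 30 degrees
T = np.array([[ np.cos(theta), np.sin(theta), 0],
              [-np.sin(theta), np.cos(theta), 0],
              [0, 0, 1]])
img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0
rotated = warp_affine_inverse(img, T)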


Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more images taken at approximately the same time, but using different imaging systems (an MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image


How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points, both in the input image and in the reference image. A simple model based on a bilinear approximation is given by

x = c1·v + c2·w + c3·v·w + c4
y = c5·v + c6·w + c7·v·w + c8

where (v, w) and (x, y) are the coordinates of corresponding tie points (we get an 8 × 8 linear system for the c_i).
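In code, the 8 × 8 system decouples into two 4 × 4 systems, one for c1..c4 and one for c5..c8 (the tie-point coordinates below are made up):

import numpy as np

vw = np.array([[10, 10], [10, 200], [200, 10], [200, 200]], dtype=float)  # input
xy = np.array([[12, 15], [14, 208], [205, 11], [210, 212]], dtype=float)  # reference

# Rows of the design matrix: [v, w, v*w, 1]
A = np.column_stack([vw[:, 0], vw[:, 1], vw[:, 0] * vw[:, 1], np.ones(4)])
c_x = np.linalg.solve(A, xy[:, 0])  # c1, c2, c3, c4
c_y = np.linalg.solve(A, xy[:, 1])  # c5, c6, c7, c8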


When 4 tie points are insufficient to obtain satisfactory registration, an approach used frequently is to select a larger number of tie points and, using this new set of tie points, subdivide the image into rectangular regions marked by groups of 4 tie points. On each subregion marked by 4 tie points we apply the transformation model described above.

The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometric distortion.


(a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c)


Probabilistic Methods

z_i = the values of all possible intensities in an M × N digital image, i = 0, 1, …, L−1

p(z_k) = the probability that the intensity level z_k occurs in the given image:

p(z_k) = n_k / (M · N)

where n_k = the number of times that intensity z_k occurs in the image (M·N is the total number of pixels in the image), so

Σ_{k=0}^{L−1} p(z_k) = 1

The mean (average) intensity of an image is given by

m = Σ_{k=0}^{L−1} z_k · p(z_k)


The variance of the intensities is

σ² = Σ_{k=0}^{L−1} (z_k − m)² · p(z_k)

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

μ_n(z) = Σ_{k=0}^{L−1} (z_k − m)^n · p(z_k)

(note that μ_0(z) = 1, μ_1(z) = 0, μ_2(z) = σ²)

μ_3(z) > 0: the intensities are biased to values higher than the mean

μ_3(z) < 0: the intensities are biased to values lower than the mean


μ_3(z) = 0: the intensities are distributed approximately equally on both sides of the mean

Fig. 1 (a) Low contrast, (b) medium contrast, (c) high contrast

Figure 1(a) – standard deviation 14.3 (variance = 204.5)

Figure 1(b) – standard deviation 31.6 (variance = 998.6)

Figure 1(c) – standard deviation 49.2 (variance = 2420.6)
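A sketch computing these statistics from the normalized histogram of an image (the random test image is illustrative):

import numpy as np

img = np.random.default_rng(1).integers(0, 256, size=(100, 100))
L = 256
n_k = np.bincount(img.ravel(), minlength=L)
p = n_k / img.size               # p(z_k) = n_k / (M*N)
z = np.arange(L)

m = (z * p).sum()                # mean intensity
var = ((z - m) ** 2 * p).sum()   # variance; its square root measures contrast
mu3 = ((z - m) ** 3 * p).sum()   # third moment: sign shows the intensity bias
print(m, var ** 0.5, mu3)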


Intensity Transformations and Spatial Filtering

g(x, y) = T[f(x, y)]

f(x, y) – input image; g(x, y) – output image; T – an operator on f defined over a neighborhood of (x, y)

- the neighborhood of the point (x, y), S_xy, usually is rectangular, centered on (x, y), and much smaller in size than the image


- spatial filtering: the operator T (the neighborhood and the operation applied on it) is called a spatial filter (spatial mask, kernel, template, or window)

When S_xy has size 1 × 1, T becomes an intensity (gray-level or mapping) transformation function

s = T(r)

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2 (left): T produces an output image of higher contrast than the original, by darkening the intensity levels below k and brightening the levels above k – this technique is called contrast stretching.

Fig. 2 Intensity transformation functions: left – contrast stretching; right – thresholding function


Figure 2 (right): T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image


(Figure: original image and its negative)


Log Transformations: s = T(r) = c · log(1 + r), where c is a constant and r ≥ 0

(Figure: some basic intensity transformation functions)


This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) – intensity values in the range 0 to 1.5 × 10⁶

Figure 4(b) – log transformation of Figure 4(a) with c = 1 – range 0 to 6.2

Fig. 4 (a) Fourier spectrum; (b) log transformation applied to (a), c = 1


Power-Law (Gamma) Transformations

s = T(r) = c · r^γ, with c, γ – positive constants (sometimes written s = c · (r + ε)^γ to account for an offset when the input is zero)

(Figure: plots of the gamma transformation for different values of γ, c = 1)


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. The curves with γ > 1 have the opposite effect of those generated with γ < 1.

c = γ = 1: identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction
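The three basic transformations side by side, as a sketch on the 8-bit intensity scale (the constants are illustrative; c in the log transform is chosen so the output also spans [0, 255]):

import numpy as np

r = np.linspace(0, 255, 256)                    # input intensities, L = 256

negative = 255 - r                              # s = L - 1 - r
log_t = (255 / np.log(256)) * np.log(1 + r)     # s = c log(1 + r)
gamma_t = 255 * (r / 255) ** 0.4                # s = c r^gamma, gamma < 1 brightens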


(a) aerial image; (b)–(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0 and 5.0, respectively


Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

Fig. 5


s = T(r) =
  (s1 / r1) · r,                                    r ∈ [0, r1]
  s1 + (s2 − s1)/(r2 − r1) · (r − r1),              r ∈ [r1, r2]
  s2 + (L − 1 − s2)/(L − 1 − r2) · (r − r2),        r ∈ [r2, L − 1]


r1 = s1 and r2 = s2: identity transformation (no change)

r1 = r2, s1 = 0, s2 = L − 1: thresholding function

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c): contrast stretching, obtained by setting the parameters r1 = r_min, s1 = 0, r2 = r_max, s2 = L − 1, where r_min and r_max denote the minimum and maximum gray levels in the image, respectively. Thus, the transformation function stretched the levels linearly from their original range to the full range [0, L−1].

Figure 5(d): the thresholding function was used, with r1 = r2 = m, s1 = 0, s2 = L − 1, where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times
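A sketch of the piecewise-linear stretch above (assumes 0 < r1 < r2 < L − 1; the sample values are illustrative):

import numpy as np

def contrast_stretch(r, r1, s1, r2, s2, L=256):
    r = np.asarray(r, dtype=float)
    return np.where(
        r < r1, s1 / r1 * r,
        np.where(r <= r2,
                 s1 + (s2 - s1) / (r2 - r1) * (r - r1),
                 s2 + (L - 1 - s2) / (L - 1 - r2) * (r - r2)))

x = np.array([10, 50, 128, 200, 250])
print(contrast_stretch(x, 40, 20, 220, 240))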


Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities (Figure 3.11(a))

2. brighten (or darken) the desired range of intensities but leave unchanged all other intensities in the image (Figure 3.11(b))


Figure 6 (left) – aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the scale of intensities. This type of enhancement produces a binary image,

(first transformation: highlights intensity range [A, B] and reduces all other intensities to a lower level; second transformation: highlights range [A, B] and preserves all other intensities)


which is useful for studying the shape of the flow of the contrast substance (e.g., to detect blockages).

In Figure 6 (right) the second technique was used: a band of intensities in the mid-gray region around the mean intensity was set to black; the other intensities remain unchanged.

Fig 6 - Aortic angiogram and intensity sliced versions


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2.

This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit, plane 8 – the highest-order bit).


The binary image for the 8th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all the intensities between 0 and 127 to 0, and maps all levels between 128 and 255 to 1.

The bit-slicing technique is useful for analyzing the relative importance of each bit in the image – it helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
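A sketch of bit-plane extraction with shifts and masks (the text numbers planes 1–8; below, index b = plane number − 1; the test image is illustrative):

import numpy as np

img = np.random.default_rng(2).integers(0, 256, size=(4, 4), dtype=np.uint8)

planes = [(img >> b) & 1 for b in range(8)]  # b = 7 is the highest-order plane
print(planes[7])                             # 1 exactly where img >= 128

# Approximate reconstruction from the two highest planes only:
approx = sum(planes[b].astype(np.uint16) << b for b in (7, 6))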



Examples of X-ray imaging: chest X-ray, aortic angiogram, head CT, circuit boards, the Cygnus Loop.


Imaging in the Ultraviolet Band

Lithography, industrial inspection, microscopy, biological imaging, astronomical observations.

Ultraviolet light is used in fluorescence microscopy. Ultraviolet light is not visible to the human eye, but when a photon of ultraviolet radiation collides with an electron in an atom of a fluorescent material, it elevates the electron to a higher energy level. After that, the electron relaxes to a lower level and emits light in the form of a lower-energy photon, in the visible (red) light region.

Fluorescence = emission of light by a substance that has absorbed light or other electromagnetic radiation of a different wavelength.

Fluorescence microscope = uses an excitation light to irradiate a prepared specimen, and then separates the much weaker radiating fluorescent light from the brighter excitation light.


Imaging in the Visible and Infrared Bands

Light microscopy, astronomy, remote sensing, industry, law enforcement.

LANDSAT satellite – obtained and transmitted images of the Earth from space, for purposes of monitoring environmental conditions on the planet.

Weather observation and prediction are major applications of multispectral imaging from satellites.


Table 1 – Thematic bands in NASA's LANDSAT satellite


Satellite images of Washington DC area in spectral bands of the Table 1


Examples of light microscopy: Taxol (anticancer agent) magnified 250×, cholesterol (40×), microprocessor (60×), nickel oxide thin film (600×), surface of audio CD (1750×), organic superconductor (450×).


Automated visual inspection of manufactured goods:

(a) a circuit board controller; (b) packaged pills; (c) bottles; (d) air bubbles in a clear-plastic product; (e) cereal; (f) image of an intraocular implant.


Imaging in the Microwave Band

The dominant application of imaging in the microwave band: radar.

• radar has the ability to collect data over virtually any region at any time, regardless of weather or ambient light conditions

• some radar waves can penetrate clouds; under certain conditions they can also penetrate vegetation, ice, and dry sand

• sometimes radar is the only way to explore inaccessible regions of the Earth's surface

An imaging radar works like a flash camera: it provides its own illumination (microwave pulses) to light an area on the ground and take a snapshot image. Instead of a camera lens, a radar uses an antenna and a digital device to record the images. In a radar image one can see only the microwave energy that was reflected back toward the radar antenna.


Spaceborne radar image of mountains in southeast Tibet


Imaging in the Radio Band

Medicine, astronomy.

MRI = Magnetic Resonance Imaging

This technique places the patient in a powerful magnet and passes short pulses of radio waves through his or her body. Each pulse causes a responding pulse of radio waves to be emitted by the patient's tissues. The location from which these signals originate and their strength are determined by a computer, which produces a 2D picture of a section of the patient.


MRI images of a human knee (left) and spine (right)


Images of the Crab Pulsar covering the electromagnetic spectrum

Gamma X-ray Optical Infrared Radio


Other Imaging Modalities

Acoustic imaging, electron microscopy, synthetic (computer-generated) imaging.

Imaging using sound: geological explorations, industry, medicine.

Mineral and oil exploration: for image acquisition over land, one of the main approaches is to use a large truck and a large flat steel plate. The plate is pressed on the ground by the truck, and the truck is vibrated through a frequency spectrum up to 100 Hz. The strength and speed of the returning sound waves are determined by the composition of the Earth below the surface; these are analyzed by computer, and images are generated from the resulting analysis.


Biometry - iris


Biometry - fingerprint


Face detection and recognition


Gender identification


Image morphing


Fundamental Steps in DIP

methods whose input and output are images

methods whose inputs are images but whose outputs are attributes extracted from those images


Outputs are images:

• image acquisition
• image filtering and enhancement
• image restoration
• color image processing
• wavelets and multiresolution processing
• compression
• morphological processing


Outputs are attributes:

• morphological processing
• segmentation
• representation and description
• object recognition


Image acquisition – may involve preprocessing, such as scaling.

Image enhancement:

• manipulating an image so that the result is more suitable than the original for a specific operation
• enhancement is problem oriented
• there is no general 'theory' of image enhancement
• enhancement uses subjective methods for image improvement
• enhancement is based on human subjective preferences regarding what is a "good" enhancement result


Image restoration:

• improving the appearance of an image
• restoration is objective – the techniques for restoration are based on mathematical or probabilistic models of image degradation

Color image processing:

• fundamental concepts in color models
• basic color processing in a digital domain

Wavelets and multiresolution processing:

representing images at various degrees of resolution


Compression

reducing the storage required to save an image or the bandwidth required to transmit it

Morphological processing

• tools for extracting image components that are useful in the representation and description of shape
• a transition from processes that output images to processes that output image attributes


Segmentation:

• partitioning an image into its constituent parts or objects
• autonomous segmentation is one of the most difficult tasks of DIP
• the more accurate the segmentation, the more likely recognition is to succeed

Representation and description (almost always follows segmentation):

• segmentation produces either the boundary of a region or all the points in the region itself
• converting the data produced by segmentation to a form suitable for computer processing


• boundary representation: the focus is on external shape characteristics, such as corners or inflections
• complete region: the focus is on internal properties, such as texture or skeletal shape
• description is also called feature extraction – extracting attributes that result in some quantitative information of interest, or are basic for differentiating one class of objects from another

Object recognition: the process of assigning a label (e.g., "vehicle") to an object based on its descriptors.

Knowledge database


Simplified diagram of a cross section of the human eye


Three membranes enclose the eye: the cornea and sclera (outer cover), the choroid, and the retina.

The cornea is a tough, transparent tissue that covers the anterior surface of the eye. Continuous with the cornea, the sclera is an opaque membrane that encloses the remainder of the optic globe.

The choroid lies directly below the sclera. This membrane contains a network of blood vessels (the major source of nutrition to the eye). The choroid is pigmented and helps reduce the amount of light entering the eye. At its anterior extreme, the choroid is divided into the ciliary body and the iris. The iris contracts and expands to control the amount of light.


The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to the ciliary body (60–70% water, about 6% fat, plus protein). The lens is slightly yellow and absorbs approximately 8% of the visible light (infrared and ultraviolet light are absorbed by proteins in the lens).

The innermost membrane is the retina. When the eye is properly focused, light from an object outside the eye is imaged on the retina. Vision is possible because of the distribution of discrete light receptors on the surface of the retina: cones and rods (6–7 million cones, 75–150 million rods).

Cones: located in the central part of the retina (the fovea); sensitive to colors; vision of detail; each cone is linked to its own nerve. Cone vision = photopic or bright-light vision.

Fovea = the place where the image of the object of interest falls on.


Rods: distributed over all the retina surface; several rods are connected to a single nerve; not specialized in detail vision; serve to give a general, overall picture of the field of view; not involved in color vision; sensitive to low levels of illumination.

Blind spot: region without receptors.


Image formation in the eye

Ordinary photographic camera: the lens has a fixed focal length; focusing at various distances is done by modifying the distance between the lens and the image plane (where the film or imaging chip is located).

Human eye: the distance between the lens and the retina (the imaging region) is fixed; the focal length needed to achieve proper focus is obtained by varying the shape of the lens (the fibers in the ciliary body accomplish this, flattening or thickening the lens for distant or near objects, respectively).

Distance between lens and retina along the visual axis: 17 mm.

Range of focal lengths: 14 mm to 17 mm.


Illustration of the Mach band effect: perceived intensity is not a simple function of actual intensity.


All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter


Optical illusions


(Color-word slide: ALBASTRU = blue, VERDE = green, GALBEN = yellow, ROSU = red, PORTOCALIU = orange, VISINIU = burgundy, ALB = white)


Light

• achromatic or monochromatic light – light that is void of color; the attribute of such light is its intensity, or amount. Gray level is used to describe monochromatic intensity because it ranges from black, to grays, and to white.
• chromatic light spans the electromagnetic energy spectrum from approximately 0.43 to 0.79 μm.

Quantities that describe the quality of a chromatic light source:

• radiance: the total amount of energy that flows from the light source; it is usually measured in watts (W)
• luminance: measured in lumens (lm); gives a measure of the amount of energy an observer perceives from a light source

For example, light emitted from a source operating in the far infrared region of the spectrum could have significant energy (radiance), but an observer would hardly perceive it: its luminance would be almost zero.

• brightness: a subjective descriptor of light perception that is practically impossible to measure. It embodies the achromatic notion of intensity and is one of the key factors in describing color sensation.


The physical meaning of the values f(x, y) is determined by the source of the image.


Affine transform

11 1211 21 31

21 2212 22 33

31 32

0[ 1] [ 1] [ 1] 0

1

t tx t v t w t

x y v w T v w t ty t v t w t

t t

(AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

Digital Image Processing

Week 1

inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image (v w) 1( ) ( )v w T x y

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 25: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in


Imaging in the Ultraviolet Band

Applications: lithography, industrial inspection, microscopy, biological imaging, astronomical observations.

Ultraviolet light is used in fluorescence microscopy. Ultraviolet light is not visible to the human eye, but when a photon of ultraviolet radiation collides with an electron in an atom of a fluorescent material, it elevates the electron to a higher energy level; the electron then relaxes to a lower level and emits light in the form of a lower-energy photon in the visible (red) light region.

Fluorescence = emission of light by a substance that has absorbed light or other electromagnetic radiation of a different wavelength.

Fluorescence microscope = uses an excitation light to irradiate a prepared specimen, and then separates the much weaker radiating fluorescent light from the brighter excitation light.


Imaging in the Visible and Infrared Bands

Applications: light microscopy, astronomy, remote sensing, industry, law enforcement.

The LANDSAT satellite obtained and transmitted images of the Earth from space, for the purpose of monitoring environmental conditions on the planet.

Weather observation and prediction are also major applications of multispectral imaging from satellites.


Table 1 – Thematic bands in NASA's LANDSAT satellite


Satellite images of the Washington, DC area in the spectral bands of Table 1


Examples of light microscopy:
• Taxol (anticancer agent), magnified 250X
• Cholesterol (40X)
• Microprocessor (60X)
• Nickel oxide thin film (600X)
• Surface of audio CD (1750X)
• Organic superconductor (450X)


Automated visual inspection of manufactured goods:
(a) a circuit board controller
(b) packaged pills
(c) bottles
(d) air bubbles in a clear-plastic product
(e) cereal
(f) image of an intraocular implant


Imaging in the Microwave Band

The dominant application of imaging in the microwave band is radar:
• radar has the ability to collect data over virtually any region at any time, regardless of weather or ambient light conditions;
• some radar waves can penetrate clouds and, under certain conditions, can penetrate vegetation, ice, and dry sand;
• sometimes radar is the only way to explore inaccessible regions of the Earth's surface.

An imaging radar works like a flash camera: it provides its own illumination (microwave pulses) to light an area on the ground and take a snapshot image. Instead of a camera lens, a radar uses an antenna and a digital device to record the images. In a radar image, one can see only the microwave energy that was reflected back toward the radar antenna.


Spaceborne radar image of mountains in southeast Tibet


Imaging in the Radio Band

Applications: medicine, astronomy.

MRI = Magnetic Resonance Imaging. This technique places the patient in a powerful magnet and passes short pulses of radio waves through his or her body. Each pulse causes a responding pulse of radio waves to be emitted by the patient's tissues. The locations from which these signals originate and their strength are determined by a computer, which produces a 2D picture of a section of the patient.


MRI images of a human knee (left) and spine (right)


Images of the Crab Pulsar covering the electromagnetic spectrum: gamma, X-ray, optical, infrared, radio.


Other Imaging Modalities

Acoustic imaging, electron microscopy, synthetic (computer-generated) imaging.

Imaging using sound: geological explorations, industry, medicine.

Mineral and oil exploration: for image acquisition over land, one of the main approaches is to use a large truck and a large flat steel plate. The plate is pressed on the ground by the truck, and the truck is vibrated through a frequency spectrum up to 100 Hz. The strength and the speed of the returning sound waves are determined by the composition of the Earth below the surface. These are analysed by a computer, and images are generated from the resulting analysis.


Biometry - iris


Biometry - fingerprint


Face detection and recognition


Gender identification


Image morphing


Fundamental Steps in DIP

Two broad categories of methods:
• methods whose input and output are images;
• methods whose inputs are images but whose outputs are attributes extracted from those images.


Outputs are images:
• image acquisition
• image filtering and enhancement
• image restoration
• color image processing
• wavelets and multiresolution processing
• compression
• morphological processing


Outputs are attributes:
• morphological processing
• segmentation
• representation and description
• object recognition


Image acquisition – may involve preprocessing, such as scaling.

Image enhancement:
• manipulating an image so that the result is more suitable than the original for a specific operation;
• enhancement is problem oriented;
• there is no general 'theory' of image enhancement;
• enhancement uses subjective methods for image improvement;
• enhancement is based on human subjective preferences regarding what is a "good" enhancement result.


Image restoration:
• improving the appearance of an image;
• restoration is objective – the techniques for restoration are based on mathematical or probabilistic models of image degradation.

Color image processing:
• fundamental concepts in color models;
• basic color processing in a digital domain.

Wavelets and multiresolution processing: representing images in various degrees of resolution.


Compression: reducing the storage required to save an image, or the bandwidth required to transmit it.

Morphological processing:
• tools for extracting image components that are useful in the representation and description of shape;
• a transition from processes that output images to processes that output image attributes.


Segmentation:
• partitioning an image into its constituent parts or objects;
• autonomous segmentation is one of the most difficult tasks of DIP;
• the more accurate the segmentation, the more likely recognition is to succeed.

Representation and description (almost always follows segmentation):
• segmentation produces either the boundary of a region or all the points in the region itself;
• converting the data produced by segmentation to a form suitable for computer processing.


• boundary representation: the focus is on external shape characteristics, such as corners or inflections;
• complete region representation: the focus is on internal properties, such as texture or skeletal shape;
• description is also called feature extraction – extracting attributes that result in some quantitative information of interest, or that are basic for differentiating one class of objects from another.

Object recognition: the process of assigning a label (e.g. "vehicle") to an object based on its descriptors.

Knowledge database: prior knowledge about the problem domain, which guides the operation of the processing modules.


Simplified diagram of a cross section of the human eye.


Three membranes enclose the eye: the cornea and sclera outer cover, the choroid, and the retina.

The cornea is a tough, transparent tissue that covers the anterior surface of the eye. Continuous with the cornea, the sclera is an opaque membrane that encloses the remainder of the optic globe.

The choroid lies directly below the sclera. This membrane contains a network of blood vessels (the major source of nutrition of the eye). The choroid is pigmented and helps reduce the amount of light entering the eye. The choroid is divided (at its anterior extreme) into the ciliary body and the iris. The iris contracts and expands to control the amount of light.


The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to the ciliary body (60-70% water, about 6% fat, and more protein than any other tissue in the eye). The lens is colored slightly yellow. The lens absorbs approximately 8% of the visible light (infrared and ultraviolet light are absorbed by proteins in the lens).

The innermost membrane is the retina. When the eye is properly focused, light from an object outside the eye is imaged on the retina. Vision is possible because of the distribution of discrete light receptors on the surface of the retina: cones and rods (6-7 million cones, 75-150 million rods).

Cones: located in the central part of the retina (the fovea); sensitive to colors; responsible for vision of detail; each cone is linked to its own nerve. Cone vision = photopic or bright-light vision.

Fovea = the place where the image of the object of interest falls.


Rods: distributed over all the retina surface; several rods are connected to a single nerve; not specialized in detail vision; serve to give a general, overall picture of the field of view; not involved in color vision; sensitive to low levels of illumination.

Blind spot: region without receptors.


Image formation in the eye

Ordinary photographic camera: the lens has a fixed focal length; focusing at various distances is done by modifying the distance between the lens and the image plane (where the film or imaging chip is located).

Human eye: the distance between the lens and the retina (the imaging region) is fixed; the focal length needed to achieve proper focus is obtained by varying the shape of the lens (the fibers in the ciliary body accomplish this, flattening or thickening the lens for distant or near objects, respectively).

Distance between lens and retina along the visual axis = 17 mm; range of focal lengths = 14 mm to 17 mm.


Illustration of the Mach band effect: perceived intensity is not a simple function of actual intensity.


All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter


Optical illusions


Optical-illusion slide with Romanian color words: ALBASTRU (blue), VERDE (green), GALBEN (yellow), ROSU (red), PORTOCALIU (orange), GALBEN (yellow), VISINIU (burgundy), ALB (white).


Light
• Achromatic or monochromatic light – light that is void of color; the only attribute of such light is its intensity, or amount. The term gray level is used to describe monochromatic intensity, because it ranges from black, to grays, and finally to white.
• Chromatic light spans the electromagnetic energy spectrum from approximately 0.43 to 0.79 μm.

Quantities that describe the quality of a chromatic light source:
• radiance: the total amount of energy that flows from the light source; it is usually measured in watts (W);
• luminance: measured in lumens (lm); gives a measure of the amount of energy an observer perceives from a light source;


For example, light emitted from a source operating in the far infrared region of the spectrum could have significant energy (radiance), but an observer would hardly perceive it; its luminance would be almost zero.

• brightness: a subjective descriptor of light perception that is practically impossible to measure. It embodies the achromatic notion of intensity and is one of the key factors in describing color sensation.


The physical meaning of f(x, y) is determined by the source of the image. For an image generated from a physical process, f(x, y) is proportional to the energy radiated by the physical source.

f(x, y) is characterized by two components:
• i(x, y) = the illumination component: the amount of source illumination incident on the scene being viewed;
• r(x, y) = the reflectance component: the amount of illumination reflected by the objects in the scene.

    f(x, y) = i(x, y) · r(x, y),   0 < i(x, y) < ∞,   0 < r(x, y) < 1


r(x, y) = 0 means total absorption; r(x, y) = 1 means total reflectance. i(x, y) is determined by the illumination source; r(x, y) is determined by the characteristics of the imaged objects.

In practice,

    Lmin ≤ l = f(x, y) ≤ Lmax,   where Lmin = imin · rmin and Lmax = imax · rmax

Typical indoor values, without additional illumination: Lmin ≈ 10, Lmax ≈ 1000.

The interval [Lmin, Lmax] is called the gray (or intensity) scale. In practice the interval is shifted to [0, L-1]: l = 0 is considered black and l = L-1 is considered white, with the intermediate values being shades of gray.


Image Sampling and Quantization

The output of most sensors is a continuous voltage waveform related to the sensed scene. Converting a continuous image f to digital form requires:
- digitizing the coordinate values (x, y), which is called sampling;
- digitizing the amplitude values f(x, y), which is called quantization.


Continuous image projected onto a sensor array; result of image sampling and quantization.


Representing Digital Images

f(x, y), with x = 0, 1, …, M-1 and y = 0, 1, …, N-1 — x and y are spatial variables or spatial coordinates. The digital image is the M × N array

    f(x, y) = [ f(0,0)     f(0,1)     …   f(0,N-1)
                f(1,0)     f(1,1)     …   f(1,N-1)
                  ⋮           ⋮                ⋮
                f(M-1,0)   f(M-1,1)   …   f(M-1,N-1) ]

Each element of the array is called an image element, picture element, pel, or pixel. In matrix notation,

    A = [aij],   where   aij = f(x = i, y = j) = f(i, j)

f(0,0) is the upper left corner of the image.


M and N must be positive integers. The number of intensity levels is typically an integer power of 2, L = 2^k, and aij ∈ [0, L-1].

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system. The upper limit is determined by saturation, the lower limit by noise.


Number of bits required to store a digitized image:

    b = M · N · k   (for M = N, b = N²·k)

When an image can have 2^k intensity levels, the image is referred to as a k-bit image; an image with 256 discrete intensity values is an 8-bit image. For example, a 1024 × 1024 8-bit image requires b = 1024 · 1024 · 8 bits = 1,048,576 bytes = 1 MB of storage.


Spatial and Intensity Resolution

Spatial resolution = the smallest discernible detail in an image. Measures: line pairs per unit distance; dots (pixels) per unit distance.

Image resolution = the largest number of discernible line pairs per unit distance (e.g. 100 line pairs per mm). Dots per unit distance is the measure commonly used in printing and publishing; in the US this measure is expressed in dots per inch (dpi). (Newspapers are printed at 75 dpi, glossy brochures at 175 dpi.)

Intensity resolution = the smallest discernible change in intensity level. The number of intensity levels (L) is determined by hardware considerations: L = 2^k, most commonly with k = 8. In practice, intensity resolution is given by k, the number of bits used to quantize intensity.


Fig. 1 Reducing spatial resolution: 1250 dpi (upper left), 300 dpi (upper right), 150 dpi (lower left), 72 dpi (lower right).


Reducing the number of gray levels: 256, 128, 64, 32.


Reducing the number of gray levels: 16, 8, 4, 2.


Image Interpolation

Interpolation is used in zooming, shrinking, rotating, and geometric corrections. Shrinking and zooming are image resizing (image resampling) methods.

Interpolation is the process of using known data to estimate values at unknown locations.

Suppose we have an image of size 500 × 500 pixels that has to be enlarged 1.5 times, to 750 × 750 pixels. One way to do this is to create an imaginary 750 × 750 grid with the same spacing as the original, and then shrink it so that it fits exactly over the original image. The pixel spacing in the 750 × 750 grid will be less than in the original image. Problem: assignment of intensity levels in the new 750 × 750 grid.

Nearest neighbor interpolation: assign to every point in the new grid (750 × 750) the intensity of the closest pixel (nearest neighbor) from the old/original grid (500 × 500). This technique has the tendency to produce undesirable effects, like severe distortion of straight edges.


Bilinear interpolation: assign to the new (x, y) location the intensity

    v(x, y) = a·x + b·y + c·x·y + d

where the four coefficients are determined from the 4 equations in 4 unknowns that can be written using the 4 nearest neighbors of point (x, y). Bilinear interpolation gives much better results than nearest neighbor interpolation, with a modest increase in computational effort.

Bicubic interpolation: assign to the new (x, y) location an intensity that involves the 16 nearest neighbors of the point:

    v(x, y) = Σ_{i=0..3} Σ_{j=0..3} cij · x^i · y^j

The 16 coefficients cij are obtained by solving a 16 × 16 linear system written using the intensity levels of the 16 nearest neighbors of (x, y).
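As a concrete illustration, here is a minimal NumPy sketch of bilinear interpolation used to resize a grayscale image (all function names and the 5 × 5 test image are our own, not from the course material):

    import numpy as np

    def resize_bilinear(img, new_h, new_w):
        # Map every output pixel back into the input grid, then take a
        # weighted average of its 4 nearest input neighbors.
        h, w = img.shape
        ys = np.linspace(0, h - 1, new_h)
        xs = np.linspace(0, w - 1, new_w)
        y, x = np.meshgrid(ys, xs, indexing="ij")
        y0 = np.floor(y).astype(int); x0 = np.floor(x).astype(int)
        y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
        dy = y - y0; dx = x - x0
        top = (1 - dx) * img[y0, x0] + dx * img[y0, x1]
        bottom = (1 - dx) * img[y1, x0] + dx * img[y1, x1]
        return (1 - dy) * top + dy * bottom

    small = np.arange(25, dtype=float).reshape(5, 5)
    big = resize_bilinear(small, 8, 8)   # 5x5 enlarged to 8x8

The weighted average of the 4 neighbors is algebraically the same as fitting v(x, y) = a·x + b·y + c·x·y + d to them.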


Generally, bicubic interpolation does a better job of preserving fine detail than the bilinear technique. Bicubic interpolation is the standard used in commercial image editing programs, such as Adobe Photoshop and Corel Photopaint.

Figure 2(a) is the same as Fig. 1(d), which was obtained by reducing the resolution of the 1250 dpi image in Fig. 1(a) to 72 dpi (the size shrank from 3692 × 2812 to 213 × 162) and then zooming the reduced image back to its original size. To generate Fig. 1(d), nearest neighbor interpolation was used (both for shrinking and zooming). Figures 2(b) and (c) were generated using the same steps, but with bilinear and bicubic interpolation, respectively. Figures 2(d)+(e)+(f) were obtained by reducing the resolution from 1250 dpi to 150 dpi (instead of 72 dpi).


Fig. 2 – Interpolation examples for zooming and shrinking (nearest neighbor, bilinear, bicubic).


Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors:

    (x+1, y), (x-1, y), (x, y+1), (x, y-1)

This set of pixels, called the 4-neighbors of p, is denoted by N4(p).

The 4 diagonal neighbors of p have coordinates

    (x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)

and are denoted ND(p). The horizontal, vertical, and diagonal neighbors together are called the 8-neighbors of p, denoted N8(p).

If (x, y) is on the border of the image, some of the neighbor locations in ND(p) and N8(p) fall outside the image.


Adjacency, Connectivity, Regions, Boundaries

Denote by V the set of intensity levels used to define adjacency:
- in a binary image, V ⊆ {0, 1} (typically V = {1});
- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, 1, …, 255}.

We consider 3 types of adjacency:
(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if q ∈ N4(p);
(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if q ∈ N8(p);
(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if
    q ∈ N4(p), or
    q ∈ ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.

Mixed adjacency is a modification of 8-adjacency. It is introduced to eliminate the ambiguities that often arise when 8-adjacency is used. Consider the example:


Binary image, V = {1} (the original figure shows the same 3 × 3 arrangement three times: the raw pixels, the pixels with 8-adjacency paths drawn, and the pixels with m-adjacency paths drawn):

    0 1 1
    0 1 0
    0 0 1

The three pixels at the top (first line) in the above example show multiple (ambiguous) 8-adjacency, as indicated by the dashed lines in the figure. This ambiguity is removed by using m-adjacency.


A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

    (x0, y0) = (x, y), (x1, y1), …, (xn, yn) = (s, t)

where (xi, yi) and (xi-1, yi-1) are adjacent for i = 1, 2, …, n.

The length of the path is n. If (x0, y0) = (xn, yn), the path is closed. Depending on the type of adjacency considered, the paths are 4-, 8-, or m-paths.

Let S denote a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting only of pixels from S. S is a connected set if there is a path in S between any 2 pixels in S.

Let R be a subset of pixels in an image. R is a region of the image if R is a connected set. Two regions R1 and R2 are said to be adjacent if R1 ∪ R2 forms a connected set. Regions that are not adjacent are said to be disjoint. When referring to regions, only 4- and 8-adjacency are considered.


Suppose that an image contains K disjoint regions Rk, k = 1, …, K, none of which touches the image border. Let Ru = ∪_{k=1..K} Rk, and let (Ru)^c denote its complement. We call all the points in Ru the foreground of the image, and the points in (Ru)^c the background of the image.

The boundary (border or contour) of a region R is the set of points that are adjacent to points in the complement of R, (R)^c. In other words, the border of a region is the set of pixels in the region that have at least one background neighbor. This definition is referred to as the inner border, to distinguish it from the notion of outer border, which is the corresponding border in the background.


Distance measures

For pixels p, q, and z with coordinates (x, y), (s, t), and (v, w) respectively, D is a distance function or metric if:
(a) D(p, q) ≥ 0, with D(p, q) = 0 iff p = q;
(b) D(p, q) = D(q, p);
(c) D(p, z) ≤ D(p, q) + D(q, z).

The Euclidean distance between p and q is defined as

    De(p, q) = [(x - s)² + (y - t)²]^(1/2)

The pixels q for which De(p, q) ≤ r are the points contained in a disk of radius r centered at (x, y).


The D4 distance (also called city-block distance) between p and q is defined as

    D4(p, q) = |x - s| + |y - t|

The pixels q for which D4(p, q) ≤ r form a diamond centered at (x, y). For example, the pixels with D4 ≤ 2 from the center:

        2
      2 1 2
    2 1 0 1 2
      2 1 2
        2

The pixels with D4 = 1 are the 4-neighbors of (x, y).

The D8 distance (called the chessboard distance) between p and q is defined as

    D8(p, q) = max(|x - s|, |y - t|)

The pixels q for which D8(p, q) ≤ r form a square centered at (x, y).


For example, the pixels with D8 ≤ 2 from the center:

    2 2 2 2 2
    2 1 1 1 2
    2 1 0 1 2
    2 1 1 1 2
    2 2 2 2 2

The pixels with D8 = 1 are the 8-neighbors of (x, y).

The D4 and D8 distances are independent of any paths that might exist between p and q, because these distances involve only the coordinates of the points.
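The three metrics side by side in a short NumPy sketch (function names are ours):

    import numpy as np

    def d_e(p, q):   # Euclidean
        return np.hypot(p[0] - q[0], p[1] - q[1])

    def d_4(p, q):   # city-block
        return abs(p[0] - q[0]) + abs(p[1] - q[1])

    def d_8(p, q):   # chessboard
        return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

    p, q = (0, 0), (3, 4)
    print(d_e(p, q), d_4(p, q), d_8(p, q))   # 5.0 7 4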


Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis. Consider the 2 × 2 images

    A = [ a11  a12 ]      B = [ b11  b12 ]
        [ a21  a22 ]          [ b21  b22 ]

Their array product is

    [ a11·b11   a12·b12 ]
    [ a21·b21   a22·b22 ]

while their matrix product is

    [ a11·b11 + a12·b21    a11·b12 + a12·b22 ]
    [ a21·b11 + a22·b21    a21·b12 + a22·b22 ]

We assume array operations throughout, unless stated otherwise.
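In NumPy (our example) the two products use different operators:

    import numpy as np

    A = np.array([[1, 2], [3, 4]])
    B = np.array([[5, 6], [7, 8]])

    print(A * B)   # array (pixel-by-pixel) product: [[ 5 12] [21 32]]
    print(A @ B)   # matrix product:                 [[19 22] [43 50]]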


Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether a method is linear or nonlinear. Let H be an operator with

    H[f(x, y)] = g(x, y)

H is said to be a linear operator if

    H[a·f1(x, y) + b·f2(x, y)] = a·H[f1(x, y)] + b·H[f2(x, y)]

for any scalars a, b and any images f1, f2.

Example of a nonlinear operator: the maximum value of the pixels of an image, H[f] = max f(x, y). Take

    f1 = [ 0  2 ]     f2 = [ 6  5 ]     a = 1,  b = -1
         [ 2  3 ]          [ 4  7 ]


Then

    max(a·f1 + b·f2) = max [ -6  -3 ] = -2
                           [ -2  -4 ]

but

    a·max(f1) + b·max(f2) = 1·3 + (-1)·7 = -4 ≠ -2

so the max operator is nonlinear.
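The same check in a few NumPy lines (our sketch):

    import numpy as np

    f1 = np.array([[0, 2], [2, 3]])
    f2 = np.array([[6, 5], [4, 7]])
    a, b = 1, -1

    print((a * f1 + b * f2).max())       # -2
    print(a * f1.max() + b * f2.max())   # -4, so max is not linear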

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise to a noiseless image f(x, y):

    g(x, y) = f(x, y) + η(x, y)

where the noise η(x, y) is assumed, at every pair of coordinates (x, y), to be uncorrelated and to have zero average value.

For a random variable z with mean m, E[(z - m)²] is the variance (E(·) is the expected value). The covariance of two random variables z1 and z2 is defined as E[(z1 - m1)(z2 - m2)]. The two random variables are uncorrelated when their covariance is 0.

Digital Image Processing

Week 1

Objective reduce noise by adding a set of noisy images ( )ig x y (technique frequently

used in image enhancement)

1

1( ) ( )K

ii

g x y g x yK

If the noise satisfies the properties stated above we have

2 2( ) ( )

1( ) ( ) g x y x yE g x y f x yK

( ( ))E g x y is the expected value of g and and 2 2( ) ( )g x y x y are the variances of

and g respectively The standard deviation (square root of the variance) at any point in

the average image is

( ) ( )1

g x y x yK


As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because E{ḡ(x, y)} = f(x, y), this means that ḡ(x, y) approaches f(x, y) as the number of noisy images used in the averaging process increases.

An important application of image averaging is in the field of astronomy, where imaging under very low light levels frequently causes sensor noise to render single images virtually useless for analysis. Figure 3(a) shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels. Figures 3(b)-(f) show the result of averaging 5, 10, 20, 50, and 100 such images, respectively.
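A minimal NumPy simulation of this experiment (our sketch; the random test image merely stands in for the clean frame):

    import numpy as np

    rng = np.random.default_rng(0)
    f = rng.integers(0, 256, size=(256, 256)).astype(float)   # stand-in clean image

    def noisy(f, sigma=64.0):
        # One observation g_i = f + zero-mean Gaussian noise
        return f + rng.normal(0.0, sigma, size=f.shape)

    for K in (1, 5, 10, 20, 50, 100):
        g_bar = np.mean([noisy(f) for _ in range(K)], axis=0)
        rmse = np.sqrt(np.mean((g_bar - f) ** 2))
        print(K, round(rmse, 1))   # error shrinks roughly like 64/sqrt(K)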


Fig. 3 Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (a, upper left); results of averaging 5, 10, 20, 50, and 100 noisy images (b)-(f).


A frequent application of image subtraction is in the enhancement of differences between images.

Fig. 4 (a) Infrared image of the Washington, DC area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images.

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between images (a) and (b). Black (0) values in Figure 4(c) indicate locations where there is no difference between images (a) and (b).
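The same experiment in a few NumPy lines (our sketch):

    import numpy as np

    rng = np.random.default_rng(1)
    a = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)
    b = a & 0xFE                 # set the least-significant bit of every pixel to 0
    diff = a.astype(int) - b     # 0 where the LSB was already 0, 1 elsewhere
    print(diff)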


Mask mode radiography:

    g(x, y) = f(x, y) - h(x, y)

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images, called live images (denoted f(x, y)), of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium.

In g(x, y) we can find the differences between h and f as enhanced detail. The images being captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries in the area being observed.


Fig. 5 – Angiography subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c), enhanced.


An important application of image multiplication (and division) is shading correction. Suppose that an imaging sensor produces images in the form

    g(x, y) = f(x, y) · h(x, y)

where f(x, y) is the "perfect image" and h(x, y) is the shading function. When the shading function is known,

    f(x, y) = g(x, y) / h(x, y)

If h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.
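A toy NumPy illustration of removing shading by division (the linear shading function and all names are our assumptions):

    import numpy as np

    h, w = 128, 128
    shade = np.tile(np.linspace(0.4, 1.0, w), (h, 1))   # synthetic h(x, y)
    f = np.full((h, w), 200.0)                          # the "perfect" image
    g = f * shade                                       # what the sensor delivers

    # Imaging a constant-intensity target would give c*h(x, y), i.e. the
    # shading pattern up to a scale factor; dividing by it flattens g.
    h_est = g / g.mean()
    f_corrected = g / h_est
    print(float(f_corrected.std()))   # ~0 -> shading removed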


Fig. 6 Shading correction: (a) shaded image of a tungsten filament, magnified 130 times; (b) shading pattern; (c) corrected image.


Another use of image multiplication is in masking, also called region of interest (ROI), operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of the ROI can be arbitrary, although usually it is rectangular.

Fig. 7 (a) Digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b).
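In NumPy a rectangular ROI mask takes a couple of lines (our example):

    import numpy as np

    img = np.random.default_rng(2).integers(0, 256, size=(100, 100))
    mask = np.zeros_like(img)
    mask[20:60, 30:80] = 1    # 1's inside the rectangular ROI, 0's elsewhere
    roi = img * mask          # pixels outside the ROI become 0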


In practice, most images are displayed using 8 bits, so image values are expected to be in the range [0, 255]; for TIFF and JPEG images, conversion to this range is automatic, and the conversion depends on the system used. However, the difference of two images can produce an image with values in the range [-255, 255], and the addition of two images produces the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure is to compute

    fm = f - min(f)

which creates an image whose minimum value is 0, and then

    fs = K · fm / max(fm)

whose values are in the full range [0, K] (K = 255 for 8-bit images).
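A small NumPy helper implementing this full-range scaling (names are ours):

    import numpy as np

    def scale_full_range(f, K=255):
        # Map an arbitrary-valued image linearly into [0, K].
        fm = f - f.min()
        return (K * fm / fm.max()).astype(np.uint8)

    d = np.array([[-255.0, 0.0], [128.0, 255.0]])   # e.g. a difference image
    print(scale_full_range(d))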


Spatial Operations

Spatial operations are performed directly on the pixels of a given image. There are three categories of spatial operations:
- single-pixel operations;
- neighborhood operations;
- geometric spatial transformations.

Single-pixel operations change the values of intensity for the individual pixels:

    s = T(z)

where z is the intensity of a pixel in the original image and s is the intensity of the corresponding pixel in the processed image.


Neighborhood operations

Let Sxy denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y) based on the values of the intensities of the points in Sxy. For example, if Sxy is a rectangular neighborhood of size m × n centered at (x, y), we can assign the new value of intensity by computing the average value of the pixels in Sxy:

    g(x, y) = (1/(m·n)) · Σ_{(r,c) ∈ Sxy} f(r, c)

The net effect is to perform local blurring in the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
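A direct (unoptimized) sketch of m × n neighborhood averaging; borders are simply left unprocessed for brevity (ours):

    import numpy as np

    def local_mean(f, m=3, n=3):
        g = f.astype(float).copy()
        rm, rn = m // 2, n // 2
        for x in range(rm, f.shape[0] - rm):
            for y in range(rn, f.shape[1] - rn):
                # Average over the m x n window centered at (x, y)
                g[x, y] = f[x - rm:x + rm + 1, y - rn:y + rn + 1].mean()
        return g

    img = np.random.default_rng(3).integers(0, 256, size=(32, 32))
    blurred = local_mean(img, 3, 3)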


Geometric spatial transformations and image registration

Geometric transformations modify the spatial relationship between pixels in an image. These transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules).

A geometric transformation consists of 2 basic operations:
1. a spatial transformation of coordinates;
2. intensity interpolation, which assigns intensity values to the spatially transformed pixels.

The coordinate system transformation:

    (x, y) = T[(v, w)]

where (v, w) are pixel coordinates in the original image and (x, y) are pixel coordinates in the transformed image.


For example, T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform:

    [x  y  1] = [v  w  1] · T = [v  w  1] · [ t11  t12  0 ]
                                            [ t21  t22  0 ]
                                            [ t31  t32  1 ]

that is, x = t11·v + t21·w + t31 and y = t12·v + t22·w + t32.        (AT)

This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3 × 3 matrix equal to the matrix product of the scaling, rotation, and translation matrices from Table 1.


Table 1 – Affine transformation matrices (identity, scaling, rotation, translation, shear).


The preceding transformations relocate pixels on an image to new locations. To complete the process, we have to assign intensity values to those locations. This task is done using intensity interpolation (nearest neighbor, bilinear, or bicubic interpolation).

In practice, we can use equation (AT) in two basic ways.

Forward mapping: scan the pixels of the input image (v, w) and compute the new spatial location (x, y) of the corresponding pixel in the new image using (AT) directly. Problems:
- intensity assignment when 2 or more pixels in the original image are transformed to the same location in the output image;
- some output locations have no correspondent in the original image (no intensity assignment).


Inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image,

    (v, w) = T⁻¹[(x, y)]

then interpolate among the nearest input pixels to determine the intensity of the output pixel value. Inverse mappings are more efficient to implement than forward mappings, and are used in numerous commercial implementations of spatial transformations (MATLAB, for example).
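A compact sketch of an affine warp by inverse mapping with nearest neighbor interpolation (our example; the particular matrix T, a rotation followed by a translation, is arbitrary):

    import numpy as np

    def warp_affine(img, T):
        # For every output pixel (x, y), find (v, w) = [x y 1] T^{-1}
        # and copy the nearest input pixel if it falls inside the image.
        Tinv = np.linalg.inv(T)
        out = np.zeros_like(img)
        h, w = img.shape
        for x in range(h):
            for y in range(w):
                v, wc, _ = np.array([x, y, 1.0]) @ Tinv
                vi, wi = int(round(v)), int(round(wc))
                if 0 <= vi < h and 0 <= wi < w:
                    out[x, y] = img[vi, wi]
        return out

    c, s = np.cos(0.3), np.sin(0.3)
    T = np.array([[c, s, 0], [-s, c, 0], [10, 5, 1]])   # rotate, then translate
    img = np.random.default_rng(4).integers(0, 256, size=(64, 64))
    warped = warp_affine(img, T)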


Image registration: align two or more images of the same scene.

In image registration we have available the input and output images, but the specific transformation that produced the output image from the input is generally unknown. The problem is to estimate the transformation function and then use it to register the two images:
- it may be of interest to align (register) two or more images taken at approximately the same time, but using different imaging systems (an MRI scanner and a PET scanner, for example);
- or to align images of a given location taken by the same instrument at different moments in time (satellite images).

The problem is solved using tie points (also called control points), which are corresponding points whose locations are known precisely in the input and reference images.


How to select tie points:
- interactively (by hand);
- using algorithms that try to detect these points automatically;
- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors; these objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points.

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points both on the input image and the reference image. A simple model based on a bilinear approximation is given by

    x = c1·v + c2·w + c3·v·w + c4
    y = c5·v + c6·w + c7·v·w + c8

where (v, w) and (x, y) are the coordinates of the tie points (we get an 8 × 8 linear system for the coefficients ci).
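Setting up and solving that 8 × 8 system with NumPy (our sketch; the tie-point coordinates are invented):

    import numpy as np

    # 4 tie points: (v, w) in the input image, (x, y) in the reference image
    vw = np.array([[0, 0], [0, 100], [100, 0], [100, 100]], dtype=float)
    xy = np.array([[5, 3], [8, 108], [103, 2], [110, 112]], dtype=float)

    A = np.zeros((8, 8))
    b = np.zeros(8)
    for k, ((v, w), (x, y)) in enumerate(zip(vw, xy)):
        A[2 * k, 0:4] = [v, w, v * w, 1]        # x = c1 v + c2 w + c3 vw + c4
        A[2 * k + 1, 4:8] = [v, w, v * w, 1]    # y = c5 v + c6 w + c7 vw + c8
        b[2 * k], b[2 * k + 1] = x, y

    c = np.linalg.solve(A, b)   # the 8 model coefficients c1..c8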


When 4 tie points are insufficient to obtain satisfactory registration, an approach used frequently is to select a larger number of tie points and, using this new set, subdivide the image into rectangular regions marked by groups of 4 tie points. On each subregion marked by 4 tie points, we apply the transformation model described above. The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometrical distortion.


(a) Reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c).


Probabilistic Methods

Let zi, i = 0, 1, …, L-1, denote the values of all possible intensities in an M × N digital image, and let p(zk) be the probability that the intensity level zk occurs in the given image:

    p(zk) = nk / (M·N)

where nk is the number of times that intensity zk occurs in the image (M·N is the total number of pixels in the image). Clearly,

    Σ_{k=0..L-1} p(zk) = 1

The mean (average) intensity of an image is given by

    m = Σ_{k=0..L-1} zk · p(zk)


The variance of the intensities is

    σ² = Σ_{k=0..L-1} (zk - m)² · p(zk)

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

    μn(z) = Σ_{k=0..L-1} (zk - m)^n · p(zk)

(Note that μ0(z) = 1, μ1(z) = 0, and μ2(z) = σ².)

μ3(z) > 0: the intensities are biased to values higher than the mean;
μ3(z) < 0: the intensities are biased to values lower than the mean;


μ3(z) = 0: the intensities are distributed approximately equally on both sides of the mean.
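Computing these statistics from an image histogram in NumPy (our sketch):

    import numpy as np

    img = np.random.default_rng(5).integers(0, 256, size=(64, 64))
    L = 256
    nk = np.bincount(img.ravel(), minlength=L)
    p = nk / img.size                    # p(z_k)
    z = np.arange(L)

    m = (z * p).sum()                    # mean intensity
    var = ((z - m) ** 2 * p).sum()       # variance, mu_2
    mu3 = ((z - m) ** 3 * p).sum()       # third moment: sign shows the bias
    print(m, np.sqrt(var), mu3)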

Fig. 1 (a) Low contrast; (b) medium contrast; (c) high contrast.

Figure 1(a): standard deviation 14.3 (variance = 204.5);
Figure 1(b): standard deviation 31.6 (variance = 998.6);
Figure 1(c): standard deviation 49.2 (variance = 2420.6).


Intensity Transformations and Spatial Filtering

    g(x, y) = T[f(x, y)]

where f(x, y) is the input image, g(x, y) is the output image, and T is an operator on f defined over a neighborhood of (x, y). The neighborhood of the point (x, y), Sxy, usually is rectangular, centered on (x, y), and much smaller in size than the image.


Spatial filtering: the operator T (the neighborhood together with the operation applied on it) is called a spatial filter (spatial mask, kernel, template, or window).

When the neighborhood Sxy reduces to the single point (x, y), T becomes an intensity (gray-level, or mapping) transformation function

    s = T(r)

where s and r denote, respectively, the intensities of g and f at (x, y).

Figure 2, left: T produces an output image of higher contrast than the original, by darkening the intensity levels below k and brightening the levels above k; this technique is called contrast stretching.

Fig. 2 Intensity transformation functions: left, contrast stretching; right, thresholding function.


Figure 2, right: T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L-1] is obtained using the function

    s = T(r) = L - 1 - r

This is the equivalent of a photographic negative, a technique suited for enhancing white or gray detail embedded in dark regions of an image.
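For an 8-bit image (L = 256) the negative is a one-liner (our example):

    import numpy as np

    img = np.random.default_rng(6).integers(0, 256, size=(64, 64), dtype=np.uint8)
    negative = 255 - img   # s = L - 1 - r, with L = 256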


Original image (left) and its negative (right).


Log Transformations

    s = T(r) = c · log(1 + r),   c a constant, r ≥ 0

Plots of some basic intensity transformation functions.


This transformation maps a narrow range of low intensity values in the input into a wider range. An operator of this type is used to expand the values of dark pixels in an image, while compressing the higher-level values. The opposite is true of the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a): intensity values in the range 0 to 1.5 × 10^6.
Figure 4(b): the log transformation of Figure 4(a) with c = 1; range 0 to 6.2.

Fig. 4 (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.
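A NumPy version, using the base-10 log implied by the 0-6.2 range above, that also rescales the result to [0, 255] for display (our sketch):

    import numpy as np

    spectrum = np.random.default_rng(7).random((64, 64)) * 1.5e6   # stand-in spectrum
    s = np.log10(1 + spectrum)                       # s = c*log(1 + r), c = 1
    display = (255 * s / s.max()).astype(np.uint8)   # rescale to [0, 255]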


Power-Law (Gamma) Transformations

    s = T(r) = c · r^γ,   c and γ positive constants

(sometimes written s = c · (r + ε)^γ, to account for an offset when the input is zero).

Plots of the gamma transformation for different values of γ (c = 1).


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. Curves with γ > 1 have the opposite effect of those generated with γ < 1. For c = γ = 1 we get the identity transformation.

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
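Gamma correction on a normalized image (our sketch; γ = 2.2 is just a common display value, not taken from the slides):

    import numpy as np

    img = np.random.default_rng(8).integers(0, 256, size=(64, 64))
    r = img / 255.0                   # normalize intensities to [0, 1]
    gamma = 2.2                       # assumed device gamma
    s = r ** (1.0 / gamma)            # correction applies the inverse exponent
    corrected = (255 * s).astype(np.uint8)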


(a) Aerial image; (b)-(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively.

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 26: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in


Imaging in the Visible and Infrared Bands

Applications: light microscopy, astronomy, remote sensing, industry, law enforcement.

The LANDSAT satellite obtains and transmits images of the Earth from space, for the purpose of monitoring environmental conditions on the planet.

Weather observation and prediction are also major applications of multispectral imaging from satellites.

Table 1 – Thematic bands in NASA's LANDSAT satellite

Satellite images of the Washington, DC area in the spectral bands of Table 1

Examples of light microscopy:

• Taxol (anticancer agent), magnified 250X
• Cholesterol (40X)
• Microprocessor (60X)
• Nickel oxide thin film (600X)
• Surface of audio CD (1750X)
• Organic superconductor (450X)

Automated visual inspection of manufactured goods

a – circuit board controller; b – packaged pills; c – bottles; d – air bubbles in clear-plastic product; e – cereal; f – image of intraocular implant

Imaging in the Microwave Band

The dominant application of imaging in the microwave band is radar.

• radar can collect data over virtually any region at any time, regardless of weather or ambient light conditions

• some radar waves can penetrate clouds and, under certain conditions, vegetation, ice, and dry sand

• sometimes radar is the only way to explore inaccessible regions of the Earth's surface

An imaging radar works like a flash camera: it provides its own illumination (microwave pulses) to light an area on the ground and takes a snapshot image. Instead of a camera lens, a radar uses an antenna and a digital device to record the images. In a radar image one can see only the microwave energy that was reflected back toward the radar antenna.

Spaceborne radar image of mountains in southeast Tibet


Imaging in the Radio Band

medicine, astronomy

MRI = Magnetic Resonance Imaging

This technique places the patient in a powerful magnet and passes short pulses of radio waves through his or her body

Each pulse causes a responding pulse of radio waves to be emitted by the patient's tissues.

The location from which these signals originate, and their strength, are determined by a computer, which produces a 2D picture of a section of the patient.

MRI images of a human knee (left) and spine (right)


Images of the Crab Pulsar covering the electromagnetic spectrum

Gamma, X-ray, Optical, Infrared, Radio

Other Imaging Modalities

acoustic imaging, electron microscopy, synthetic (computer-generated) imaging

Imaging using sound: geological exploration, industry, medicine

Mineral and oil exploration

For image acquisition over land, one of the main approaches is to use a large truck and a large flat steel plate. The plate is pressed on the ground by the truck, and the truck is vibrated through a frequency spectrum up to 100 Hz. The strength and speed of the returning sound waves are determined by the composition of the Earth below the surface; these are analyzed by computer, and images are generated from the resulting analysis.

Biometry - iris


Biometry - fingerprint


Face detection and recognition


Gender identification


Image morphing


Fundamental Steps in DIP

methods whose input and output are images

methods whose inputs are images but whose outputs are attributes extracted from those images


Outputs are images

• image acquisition
• image filtering and enhancement
• image restoration
• color image processing
• wavelets and multiresolution processing
• compression
• morphological processing

Outputs are attributes

• morphological processing
• segmentation
• representation and description
• object recognition

Image acquisition - may involve preprocessing such as scaling

Image enhancement

• manipulating an image so that the result is more suitable than the original for a specific operation
• enhancement is problem oriented
• there is no general 'theory' of image enhancement
• enhancement uses subjective methods for image improvement
• enhancement is based on human subjective preferences regarding what is a "good" enhancement result

Image restoration

• improving the appearance of an image
• restoration is objective: the techniques for restoration are based on mathematical or probabilistic models of image degradation

Color image processing

• fundamental concepts in color models
• basic color processing in a digital domain

Wavelets and multiresolution processing

representing images in various degrees of resolution

Compression

reducing the storage required to save an image, or the bandwidth required to transmit it

Morphological processing

• tools for extracting image components that are useful in the representation and description of shape
• a transition from processes that output images to processes that output image attributes

Segmentation

• partitioning an image into its constituent parts or objects
• autonomous segmentation is one of the most difficult tasks of DIP
• the more accurate the segmentation, the more likely recognition is to succeed

Representation and description (almost always follows segmentation)

• segmentation produces either the boundary of a region or all the points in the region itself
• converting the data produced by segmentation to a form suitable for computer processing

• boundary representation: the focus is on external shape characteristics, such as corners or inflections
• complete region: the focus is on internal properties, such as texture or skeletal shape
• description is also called feature extraction: extracting attributes that result in some quantitative information of interest, or that are basic for differentiating one class of objects from another

Object recognition

the process of assigning a label (e.g. "vehicle") to an object based on its descriptors

Knowledge database


Simplified diagram of a cross section of the human eye

Three membranes enclose the eye: the cornea and sclera (outer cover), the choroid, and the retina.

The cornea is a tough, transparent tissue that covers the anterior surface of the eye.

Continuous with the cornea, the sclera is an opaque membrane that encloses the remainder of the optic globe.

The choroid lies directly below the sclera. This membrane contains a network of blood vessels (the major source of nutrition of the eye). The choroid is pigmented and helps reduce the amount of light entering the eye. At its anterior extreme, the choroid is divided into the ciliary body and the iris. The iris contracts and expands to control the amount of light.

The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to the ciliary body (60-70% water, 6% fat, protein). The lens is slightly yellow and absorbs approximately 8% of the visible light (infrared and ultraviolet light are absorbed by proteins in the lens).

The innermost membrane is the retina. When the eye is properly focused, light from an object outside the eye is imaged on the retina. Vision is possible because of the distribution of discrete light receptors on the surface of the retina: cones and rods (6-7 million cones, 75-150 million rods).

Cones: located in the central part of the retina (the fovea); sensitive to color; vision of detail; each cone is linked to its own nerve; cone vision = photopic or bright-light vision.

Fovea = the place where the image of the object of interest falls.

Rods: distributed over all the retina surface; several rods are connected to a single nerve; not specialized in detail vision; serve to give a general, overall picture of the field of view; not involved in color vision; sensitive to low levels of illumination.

Blind spot: region without receptors.

Image formation in the eye

Ordinary photographic camera: the lens has a fixed focal length; focusing at various distances is done by modifying the distance between the lens and the image plane (where the film or imaging chip is located).

Human eye: the distance between the lens and the retina (the imaging region) is fixed; the focal length needed to achieve proper focus is obtained by varying the shape of the lens (the fibers in the ciliary body accomplish this, flattening or thickening the lens for distant or near objects, respectively).

distance between lens and retina along the visual axis = 17 mm

range of focal lengths = 14 mm to 17 mm

Illustration of the Mach band effect. Perceived intensity is not a simple function of actual intensity.

All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter


Optical illusions


(Color-word slide: ALBASTRU = BLUE, VERDE = GREEN, GALBEN = YELLOW, ROSU = RED, PORTOCALIU = ORANGE, GALBEN = YELLOW, VISINIU = BURGUNDY, ALB = WHITE.)

Light

• achromatic or monochromatic light: light that is void of color; the attribute of such light is its intensity, or amount. The term gray level is used to describe monochromatic intensity, because it ranges from black, to grays, and finally to white.

• chromatic light spans the electromagnetic energy spectrum from approximately 0.43 to 0.79 μm.

Quantities that describe the quality of a chromatic light source:

• radiance: the total amount of energy that flows from the light source; it is usually measured in watts (W)

• luminance: measured in lumens (lm); gives a measure of the amount of energy an observer perceives from a light source

For example, light emitted from a source operating in the far infrared region of the spectrum could have significant energy (radiance), but an observer would hardly perceive it; its luminance would be almost zero.

• brightness: a subjective descriptor of light perception that is practically impossible to measure. It embodies the achromatic notion of intensity and is one of the key factors in describing color sensation.

The physical meaning of f is determined by the source of the image: f : D → R, f = f(x, y).

For an image generated from a physical process, f(x, y) is proportional to the energy radiated by the physical source.

f(x, y) is characterized by two components:

i(x, y) = the illumination component: the amount of source illumination incident on the scene being viewed

r(x, y) = the reflectance component: the amount of illumination reflected by the objects in the scene

f(x, y) = i(x, y) · r(x, y), with 0 < i(x, y) < ∞ and 0 < r(x, y) < 1

r(x, y) = 0 – total absorption; r(x, y) = 1 – total reflectance

i(x, y) – determined by the illumination source

r(x, y) – determined by the characteristics of the imaged objects

In practice: Lmin ≤ l = f(x0, y0) ≤ Lmax, where Lmin = imin · rmin and Lmax = imax · rmax

(typical indoor values, without additional illumination: Lmin ≈ 10, Lmax ≈ 1000)

The interval [Lmin, Lmax] is called the gray (or intensity) scale. In practice it is shifted to the interval [0, L-1]: l = 0 is black, l = L-1 is white, and intermediate values are shades of gray.

Image Sampling and Quantization

- the output of most sensors is a continuous voltage waveform related to the sensed scene

Converting a continuous image f to digital form:

- digitizing the coordinate values (x, y) is called sampling
- digitizing the amplitude values f(x, y) is called quantization

Continuous image projected onto a sensor array; result of image sampling and quantization.

Representing Digital Images

(x, y), x = 0, 1, …, M-1, y = 0, 1, …, N-1 – spatial variables or spatial coordinates. The digital image can be written as an M × N matrix:

f(x, y) = [ f(0,0)      f(0,1)      …   f(0,N-1)
            f(1,0)      f(1,1)      …   f(1,N-1)
            …
            f(M-1,0)    f(M-1,1)    …   f(M-1,N-1) ]

or, in traditional matrix notation, A = [aij], where aij = f(x=i, y=j) = f(i, j). Each element of the matrix is called an image element, or pixel.

f(0,0) – the upper left corner of the image

M and N are positive integers; the number of intensity levels is L = 2^k, and aij ∈ [0, L-1].

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system. The upper limit is determined by saturation, the lower limit by noise.

Number of bits required to store a digitized image:

b = M × N × k;  for M = N,  b = N² × k

When an image can have 2^k intensity levels, the image is referred to as a k-bit image; an image with 256 discrete intensity values is an 8-bit image.
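As a quick check of the storage formula (a minimal sketch in Python; the function name is illustrative):

def storage_bits(M: int, N: int, k: int) -> int:
    # b = M * N * k bits for an M x N image quantized to 2**k levels
    return M * N * k

# A 1024 x 1024, 8-bit image needs 8,388,608 bits, i.e. 1,048,576 bytes (1 MB).
print(storage_bits(1024, 1024, 8) // 8)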


Spatial and Intensity Resolution

Spatial resolution – the smallest discernible detail in an image.

Measures: line pairs per unit distance; dots (pixels) per unit distance.

Image resolution = the largest number of discernible line pairs per unit distance
(e.g. 100 line pairs per mm).

Dots per unit distance is the measure commonly used in printing and publishing.
In the US this measure is expressed in dots per inch (dpi).
(Newspapers are printed with 75 dpi, glossy brochures at 175 dpi.)

Intensity resolution – the smallest discernible change in intensity level.

The number of intensity levels (L) is determined by hardware considerations:
L = 2^k; most commonly k = 8.

Intensity resolution in practice is given by k (the number of bits used to quantize intensity).

Fig. 1 – Reducing spatial resolution: 1250 dpi (upper left), 300 dpi (upper right), 150 dpi (lower left), 72 dpi (lower right).

Reducing the number of gray levels: 256, 128, 64, 32.

Reducing the number of gray levels: 16, 8, 4, 2.

Image Interpolation – used in zooming, shrinking, rotating, and geometric corrections.

Shrinking and zooming (image resizing) are image resampling methods.

Interpolation is the process of using known data to estimate values at unknown locations.

Suppose we have an image of size 500 × 500 pixels that has to be enlarged 1.5 times, to 750 × 750 pixels. One way to do this is to create an imaginary 750 × 750 grid with the same spacing as the original and then shrink it, so that it fits exactly over the original image. The pixel spacing in the 750 × 750 grid will be less than in the original image.

Problem: assignment of intensity levels in the new 750 × 750 grid.

Nearest neighbor interpolation: assign to every point in the new grid (750 × 750) the intensity of the closest pixel (nearest neighbor) from the old/original grid (500 × 500).

This technique has the tendency to produce undesirable effects, like severe distortion of straight edges.
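A minimal numpy sketch of nearest-neighbor resizing (illustrative only; a real implementation would typically call a library routine):

import numpy as np

def resize_nearest(img, new_h, new_w):
    # each output pixel copies the closest pixel of the input grid
    h, w = img.shape[:2]
    rows = (np.arange(new_h) * h / new_h).astype(int)
    cols = (np.arange(new_w) * w / new_w).astype(int)
    return img[rows[:, None], cols]

small = np.arange(25, dtype=np.uint8).reshape(5, 5)
print(resize_nearest(small, 8, 8))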


Bilinear interpolation – assign to the new (x, y) location the intensity

v(x, y) = a·x + b·y + c·x·y + d

where the four coefficients are determined from the 4 equations in 4 unknowns that can be written using the 4 nearest neighbors of point (x, y).

Bilinear interpolation gives much better results than nearest neighbor interpolation, with a modest increase in computational effort.
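A sketch of bilinear interpolation at a single fractional location, written directly as a weighted average of the 4 nearest neighbors (equivalent to solving the 4×4 system for a, b, c, d above; the function name is illustrative):

import numpy as np

def bilinear_at(img, x, y):
    # interpolate img at fractional coordinates (x = row, y = column)
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, img.shape[0] - 1)
    y1 = min(y0 + 1, img.shape[1] - 1)
    dx, dy = x - x0, y - y0
    top = (1 - dy) * img[x0, y0] + dy * img[x0, y1]
    bottom = (1 - dy) * img[x1, y0] + dy * img[x1, y1]
    return (1 - dx) * top + dx * bottom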

Bicubic interpolation – assign to the new (x, y) location an intensity that involves the 16 nearest neighbors of the point:

v(x, y) = Σ_{i=0..3} Σ_{j=0..3} cij · x^i · y^j

The coefficients cij are obtained by solving a 16×16 linear system written using the intensity levels of the 16 nearest neighbors of (x, y).

Generally, bicubic interpolation does a better job of preserving fine detail than the bilinear technique. Bicubic interpolation is the standard used in commercial image editing programs, such as Adobe Photoshop and Corel Photopaint.

Figure 2(a) is the same as Fig. 1(d), which was obtained by reducing the resolution of the 1250 dpi image in Fig. 1(a) to 72 dpi (the size shrank from 3692 × 2812 to 213 × 162) and then zooming the reduced image back to its original size. To generate Fig. 1(d), nearest neighbor interpolation was used (both for shrinking and zooming).

Figures 2(b) and (c) were generated using the same steps, but using bilinear and bicubic interpolation, respectively. Figures 2(d)+(e)+(f) were obtained by reducing the resolution from 1250 dpi to 150 dpi (instead of 72 dpi).

Fig. 2 – Interpolation examples for zooming and shrinking (nearest neighbor, bilinear, bicubic).

Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors:

(x+1, y), (x-1, y), (x, y+1), (x, y-1)

This set of pixels, called the 4-neighbors of p, is denoted by N4(p).

The 4 diagonal neighbors of p have coordinates

(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)

and are denoted ND(p).

The horizontal, vertical, and diagonal neighbors together are called the 8-neighbors of p, denoted N8(p).

If (x, y) is on the border of the image, some of the neighbor locations in ND(p) and N8(p) fall outside the image.
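A small Python sketch (hypothetical helper) that enumerates N4(p), ND(p), and N8(p), discarding the locations that fall outside an M × N image:

def neighbors(x, y, M, N, kind="8"):
    n4 = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    nd = [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]
    pts = {"4": n4, "d": nd, "8": n4 + nd}[kind]
    return [(i, j) for (i, j) in pts if 0 <= i < M and 0 <= j < N]

print(neighbors(0, 0, 5, 5, "8"))  # a corner pixel keeps only 3 of its 8 neighbors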


Adjacency, Connectivity, Regions, Boundaries

Denote by V the set of intensity levels used to define adjacency:

- in a binary image, V ⊆ {0, 1} (V = {0} or V = {1})
- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, …, 255}

We consider 3 types of adjacency:

(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if q ∈ N4(p)

(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if q ∈ N8(p)

(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if

q ∈ N4(p), or

q ∈ ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency. It is introduced to eliminate the ambiguities that often arise when 8-adjacency is used. Consider the example:

Binary image, with V = {1}:

0 1 1
0 1 0
0 0 1

The three pixels at the top (first line) in the above example show multiple (ambiguous) 8-adjacency paths, as indicated by the dashed lines in the original figure. This ambiguity is removed by using m-adjacency.

A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

(x0, y0), (x1, y1), …, (xn, yn), where (x0, y0) = (x, y) and (xn, yn) = (s, t),

such that pixels (xi, yi) and (xi-1, yi-1) are adjacent, for i = 1, 2, …, n.

The length of the path is n. If (x0, y0) = (xn, yn), the path is closed.

Depending on the type of adjacency considered, the paths are 4-, 8-, or m-paths.

Let S denote a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting only of pixels from S.

S is a connected set if there is a path in S between any 2 pixels in S.

Let R be a subset of pixels in an image. R is a region of the image if R is a connected set.

Two regions R1 and R2 are said to be adjacent if R1 ∪ R2 forms a connected set. Regions that are not adjacent are said to be disjoint. When referring to regions, only 4- and 8-adjacency are considered.

Suppose that an image contains K disjoint regions Rk, k = 1, …, K, none of which touches the image border. Let Ru denote the union of all K regions,

Ru = R1 ∪ R2 ∪ … ∪ RK,

and let (Ru)c denote its complement. We call all the points in Ru the foreground of the image, and the points in (Ru)c the background of the image.

The boundary (border, or contour) of a region R is the set of points that are adjacent to points in the complement of R, (R)c. The border of an image region is the set of pixels in the region that have at least one background neighbor. This definition is referred to as the inner border, to distinguish it from the notion of outer border, which is the corresponding border in the background.

Distance measures

For pixels p, q, and z, with coordinates (x, y), (s, t), and (v, w) respectively, D is a distance function, or metric, if

(a) D(p, q) ≥ 0 (D(p, q) = 0 iff p = q)

(b) D(p, q) = D(q, p)

(c) D(p, z) ≤ D(p, q) + D(q, z)

The Euclidean distance between p and q is defined as

De(p, q) = [(x - s)² + (y - t)²]^(1/2)

The pixels q for which De(p, q) ≤ r are the points contained in a disk of radius r centered at (x, y).

The D4 distance (also called city-block distance) between p and q is defined as

D4(p, q) = |x - s| + |y - t|

The pixels q for which D4(p, q) ≤ r form a diamond centered at (x, y). For example, the pixels with D4 ≤ 2:

        2
      2 1 2
    2 1 0 1 2
      2 1 2
        2

The pixels with D4 = 1 are the 4-neighbors of (x, y).

The D8 distance (also called chessboard distance) between p and q is defined as

D8(p, q) = max(|x - s|, |y - t|)

The pixels q for which D8(p, q) ≤ r form a square centered at (x, y). For example, the pixels with D8 ≤ 2:

2 2 2 2 2
2 1 1 1 2
2 1 0 1 2
2 1 1 1 2
2 2 2 2 2

The pixels with D8 = 1 are the 8-neighbors of (x, y).

The D4 and D8 distances are independent of any paths that might exist between p and q, because these distances involve only the coordinates of the points.
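The three metrics side by side, as a minimal Python sketch:

import math

def d_e(p, q):  # Euclidean distance
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d_4(p, q):  # city-block distance
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d_8(p, q):  # chessboard distance
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (3, 4)
print(d_e(p, q), d_4(p, q), d_8(p, q))  # 5.0 7 4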


Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis. Consider two 2 × 2 images:

[ a11  a12 ]      [ b11  b12 ]
[ a21  a22 ]      [ b21  b22 ]

Array product:

[ a11·b11   a12·b12 ]
[ a21·b21   a22·b22 ]

Matrix product:

[ a11·b11 + a12·b21    a11·b12 + a12·b22 ]
[ a21·b11 + a22·b21    a21·b12 + a22·b22 ]

We assume array operations unless stated otherwise.
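In numpy the two products are distinct operators, which makes the distinction concrete:

import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])

print(a * b)  # array (element-wise) product: [[ 5 12] [21 32]]
print(a @ b)  # matrix product: [[19 22] [43 50]]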


Linear versus Nonlinear Operations

One of the most important classifications of an image-processing method is whether it is linear or nonlinear. Consider an operator H with

H[f(x, y)] = g(x, y)

H is said to be a linear operator if

H[a·f1(x, y) + b·f2(x, y)] = a·H[f1(x, y)] + b·H[f2(x, y)]

for any scalars a, b and any images f1, f2.

Example of a nonlinear operator: the maximum value of the pixels of an image, H[f] = max{ f(x, y) }.

Take

f1 = [ 0  2 ]      f2 = [ 6  5 ]      a = 1,  b = -1
     [ 2  3 ]           [ 4  7 ]

Then

max(a·f1 + b·f2) = max( [ -6  -3 ] ) = -2
                        [ -2  -4 ]

while

a·max(f1) + b·max(f2) = 1·3 + (-1)·7 = -4 ≠ -2

so the max operator is nonlinear.
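The same check in numpy (a minimal sketch):

import numpy as np

f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 5], [4, 7]])
a, b = 1, -1

print(np.max(a * f1 + b * f2))          # -2
print(a * np.max(f1) + b * np.max(f2))  # -4, so max is not linear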

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise:

g(x, y) = f(x, y) + η(x, y)

where f(x, y) is the noiseless image and η(x, y) is the noise, uncorrelated and with 0 average value.

For a random variable z with mean m, E[(z - m)²] is the variance (E(·) is the expected value). The covariance of two random variables z1 and z2 is defined as E[(z1 - m1)·(z2 - m2)]. The two random variables are uncorrelated when their covariance is 0.

Objective: reduce the noise by averaging a set of noisy images gi(x, y) (a technique frequently used in image enhancement):

ḡ(x, y) = (1/K) Σ_{i=1..K} gi(x, y)

If the noise satisfies the properties stated above, we have

E{ḡ(x, y)} = f(x, y)   and   σ²ḡ(x,y) = (1/K) · σ²η(x,y)

where E{ḡ(x, y)} is the expected value of ḡ, and σ²ḡ(x,y) and σ²η(x,y) are the variances of ḡ and η, respectively. The standard deviation (square root of the variance) at any point of the average image is

σḡ(x,y) = (1/√K) · ση(x,y)

As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because E{ḡ(x, y)} = f(x, y), this means that ḡ(x, y) approaches f(x, y) as the number of noisy images used in the averaging process increases.

An important application of image averaging is in the field of astronomy, where imaging under very low light levels frequently causes sensor noise to render single images virtually useless for analysis. Figure 3(a) shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels. Figures 3(b)-(f) show the result of averaging 5, 10, 20, 50, and 100 noisy images, respectively.
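A quick simulation of this effect (a sketch; the image and noise values are made up):

import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 100.0)                       # noiseless image
noisy = f + rng.normal(0, 64, size=(100, 64, 64))  # 100 noisy observations

for K in (1, 5, 20, 100):
    g_bar = noisy[:K].mean(axis=0)
    print(K, round(g_bar.std(), 1))  # shrinks roughly as 64 / sqrt(K)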

Digital Image Processing

Week 1

Fig. 3 – Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (panel a, top left), and the results of averaging 5, 10, 20, 50, and 100 noisy images (panels b-f).

A frequent application of image subtraction is in the enhancement of differences between images.

Fig. 4 – (a) Infrared image of the Washington, DC area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images.

Figure 4(b) was obtained by setting to zero the least significant bit of every pixel in Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between images (a) and (b). Black (0) values in Figure 4(c) indicate locations where there is no difference between images (a) and (b).

Mask mode radiography:

g(x, y) = f(x, y) - h(x, y)

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images, called live images (denoted f(x, y)), of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium.

In g(x, y) we find the differences between h and f, as enhanced detail.

With images being captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries in the area being observed.

Fig. 5 – Angiography subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c), enhanced.

An important application of image multiplication (and division) is shading correction.

Suppose that an imaging sensor produces images in the form

g(x, y) = f(x, y) · h(x, y)

where f(x, y) is the "perfect image" and h(x, y) is the shading function.

When the shading function is known:

f(x, y) = g(x, y) / h(x, y)

When h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.
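A minimal sketch of the division-based correction, using a synthetic shading function (all values illustrative):

import numpy as np

h = np.linspace(0.5, 1.0, 64)[None, :] * np.ones((64, 64))  # synthetic shading function
f = np.full((64, 64), 200.0)                                # the "perfect" image
g = f * h                                                   # shaded observation

f_hat = g / h                                               # shading correction
print(np.allclose(f_hat, f))                                # True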

Digital Image Processing

Week 1

Fig. 6 – Shading correction: (a) shaded image of a tungsten filament, magnified approximately 130 times; (b) shading pattern; (c) corrected image.

Another use of image multiplication is in masking, also called region of interest (ROI), operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of the ROI can be arbitrary, although usually it is rectangular.

Fig. 7 – (a) Digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b).

In practice, most images are displayed using 8 bits, so image values are expected in the range [0, 255]. For TIFF and JPEG images, conversion to this range is automatic, and depends on the system used. The difference of two images can produce an image with values in the range [-255, 255], and the addition of two images, values in the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure is to compute

fm = f - min(f)

fs = K · fm / max(fm)

which yields values in the range [0, K] (K = 255 for 8-bit images).
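As a numpy sketch:

import numpy as np

def rescale(f, K=255):
    # shift to a zero minimum, then scale so the maximum becomes K
    fm = f - f.min()
    return (K * fm / fm.max()).astype(np.uint8)

diff = np.array([[-255.0, 0.0], [128.0, 255.0]])
print(rescale(diff))  # values mapped into [0, 255]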

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image.

There are three categories of spatial operations:

• single-pixel operations
• neighborhood operations
• geometric spatial transformations

Single-pixel operations

- change the value of intensity for the individual pixels:

s = T(z)

where z is the intensity of a pixel in the original image and s is the intensity of the corresponding pixel in the processed image.

Neighborhood operations

Let Sxy denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y), based on the values of the intensities of the points in Sxy. For example, if Sxy is a rectangular neighborhood of size m × n centered at (x, y), we can assign the new value of intensity by computing the average value of the pixels in Sxy:

g(x, y) = (1/(m·n)) Σ_{(r,c) ∈ Sxy} f(r, c)

The net effect is to perform local blurring in the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
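A direct (unoptimized) sketch of this neighborhood average, cropping the window at the borders:

import numpy as np

def local_mean(f, m=3, n=3):
    M, N = f.shape
    g = np.empty_like(f, dtype=float)
    for x in range(M):
        for y in range(N):
            r0, r1 = max(0, x - m // 2), min(M, x + m // 2 + 1)
            c0, c1 = max(0, y - n // 2), min(N, y + n // 2 + 1)
            g[x, y] = f[r0:r1, c0:c1].mean()
    return g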

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image
- these transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules)

A geometric transformation consists of 2 basic operations:

1. a spatial transformation of coordinates
2. an intensity interpolation that assigns intensity values to the spatially transformed pixels

The coordinate system transformation:

(x, y) = T[(v, w)]

(v, w) – pixel coordinates in the original image
(x, y) – pixel coordinates in the transformed image

Example: T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform:

[x  y  1] = [v  w  1] · T = [v  w  1] · [ t11  t12  0
                                          t21  t22  0
                                          t31  t32  1 ]        (AT)

that is, x = t11·v + t21·w + t31 and y = t12·v + t22·w + t32.

This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3×3 matrix equal to the matrix product of the scaling, rotation, and translation matrices from Table 1.
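A sketch of composing the three basic matrices in this row-vector convention (the entries follow the (AT) layout above; the rotation sign convention is one common choice):

import numpy as np

def scale(cx, cy):
    return np.array([[cx, 0, 0], [0, cy, 0], [0, 0, 1.0]])

def rotate(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1.0]])

def translate(tx, ty):
    return np.array([[1, 0, 0], [0, 1, 0], [tx, ty, 1.0]])

T = scale(2, 2) @ rotate(np.pi / 2) @ translate(10, 0)  # scale, then rotate, then translate
x, y, _ = np.array([1.0, 0.0, 1.0]) @ T                 # row vector [v w 1] times T
print(round(x, 6), round(y, 6))                         # 10.0 2.0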


Affine transformations


The preceding transformations relocate pixels on an image to new locations. To complete the process, we have to assign intensity values to those locations. This task is done by using intensity interpolation (nearest neighbor, bilinear, or bicubic interpolation).

In practice, we can use equation (AT) in two basic ways.

Forward mapping: scan the pixels of the input image (v, w) and compute the new spatial location (x, y) of the corresponding pixel in the new image, using (AT) directly.

Problems:

- intensity assignment when 2 or more pixels in the original image are transformed to the same location in the output image
- some output locations have no correspondent in the original image (no intensity assignment)

Inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image as (v, w) = T⁻¹[(x, y)]. Then interpolate among the nearest input pixels to determine the intensity of the output pixel value.

Inverse mappings are more efficient to implement than forward mappings, and they are used in numerous commercial implementations of spatial transformations (MATLAB, for example).

Image registration – aligning two or more images of the same scene.

In image registration, we have available the input and output images, but the specific transformation that produced the output image from the input is generally unknown. The problem is to estimate the transformation function and then use it to register the two images.

- it may be of interest to align (register) two or more images taken at approximately the same time, but using different imaging systems (an MRI scanner and a PET scanner, for example)
- or to align images of a given location taken by the same instrument at different moments of time (satellite images)

One approach is to use tie points (also called control points), which are corresponding points whose locations are known precisely in the input and reference images.

How to select tie points:

- interactively (by hand)
- using algorithms that try to detect these points automatically
- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors; these objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points, both in the input image and in the reference image. A simple model based on a bilinear approximation is given by

x = c1·v + c2·w + c3·v·w + c4
y = c5·v + c6·w + c7·v·w + c8

where (v, w) and (x, y) are the coordinates of the tie points (we get an 8×8 linear system for the ci).
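A sketch of estimating the eight coefficients from 4 tie-point pairs with numpy (the 8×8 system splits into two 4×4 systems, one for x and one for y; the coordinates here are made up):

import numpy as np

vw = np.array([[0, 0], [0, 100], [100, 0], [100, 100]], dtype=float)  # input image
xy = np.array([[2, 3], [1, 105], [103, 2], [104, 108]], dtype=float)  # reference image

A = np.column_stack([vw[:, 0], vw[:, 1], vw[:, 0] * vw[:, 1], np.ones(4)])
c_x = np.linalg.solve(A, xy[:, 0])  # c1..c4
c_y = np.linalg.solve(A, xy[:, 1])  # c5..c8
print(c_x, c_y)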


When 4 tie points are insufficient to obtain a satisfactory registration, an approach used frequently is to select a larger number of tie points and, using this new set, subdivide the image into rectangular regions marked by groups of 4 tie points. To each subregion marked by 4 tie points, we apply the transformation model described above.

The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometric distortion.

(a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c)

Probabilistic Methods

Let zk, k = 0, 1, …, L-1, be the values of all possible intensities in an M×N digital image, and let p(zk) denote the probability that the intensity level zk occurs in the given image:

p(zk) = nk / (M·N)

where nk is the number of times that intensity zk occurs in the image (M·N is the total number of pixels in the image). Note that

Σ_{k=0..L-1} p(zk) = 1

The mean (average) intensity of an image is given by

m = Σ_{k=0..L-1} zk · p(zk)

The variance of the intensities is

σ² = Σ_{k=0..L-1} (zk - m)² · p(zk)

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

μn(z) = Σ_{k=0..L-1} (zk - m)^n · p(zk)

(Note that μ0(z) = 1, μ1(z) = 0, and μ2(z) = σ².)

μ3(z) > 0: the intensities are biased to values higher than the mean
μ3(z) < 0: the intensities are biased to values lower than the mean

μ3(z) = 0: the intensities are distributed approximately equally on both sides of the mean

Fig. 1 – (a) Low contrast; (b) medium contrast; (c) high contrast

Figure 1(a) – standard deviation 14.3 (variance = 204.5)
Figure 1(b) – standard deviation 31.6 (variance = 998.6)
Figure 1(c) – standard deviation 49.2 (variance = 2420.6)
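A sketch computing these statistics from the normalized histogram of an 8-bit image:

import numpy as np

def intensity_stats(img, L=256):
    z = np.arange(L)
    p = np.bincount(img.ravel(), minlength=L) / img.size  # p(zk) = nk / (M*N)
    m = np.sum(z * p)                                     # mean
    var = np.sum((z - m) ** 2 * p)                        # variance
    mu3 = np.sum((z - m) ** 3 * p)                        # third central moment
    return m, np.sqrt(var), mu3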


Intensity Transformations and Spatial Filtering

g(x, y) = T[f(x, y)]

f(x, y) – input image; g(x, y) – output image; T – an operator on f, defined over a neighborhood of (x, y).

- the neighborhood of the point (x, y), Sxy, usually is rectangular, centered on (x, y), and much smaller in size than the image.

- spatial filtering: the operator T (the neighborhood and the operation applied on it) is called a spatial filter (spatial mask, kernel, template, or window).

When the neighborhood Sxy is reduced to the single point (x, y), T becomes an intensity (gray-level, or mapping) transformation function:

s = T(r)

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2, left: T produces an output image of higher contrast than the original, by darkening the intensity levels below k and brightening the levels above k; this technique is called contrast stretching.

Fig. 2 – Intensity transformation functions. Left: contrast stretching; right: thresholding function.

Figure 2, right: T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L-1] is obtained using the function

s = T(r) = L - 1 - r

- the equivalent of a photographic negative
- a technique suited for enhancing white or gray detail embedded in dark regions of an image
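In code the negative is a one-liner (a sketch for an 8-bit image, L = 256):

import numpy as np

def negative(img, L=256):
    # s = (L - 1) - r for intensities r in [0, L-1]
    return (L - 1) - img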


Original image and its negative.

Log Transformations:

s = T(r) = c·log(1 + r), with c a constant and r ≥ 0

Some basic intensity transformation functions.

This transformation maps a narrow range of low intensity values in the input into a wider range. An operator of this type is used to expand the values of dark pixels in an image, while compressing the higher-level values. The opposite is true of the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a) – intensity values in the range 0 to 1.5 × 10⁶
Figure 4(b) – log transformation of Figure 4(a) with c = 1 – range 0 to 6.2

Fig. 4 – (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.
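A sketch of the log transform with c chosen so the output spans the full 8-bit range:

import numpy as np

def log_transform(img):
    r = img.astype(np.float64)
    c = 255.0 / np.log(1.0 + r.max())  # scale the compressed range back to [0, 255]
    return (c * np.log(1.0 + r)).astype(np.uint8)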


Power-Law (Gamma) Transformations:

s = T(r) = c·r^γ, with c and γ positive constants (sometimes written s = c·(r + ε)^γ, to account for an offset when the input is zero)

Plots of the gamma transformation for different values of γ (c = 1).

Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. The curves with γ > 1 have the opposite effect of those generated with γ < 1. For c = γ = 1, we obtain the identity transformation.

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
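A sketch of the gamma transformation on intensities normalized to [0, 1]:

import numpy as np

def gamma_transform(img, gamma, c=1.0):
    r = img.astype(np.float64) / 255.0
    s = c * np.power(r, gamma)  # s = c * r**gamma
    return np.clip(255.0 * s, 0, 255).astype(np.uint8)

# gamma < 1 brightens dark regions; gamma > 1 darkens the image, as in the aerial example below.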

Digital Image Processing

Week 1

(a) aerial image; (b)-(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively

Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image, so that it spans the full intensity range of the recording tool or display device.

Fig. 5 (panels a-d)

The transformation has the form

         (s1/r1)·r,                                    r ∈ [0, r1]
T(r) =   s1 + ((s2 - s1)/(r2 - r1))·(r - r1),          r ∈ (r1, r2]
         s2 + ((L-1-s2)/(L-1-r2))·(r - r2),            r ∈ (r2, L-1]

r1 = s1 and r2 = s2: identity transformation (no change).

r1 = r2, s1 = 0, s2 = L-1: thresholding function.

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c) – contrast stretching, obtained by setting the parameters r1 = rmin, s1 = 0, r2 = rmax, s2 = L-1, where rmin and rmax denote the minimum and maximum gray levels in the image, respectively. Thus, the transformation function stretched the levels linearly from their original range to the full range [0, L-1].

Figure 5(d) – the thresholding function was used, with r1 = r2 = m, s1 = 0, s2 = L-1, where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.
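A sketch of the full-range linear stretch used for Figure 5(c):

import numpy as np

def stretch_full_range(img, L=256):
    # map [r_min, r_max] linearly onto [0, L-1]; assumes the image is not constant
    r = img.astype(np.float64)
    r_min, r_max = r.min(), r.max()
    return ((L - 1) * (r - r_min) / (r_max - r_min)).astype(np.uint8)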


Intensity-level slicing

- highlighting a specific range of intensities in an image.

There are two approaches to intensity-level slicing:

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities (Fig. 6, middle)
2. brighten (or darken) the desired range of intensities, but leave unchanged all other intensities in the image (Fig. 6, right)

Figure 6 (left) – aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the scale of intensities: it highlights the intensity range [A, B] and reduces all other intensities to a lower level. This type of enhancement produces a binary image, which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example).

In Figure 6 (right), the second technique was used: a band of intensities in the mid-gray range around the mean intensity was set to black, while the other intensities remain unchanged; the range [A, B] is highlighted and all other intensities are preserved.

Fig. 6 – Aortic angiogram and intensity-sliced versions.

Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2.

This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit; plane 8 – the highest-order bit).

The binary image for the 8th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all the intensities between 0 and 127 to 0, and all levels between 128 and 255 to 1. Bit-plane slicing is useful for analyzing the relative importance of each bit in the image, which helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
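A sketch of extracting a bit plane (plane 1 = least significant bit):

import numpy as np

def bit_plane(img, k):
    # binary image of bit plane k (1..8) of an 8-bit image
    return (img >> (k - 1)) & 1

# Plane 8 equals thresholding at 128: (img >= 128) yields the same binary image.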

  • DIP 1 2017
  • DIP 02 (2017)
Page 27: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Table 1 ndash Thematic bands in NASArsquos LANDSAT satellite

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Satellite images of Washington DC area in spectral bands of the Table 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Examples of light microscopy

Taxol (anticancer agent)magnified 250X

Cholesterol(40X)

Microprocessor(60X)

Nickel oxidethin film(600X)

Surface of audio CD(1750X)

Organicsuperconductor(450X)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Automated visual inspection of manufactured goods

a bc de f

a ndash a circuit board controllerb ndash packaged pillesc ndash bottlesd ndash air bubbles in clear-plastic producte ndash cerealf ndash image of intraocular implant

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Microwave Band

The dominant aplication of imaging in the microwave band ndash radar

bull radar has the ability to collect data over virtually any region at any time regardless of weather or ambient light conditions

bull some radar waves can penetrate clouds under certain conditions canpenetrate vegetation ice dry sand

bull sometimes radar is the only way to explore inaccessible regions of theEarthrsquos surface

An imaging radar works like a flash camera it provides its own illumination(microwave pulses) to light an area on the ground and take a snapshot image Instead of a camera lens a radar uses an antenna and a digital device torecord the images In a radar image one can see only the microwave energythat was reflected back toward the radar antenna

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Spaceborne radar image of mountains in southeast Tibet

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Radio Band

medicine astronomy

MRI = Magnetic Resonance Imaging

This technique places the patient in a powerful magnet and passes short pulses of radio waves through his or her body

Each pulse causes a responding pulse of radio waves to be emited by the patinet tissues

The location from which these signals originate and their strength are determined by a computer which produces a 2D picture of a section of the patient

Digital Image ProcessingDigital Image Processing

Week 1Week 1

MRI images of a human knee (left) and spine (right)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Images of the Crab Pulsar covering the electromagnetic spectrum

Gamma X-ray Optical Infrared Radio

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Other Imaging Modalities

acoustic imaging electron microscopy synthetic (computer-generated) imaging

Imaging using sound geological explorations industry medicine

Mineral and oil exploration

For image acquisition over land one of the main approaches is to use a large truck an a large flat steel plate The plate is pressed on the ground by the truck and the truck is vibratedthrough a frequency spectrum up to 100 Hz The strength and the speed of the returning sound waves are determined by the composition of the Earth below the surface These are analysed by a computer and images are generated from the resulting analysis

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Biometry - iris

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Biometry - fingerprint

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Face detection and recognition

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Gender identification

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image morphing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Fundamental Steps in DIP

methods whose input and output are images

methods whose inputs are images but whose outputs are attributes extracted from those images

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Outputs are images

bull image acquisition

bull image filtering and enhancement

bull image restoration

bull color image processing

bull wavelets and multiresolution processing

bull compression

bull morphological processing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Outputs are attributes

bull morphological processing

bull segmentation

bull representation and description

bull object recognition

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image acquisition - may involve preprocessing such as scaling

Image enhancement

bull manipulating an image so that the result is more suitable than the original for a specific operation

bull enhancement is problem oriented

bull there is no general sbquotheoryrsquo of image enhancement

bull enhancement use subjective methods for image emprovement

bull enhancement is based on human subjective preferences regarding what is a bdquogoodrdquo enhancement result

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image restoration

bull improving the appearance of an image

bull restoration is objective - the techniques for restoration are based on mathematical or probabilistic models of image degradation

Color image processing

bull fundamental concept in color models

bull basic color processing in a digital domain

Wavelets and multiresolution processing

representing images in various degree of resolution

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Compression

reducing the storage required to save an image or the bandwidth required to transmit it

Morphological processing

bull tools for extracting image components that are useful in the representation and description of shape

bull a transition from processes that output images to processes that outputimage attributes

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Segmentation

bull partitioning an image into its constituents parts or objects

bull autonomous segmentation is one of the most difficult tasks of DIP

bull the more accurate the segmentation the more likley recognition is to succeed

Representation and description (almost always follows segmentation)

bull segmentation produces either the boundary of a region or all the poits in the region itself

bull converting the data produced by segmentation to a form suitable for computer processing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

bull boundary representation the focus is on external shape characteristics such as corners or inflections

bull complete region the focus is on internal properties such as texture or skeletal shape

bull description is also called feature extraction ndash extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another

Object recognition

the process of assigning a label (eg bdquovehiclerdquo) to an object based on itsdescriptors

Knowledge database

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Simplified diagramof a cross sectionof the human eye

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Three membranes enclose the eye the cornea and sclera outer cover the choroid the retina

The cornea is a tough transparent tissue that covers the anterior surface of the eye

Continuous with the cornea the sclera is an opaque membrane that enclosesthe remainder of the optic globe

The choroid lies directly below the sclera This membrane contains a network of blood vessels (major nutrition of the eye) The choroid is pigmented and help reduce the amount of light entering the eyeThe choroid is divided (at its anterior extreme) into the ciliary body and the irisThe iris contracts and expands to control the amount of light

Digital Image ProcessingDigital Image Processing

Week 1Week 1

The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to ciliary body (60-70 water 6 fat protein) The lens is colored in slightly yellow The lens absorbs approximatively 8 of the visiblelight (infrared and ultraviolet light are absorbed by proteins in lens)

The innermost membrane is the retina When the eye is proper focusedlight from an object outside the eye is imaged on the retinaVision is possible because of the distribution of discrete light receptors on the surface of the retina cones and rods (6-7 milion cones 75-150 milion rods)

Cones located in the central part of the retina (fovea) they are sensitive to colors vision of detail each cone is link to its own nervecone vision = photopic or bright-light vision

Fovea = the place where the image of the object of interest falls on

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Rods distributed over al the retina surface several rods are contected toa single nerve not specialized in detail vision serve to give a general overall picture of the filed of viewnot involved in color visionsensitive to low level of illumination

Blind spot region without receptors

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image formation in the eye

Ordinary photographic camera: the lens has fixed focal length; focusing at various distances is done by modifying the distance between the lens and the image plane (where the film or imaging chip is located).

Human eye: the distance between the lens and the retina (the imaging region) is fixed; the focal length needed to achieve proper focus is obtained by varying the shape of the lens (the fibers in the ciliary body accomplish this, flattening or thickening the lens for distant or near objects, respectively).

distance between lens and retina along the visual axis = 17 mm

range of focal lengths = 14 mm to 17 mm

Illustration of the Mach band effect. Perceived intensity is not a simple function of actual intensity.

All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter

Optical illusions

Slide of color names (Romanian, with English translations): ALBASTRU (blue), VERDE (green), GALBEN (yellow), ROSU (red), PORTOCALIU (orange), GALBEN (yellow), VISINIU (burgundy), ALB (white)

Light

• achromatic or monochromatic light - light that is void of color; the only attribute of such light is its intensity, or amount. The term gray level is used to describe monochromatic intensity because it ranges from black, to grays, and finally to white.

• chromatic light spans the electromagnetic energy spectrum from approximately 0.43 to 0.79 μm.

Quantities that describe the quality of a chromatic light source:

• radiance: the total amount of energy that flows from the light source; it is usually measured in watts (W)

• luminance: measured in lumens (lm); gives a measure of the amount of energy an observer perceives from a light source

For example, light emitted from a source operating in the far infrared region of the spectrum could have significant energy (radiance), but an observer would hardly perceive it; its luminance would be almost zero.

• brightness: a subjective descriptor of light perception that is practically impossible to measure. It embodies the achromatic notion of intensity and is one of the key factors in describing color sensation.

the physical meaning of f(x, y) is determined by the source of the image

Image generated from a physical process: f(x, y) is proportional to the energy radiated by the physical source, so f(x, y) is nonzero and finite:

$$0 < f(x,y) < \infty$$

f(x, y) – characterized by two components:

i(x, y) = illumination component: the amount of source illumination incident on the scene being viewed

r(x, y) = reflectance component: the amount of illumination reflected by the objects in the scene

$$f(x,y) = i(x,y)\, r(x,y), \qquad 0 < i(x,y) < \infty, \quad 0 < r(x,y) < 1$$

r(x, y) = 0 - total absorption; r(x, y) = 1 - total reflectance

i(x, y) – determined by the illumination source

r(x, y) – determined by the characteristics of the imaged objects

In practice:

$$L_{min} \le l = f(x_0, y_0) \le L_{max}, \qquad L_{min} = i_{min}\, r_{min}, \quad L_{max} = i_{max}\, r_{max}$$

Typical indoor values without additional illumination: L_min ≈ 10, L_max ≈ 1000.

The interval [L_min, L_max] is called the gray (or intensity) scale. In practice, the interval is shifted to [0, L-1]: l = 0 is black, l = L-1 is white, and all intermediate values are shades of gray.

Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed scene

- converting a continuous image f to digital form requires two operations:

- digitizing the coordinate values (x, y) is called sampling

- digitizing the amplitude values f(x, y) is called quantization

Continuous image projected onto a sensor array; result of image sampling and quantization

Representing Digital Images

f(x, y), x = 0, 1, …, M-1, y = 0, 1, …, N-1; x and y are spatial variables or spatial coordinates.

The image can be written as an M × N matrix:

$$f(x,y) = \begin{bmatrix} f(0,0) & f(0,1) & \cdots & f(0,N-1) \\ f(1,0) & f(1,1) & \cdots & f(1,N-1) \\ \vdots & \vdots & & \vdots \\ f(M-1,0) & f(M-1,1) & \cdots & f(M-1,N-1) \end{bmatrix}$$

Each element of the matrix is an image element, or pixel. In matrix notation:

$$A = \begin{bmatrix} a_{0,0} & a_{0,1} & \cdots & a_{0,N-1} \\ a_{1,0} & a_{1,1} & \cdots & a_{1,N-1} \\ \vdots & \vdots & & \vdots \\ a_{M-1,0} & a_{M-1,1} & \cdots & a_{M-1,N-1} \end{bmatrix}, \qquad a_{i,j} = f(x=i,\, y=j) = f(i,j)$$

f(0,0) – the upper left corner of the image

M and N must be positive integers; the number of intensity levels is typically an integer power of 2, L = 2^k, with a_{i,j} ∈ [0, L-1].

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system Upper limit ndash determined by saturation lower limit - noise

Number of bits required to store a digitized image:

b = M × N × k; for M = N, b = N² · k

When an image can have 2^k intensity levels, the image is referred to as a k-bit image; an image with 256 discrete intensity values is an 8-bit image.
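For example, a 1024 × 1024 image with 256 intensity levels (k = 8) requires b = 1024 · 1024 · 8 = 8,388,608 bits, i.e., 1,048,576 bytes (1 MB).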

Spatial and Intensity Resolution

Spatial resolution – the smallest discernible detail in an image

Measures line pairs per unit distance dots (pixels) per unit distance

Image resolution = the largest number of discernible line pairs per unit distance

(eg 100 line pairs per mm)

Dots per unit distance are commonly used in printing and publishing

In the US, the measure is expressed in dots per inch (dpi)

(newspapers are printed at 75 dpi, glossy brochures at 175 dpi)

Intensity resolution ndash the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations

L=2k ndash most common k = 8

Intensity resolution in practice is given by k (number of bits used to quantize intensity)

Fig. 1 – Reducing spatial resolution: 1250 dpi (upper left), 300 dpi (upper right), 150 dpi (lower left), 72 dpi (lower right)

Reducing the number of gray levels: 256, 128, 64, 32

Reducing the number of gray levels: 16, 8, 4, 2

Image Interpolation - used in zooming, shrinking, rotating, and geometric corrections

Shrinking, zooming – image resizing – image resampling methods

Interpolation is the process of using known data to estimate values at unknown locations.

Suppose we have an image of size 500 × 500 pixels that has to be enlarged 1.5 times, to 750 × 750 pixels. One way to do this is to create an imaginary 750 × 750 grid with the same spacing as the original and then shrink it so that it fits exactly over the original image. The pixel spacing in the 750 × 750 grid will be less than in the original image.

Problem: assignment of intensity levels in the new 750 × 750 grid.

Nearest neighbor interpolation: assign to every point in the new grid (750 × 750) the intensity of the closest pixel (nearest neighbor) from the old/original grid (500 × 500). This technique has the tendency to produce undesirable effects, like severe distortion of straight edges.
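As an illustration, a minimal NumPy sketch of nearest-neighbor resampling (an illustrative implementation of the overlay-and-shrink idea above, not the code used to generate the figures):

```python
import numpy as np

def nearest_neighbor_resize(img, new_h, new_w):
    """Resize a 2-D grayscale image using nearest-neighbor interpolation."""
    h, w = img.shape
    # For every pixel of the new grid, pick the closest pixel of the
    # original grid (the imaginary-grid overlay described in the text).
    rows = np.clip(np.round(np.arange(new_h) * h / new_h).astype(int), 0, h - 1)
    cols = np.clip(np.round(np.arange(new_w) * w / new_w).astype(int), 0, w - 1)
    return img[rows[:, None], cols[None, :]]

# Enlarge a 500 x 500 image 1.5 times, to 750 x 750
img = np.random.randint(0, 256, (500, 500), dtype=np.uint8)
zoomed = nearest_neighbor_resize(img, 750, 750)
print(zoomed.shape)  # (750, 750)
```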

Bilinear interpolation – assign to the new (x, y) location the intensity

v(x, y) = a·x + b·y + c·x·y + d

where the four coefficients are determined from the 4 equations in 4 unknowns that can be written using the 4 nearest neighbors of point (x, y).

Bilinear interpolation gives much better results than nearest neighbor interpolation, with a modest increase in computational effort.

Bicubic interpolation – assign to the new (x, y) location an intensity that involves the 16 nearest neighbors of the point:

$$v(x,y) = \sum_{i=0}^{3} \sum_{j=0}^{3} c_{ij}\, x^i y^j$$

The 16 coefficients c_{ij} are obtained by solving a 16 × 16 linear system written using the intensity levels of the 16 nearest neighbors of (x, y).
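A sketch of bilinear interpolation at a single real-valued location; the coefficients a, b, c, d are solved implicitly by the equivalent weighted average of the 4 nearest neighbors:

```python
import numpy as np

def bilinear(img, x, y):
    """Bilinearly interpolated intensity of a grayscale image at (x, y),
    where x indexes rows and y indexes columns, as in f(x, y)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, img.shape[0] - 1), min(y0 + 1, img.shape[1] - 1)
    dx, dy = x - x0, y - y0
    # Weighted average of the 4 nearest neighbors; equivalent to fitting
    # v(x, y) = a*x + b*y + c*x*y + d through the 4 surrounding pixels.
    return ((1 - dx) * (1 - dy) * img[x0, y0] + dx * (1 - dy) * img[x1, y0] +
            (1 - dx) * dy * img[x0, y1] + dx * dy * img[x1, y1])

img = np.array([[10.0, 20.0], [30.0, 40.0]])
print(bilinear(img, 0.5, 0.5))  # 25.0
```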

Generally, bicubic interpolation does a better job of preserving fine detail than the bilinear technique. Bicubic interpolation is the standard used in commercial image editing programs, such as Adobe Photoshop and Corel Photopaint.

Figure 2(a) is the same as Fig. 1(d), which was obtained by reducing the resolution of the 1250 dpi image in Fig. 1(a) to 72 dpi (the size shrank from 3692 × 2812 to 213 × 162) and then zooming the reduced image back to its original size. To generate Fig. 1(d), nearest neighbor interpolation was used (both for shrinking and zooming).

Figures 2(b) and (c) were generated using the same steps, but with bilinear and bicubic interpolation, respectively. Figures 2(d), (e) and (f) were obtained by reducing the resolution from 1250 dpi to 150 dpi (instead of 72 dpi).

Fig. 2 – Interpolation examples for zooming and shrinking (nearest neighbor, bilinear, bicubic)

Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors:

(x+1, y), (x-1, y), (x, y+1), (x, y-1)

This set of pixels, called the 4-neighbors of p, is denoted by N4(p).

The 4 diagonal neighbors of p have coordinates

(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)

and are denoted ND(p).

The horizontal, vertical and diagonal neighbors together are called the 8-neighbors of p, denoted N8(p).

If (x, y) is on the border of the image, some of the neighbor locations in ND(p) and N8(p) fall outside the image.
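These definitions translate directly into code; a small Python sketch (border filtering is left to the caller, as noted above):

```python
def N4(p):
    """4-neighbors of pixel p = (x, y)."""
    x, y = p
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def ND(p):
    """Diagonal neighbors of pixel p."""
    x, y = p
    return [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]

def N8(p):
    """8-neighbors of p: the union of N4(p) and ND(p)."""
    return N4(p) + ND(p)

# Near the image border some coordinates fall outside the image, so a
# caller would keep only those with 0 <= x < M and 0 <= y < N.
print(N8((0, 0)))
```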

Adjacency, Connectivity, Regions, Boundaries

Denote by V the set of intensity levels used to define adjacency:

- in a binary image, V ⊆ {0, 1} (V = {0} or V = {1})

- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, 1, …, 255}

We consider 3 types of adjacency:

(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if q ∈ N4(p)

(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if q ∈ N8(p)

(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if

q ∈ N4(p), or

q ∈ ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency. It is introduced to eliminate the ambiguities that often arise when 8-adjacency is used. Consider the example:

binary image, V = {1}:

0 1 1
0 1 0
0 0 1

(the same 3 × 3 image is shown three times: the original, with 8-adjacency marked by dashed lines, and with m-adjacency)

The three pixels at the top (first line) in the above example show multiple (ambiguous) 8-adjacency, as indicated by the dashed lines. This ambiguity is removed by using m-adjacency.

A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

(x0, y0), (x1, y1), …, (xn, yn)

where (x0, y0) = (x, y), (xn, yn) = (s, t), and the pixels (xi, yi) and (xi-1, yi-1) are adjacent for i = 1, 2, …, n.

The length of the path is n. If (x0, y0) = (xn, yn), the path is closed.

Depending on the type of adjacency considered, the paths are 4-, 8- or m-paths.

Let S denote a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting only of pixels from S.

S is a connected set if there is a path in S between any 2 pixels in S.

Let R be a subset of pixels in an image. R is a region of the image if R is a connected set.

Two regions R1 and R2 are said to be adjacent if R1 ∪ R2 forms a connected set. Regions that are not adjacent are said to be disjoint. When referring to regions, only 4- and 8-adjacency are considered.

Suppose that an image contains K disjoint regions Rk, k = 1, …, K, none of which touches the image border.

Let Ru denote the union of all K regions, Ru = R1 ∪ R2 ∪ … ∪ RK, and let (Ru)^c denote its complement.

We call all the points in Ru the foreground of the image, and the points in (Ru)^c the background of the image.

The boundary (border or contour) of a region R is the set of points that are adjacent to points in the complement of R, (R)^c. In other words, the border of a region is the set of pixels in the region that have at least one background neighbor. This definition is referred to as the inner border, to distinguish it from the notion of outer border, which is the corresponding border in the background.

Distance measures

For pixels p, q and z, with coordinates (x, y), (s, t) and (v, w) respectively, D is a distance function, or metric, if:

(a) D(p, q) ≥ 0, and D(p, q) = 0 iff p = q

(b) D(p, q) = D(q, p)

(c) D(p, z) ≤ D(p, q) + D(q, z)

The Euclidean distance between p and q is defined as

$$D_e(p, q) = \left[(x-s)^2 + (y-t)^2\right]^{1/2}$$

The pixels q for which De(p, q) ≤ r are the points contained in a disk of radius r centered at (x, y).

The D4 distance (also called city-block distance) between p and q is defined as

D4(p, q) = |x - s| + |y - t|

The pixels q for which D4(p, q) ≤ r form a diamond centered at (x, y). For example, the pixels with D4 ≤ 2:

        2
      2 1 2
    2 1 0 1 2
      2 1 2
        2

The pixels with D4 = 1 are the 4-neighbors of (x, y).

The D8 distance (called the chessboard distance) between p and q is defined as

D8(p, q) = max(|x - s|, |y - t|)

The pixels q for which D8(p, q) ≤ r form a square centered at (x, y).

For example, the pixels with D8 ≤ 2:

2 2 2 2 2
2 1 1 1 2
2 1 0 1 2
2 1 1 1 2
2 2 2 2 2

The pixels with D8 = 1 are the 8-neighbors of (x, y).

The D4 and D8 distances are independent of any paths that might exist between p and q, because these distances involve only the coordinates of the points.
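The three metrics in code, as a direct transcription of the definitions:

```python
import math

def D_e(p, q):
    """Euclidean distance between pixels p = (x, y) and q = (s, t)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def D_4(p, q):
    """City-block distance."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def D_8(p, q):
    """Chessboard distance."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (2, 3)
print(D_e(p, q), D_4(p, q), D_8(p, q))  # 3.6055..., 5, 3
```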

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis. Consider the 2 × 2 images

$$\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \quad \text{and} \quad \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix}$$

Array product:

$$\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} = \begin{bmatrix} a_{11}b_{11} & a_{12}b_{12} \\ a_{21}b_{21} & a_{22}b_{22} \end{bmatrix}$$

Matrix product:

$$\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} = \begin{bmatrix} a_{11}b_{11} + a_{12}b_{21} & a_{11}b_{12} + a_{12}b_{22} \\ a_{21}b_{11} + a_{22}b_{21} & a_{21}b_{12} + a_{22}b_{22} \end{bmatrix}$$

We assume array operations throughout, unless stated otherwise.
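In NumPy, for example, the two products are written with different operators, which keeps the distinction explicit:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

print(A * B)  # array (elementwise) product: [[ 5 12] [21 32]]
print(A @ B)  # matrix product:              [[19 22] [43 50]]
```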

Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether a method is linear or nonlinear. Consider an operator H that produces an output image g from an input image f:

H[f(x, y)] = g(x, y)

H is said to be a linear operator if

$$H[a\,f_1(x,y) + b\,f_2(x,y)] = a\,H[f_1(x,y)] + b\,H[f_2(x,y)]$$

for any scalars a, b and any images f1, f2.

Example of a nonlinear operator: the maximum value of the pixels of image f,

H[f] = max{f(x, y)}

Take

$$f_1 = \begin{bmatrix} 0 & 2 \\ 2 & 3 \end{bmatrix}, \quad f_2 = \begin{bmatrix} 6 & 5 \\ 4 & 7 \end{bmatrix}, \quad a = 1, \quad b = -1$$

Then

$$\max\left\{1 \cdot \begin{bmatrix} 0 & 2 \\ 2 & 3 \end{bmatrix} + (-1) \cdot \begin{bmatrix} 6 & 5 \\ 4 & 7 \end{bmatrix}\right\} = \max\left\{\begin{bmatrix} -6 & -3 \\ -2 & -4 \end{bmatrix}\right\} = -2$$

while

$$1 \cdot \max\left\{\begin{bmatrix} 0 & 2 \\ 2 & 3 \end{bmatrix}\right\} + (-1) \cdot \max\left\{\begin{bmatrix} 6 & 5 \\ 4 & 7 \end{bmatrix}\right\} = 3 - 7 = -4$$

so the max operator is not linear.

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise to a noiseless image f(x, y):

g(x, y) = f(x, y) + η(x, y)

where the noise η(x, y) is uncorrelated and has zero average value.

For a random variable z with mean m, E[(z - m)²] is the variance (E(·) is the expected value). The covariance of two random variables z1 and z2 is defined as E[(z1 - m1)(z2 - m2)]. The two random variables are uncorrelated when their covariance is 0.

Objective: reduce the noise by averaging a set of K noisy images gi(x, y) (a technique frequently used in image enhancement):

$$\bar{g}(x,y) = \frac{1}{K} \sum_{i=1}^{K} g_i(x,y)$$

If the noise satisfies the properties stated above, we have

$$E\{\bar{g}(x,y)\} = f(x,y), \qquad \sigma^2_{\bar{g}(x,y)} = \frac{1}{K}\, \sigma^2_{\eta(x,y)}$$

where E{ḡ(x, y)} is the expected value of ḡ, and σ²_{ḡ(x,y)} and σ²_{η(x,y)} are the variances of ḡ and η, respectively. The standard deviation (square root of the variance) at any point in the average image is

$$\sigma_{\bar{g}(x,y)} = \frac{1}{\sqrt{K}}\, \sigma_{\eta(x,y)}$$
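A small simulation sketch of this law; the empirical residual noise can be compared against the predicted σ/√K decay:

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 100.0)   # noiseless image
sigma = 64.0                   # std of the zero-mean Gaussian noise

def average_of_noisy_images(K):
    """Average K independently corrupted copies g_i = f + eta_i."""
    return np.mean([f + rng.normal(0.0, sigma, f.shape) for _ in range(K)],
                   axis=0)

for K in (1, 5, 20, 100):
    g_bar = average_of_noisy_images(K)
    # Empirical residual std vs. the predicted sigma / sqrt(K)
    print(K, round(np.std(g_bar - f), 2), round(sigma / np.sqrt(K), 2))
```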

As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because E{ḡ(x, y)} = f(x, y), this means that ḡ(x, y) approaches f(x, y) as the number of noisy images used in the averaging process increases.

An important application of image averaging is in the field of astronomy, where imaging under very low light levels frequently causes sensor noise to render single images virtually useless for analysis. Figure 3(a) shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels. Figures 3(b)-(f) show the results of averaging 5, 10, 20, 50 and 100 images, respectively.

Fig. 3 – (a) Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise; (b)-(f) results of averaging 5, 10, 20, 50 and 100 noisy images.

A frequent application of image subtraction is in the enhancement of differences between images.

Fig. 4 – (a) Infrared image of the Washington, D.C. area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images.

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between images (a) and (b). Black (0) values in Figure 4(c) indicate locations where there is no difference between images (a) and (b).

Mask mode radiography:

g(x, y) = f(x, y) - h(x, y)

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images, called live images (denoted f(x, y)), of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium.

In g(x, y) the differences between h and f appear as enhanced detail.

Because the images are captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries in the area being observed.

Fig. 5 – Angiography subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c) enhanced.

An important application of image multiplication (and division) is shading correction.

Suppose that an imaging sensor produces images in the form

g(x, y) = f(x, y) · h(x, y)

where f(x, y) is the "perfect image" and h(x, y) is the shading function.

When the shading function is known:

f(x, y) = g(x, y) / h(x, y)

If h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.

Fig. 6 – Shading correction: (a) shaded image of a tungsten filament, magnified 130 times; (b) shading pattern; (c) corrected image.

Another use of image multiplication is in masking, also called region of interest (ROI), operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of the ROI can be arbitrary, although it is usually rectangular.

Fig. 7 – (a) Digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b).

In practice, most images are displayed using 8 bits, so image values are expected to be in the range [0, 255]; for TIFF and JPEG images, conversion to this range is automatic, and the conversion depends on the system used. The difference of two images can produce an image with values in the range [-255, 255], and the addition of two images can produce values in the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure is to compute

f_m = f - min(f)

which creates an image whose minimum value is 0, and then

f_s = K · [f_m / max(f_m)]

which creates a scaled image f_s whose values are in the range [0, K] (K = 255 for 8-bit images).
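A sketch of this rescaling procedure, assuming a signed or floating-point input such as a difference image (and a non-constant f, so that max(f_m) > 0):

```python
import numpy as np

def rescale_to_range(f, K=255):
    """f_m = f - min(f); f_s = K * f_m / max(f_m), mapped into [0, K]."""
    fm = f.astype(float) - f.min()
    fs = K * fm / fm.max()          # assumes f is not constant
    return fs.astype(np.uint8)

diff = np.random.randint(-255, 256, (4, 4))   # values in [-255, 255]
print(rescale_to_range(diff))                  # values in [0, 255]
```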

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the intensity values of individual pixels: s = T(z)

where z is the intensity of a pixel in the original image and s is the intensity of the corresponding pixel in the processed image.

Neighborhood operations

Let S_xy denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y) based on the values of the intensities of the points in S_xy. For example, if S_xy is a rectangular neighborhood of size m × n centered at (x, y), we can assign the new value of intensity by computing the average value of the pixels in S_xy:

$$g(x,y) = \frac{1}{mn} \sum_{(r,c) \in S_{xy}} f(r,c)$$

The net effect is to perform local blurring in the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
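A direct (unoptimized) sketch of this neighborhood average; borders are handled here by edge replication, which is an assumption, since the slides do not specify border handling:

```python
import numpy as np

def local_mean(f, m, n):
    """g(x, y) = average of f over an m x n neighborhood S_xy."""
    padded = np.pad(f.astype(float), ((m // 2,), (n // 2,)), mode='edge')
    g = np.zeros(f.shape)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            g[x, y] = padded[x:x + m, y:y + n].mean()
    return g

img = np.random.randint(0, 256, (8, 8))
print(local_mean(img, 3, 3))   # locally blurred image
```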

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules)

A geometric transformation consists of 2 basic operations:

1. a spatial transformation of coordinates

2. intensity interpolation that assigns intensity values to the spatially transformed pixels

The coordinate system transformation:

(x, y) = T{(v, w)}

(v, w) – pixel coordinates in the original image

(x, y) – pixel coordinates in the transformed image

Example: T{(v, w)} = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform:

$$[x \;\; y \;\; 1] = [v \;\; w \;\; 1]\, \mathbf{T} = [v \;\; w \;\; 1] \begin{bmatrix} t_{11} & t_{12} & 0 \\ t_{21} & t_{22} & 0 \\ t_{31} & t_{32} & 1 \end{bmatrix} \qquad (AT)$$

i.e. x = t11·v + t21·w + t31 and y = t12·v + t22·w + t32.

This transform can scale, rotate, translate or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3 × 3 matrix equal to the matrix product of the scaling, rotation and translation matrices from Table 1.

Table 1 – Affine transformations
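A sketch of composing such matrices under the row-vector convention of equation (AT); the matrices below are the standard scaling, rotation and translation forms for that convention (Table 1 itself is not reproduced here, so treat the exact entries as assumptions):

```python
import numpy as np

def scale(cx, cy):
    return np.array([[cx, 0, 0], [0, cy, 0], [0, 0, 1.0]])

def rotate(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1.0]])

def translate(tx, ty):
    return np.array([[1, 0, 0], [0, 1, 0], [tx, ty, 1.0]])

# Resize, rotate, then move: one combined 3 x 3 matrix
T = scale(2, 2) @ rotate(np.pi / 4) @ translate(10, 5)
print(np.array([3.0, 4.0, 1.0]) @ T)   # [x, y, 1] = [v, w, 1] T
```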

The preceding transformations relocate pixels on an image to new locations. To complete the process, we have to assign intensity values to those locations. This task is done by using intensity interpolation (like nearest neighbor, bilinear or bicubic interpolation).

In practice, we can use equation (AT) in two basic ways:

forward mapping: scan the pixels of the input image (v, w) and compute the new spatial location (x, y) of the corresponding pixel in the new image, using (AT) directly.

Problems:

- intensity assignment when 2 or more pixels in the original image are transformed to the same location in the output image

- some output locations have no correspondent in the original image (no intensity assignment)

inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image as (v, w) = T⁻¹{(x, y)}; then interpolate among the nearest input pixels to determine the intensity of the output pixel value.

Inverse mappings are more efficient to implement than forward mappings, and are used in numerous commercial implementations of spatial transformations (MATLAB, for example).

Image registration – align two or more images of the same scene.

In image registration, we have available the input and output images, but the specific transformation that produced the output image from the input is generally unknown. The problem is to estimate the transformation function and then use it to register the two images:

- it may be of interest to align (register) two or more images taken at approximately the same time, but using different imaging systems (an MRI scanner and a PET scanner, for example)

- align images of a given location taken by the same instrument at different moments of time (satellite images)

One solution uses tie points (also called control points), which are corresponding points whose locations are known precisely in the input and reference images.

How to select tie points:

- interactively selecting them

- use of algorithms that try to detect these points automatically

- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors; these objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points both in the input image and in the reference image. A simple model based on a bilinear approximation is given by

x = c1·v + c2·w + c3·v·w + c4
y = c5·v + c6·w + c7·v·w + c8

where (v, w) and (x, y) are the coordinates of corresponding tie points (the 4 pairs give an 8 × 8 linear system for c1, …, c8).
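The 8 × 8 system decouples into two 4 × 4 systems (one for c1–c4, one for c5–c8); a sketch of the estimation from 4 tie points:

```python
import numpy as np

def fit_bilinear_model(vw, xy):
    """Estimate c1..c8 from 4 tie points.
    vw, xy: (4, 2) arrays of corresponding coordinates."""
    v, w = vw[:, 0], vw[:, 1]
    A = np.column_stack([v, w, v * w, np.ones(4)])  # rows: [v, w, v*w, 1]
    cx = np.linalg.solve(A, xy[:, 0])               # c1, c2, c3, c4
    cy = np.linalg.solve(A, xy[:, 1])               # c5, c6, c7, c8
    return cx, cy

vw = np.array([[0, 0], [0, 10], [10, 0], [10, 10]], dtype=float)
xy = vw + 2.0                      # distortion: a pure translation by (2, 2)
cx, cy = fit_bilinear_model(vw, xy)
print(cx, cy)                      # cx ~ [1, 0, 0, 2], cy ~ [0, 1, 0, 2]
```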

When 4 tie points are insufficient to obtain satisfactory registration, an approach used frequently is to select a larger number of tie points and, using this new set, subdivide the image into rectangular regions marked by groups of 4 tie points. On each subregion marked by 4 tie points, the transformation model described above is applied.

The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometrical distortion.

(a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c)

Probabilistic Methods

Let zi, i = 0, 1, …, L-1, denote the values of all possible intensities in an M × N digital image, and let p(zk) be the probability that intensity level zk occurs in the given image:

$$p(z_k) = \frac{n_k}{MN}$$

where nk is the number of times that intensity zk occurs in the image (MN is the total number of pixels in the image). Then

$$\sum_{k=0}^{L-1} p(z_k) = 1$$

The mean (average) intensity of an image is given by

$$m = \sum_{k=0}^{L-1} z_k\, p(z_k)$$

The variance of the intensities is

$$\sigma^2 = \sum_{k=0}^{L-1} (z_k - m)^2\, p(z_k)$$

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

$$\mu_n(z) = \sum_{k=0}^{L-1} (z_k - m)^n\, p(z_k)$$

(note that μ0(z) = 1, μ1(z) = 0, μ2(z) = σ²)

μ3(z) > 0: the intensities are biased to values higher than the mean

μ3(z) < 0: the intensities are biased to values lower than the mean

μ3(z) = 0: the intensities are distributed approximately equally on both sides of the mean

Fig. 1 – (a) Low contrast; (b) medium contrast; (c) high contrast.

Figure 1(a) – standard deviation 14.3 (variance = 204.5)

Figure 1(b) – standard deviation 31.6 (variance = 998.6)

Figure 1(c) – standard deviation 49.2 (variance = 2420.6)
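These statistics can be computed directly from the normalized histogram; a short sketch:

```python
import numpy as np

def histogram_stats(img, L=256):
    """Mean, variance and third central moment from p(z_k) = n_k / (M*N)."""
    p = np.bincount(img.ravel(), minlength=L) / img.size
    z = np.arange(L)
    m = np.sum(z * p)                       # mean intensity
    mu2 = np.sum((z - m) ** 2 * p)          # variance
    mu3 = np.sum((z - m) ** 3 * p)          # sign shows the intensity bias
    return m, mu2, mu3

img = np.random.randint(0, 256, (100, 100), dtype=np.uint8)
m, var, mu3 = histogram_stats(img)
print(m, np.sqrt(var), mu3)                 # mean, std (contrast), skew
```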

Intensity Transformations and Spatial Filtering

g(x, y) = T[f(x, y)]

f(x, y) – input image; g(x, y) – output image; T – an operator on f, defined over a neighborhood of (x, y).

The neighborhood of the point (x, y), S_xy, usually is rectangular, centered on (x, y), and much smaller in size than the image.

- spatial filtering: the operator T (the neighborhood and the operation applied on it) is called a spatial filter (also spatial mask, kernel, template or window)

When S_xy reduces to the single point (x, y), T becomes an intensity (gray-level or mapping) transformation function,

s = T(r)

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2, left: T produces an output image of higher contrast than the original, by darkening the intensity levels below k and brightening the levels above k – this technique is called contrast stretching.

Fig. 2 – Intensity transformation functions: left, contrast stretching; right, thresholding function.

Figure 2, right: T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L-1] is obtained using the function

s = T(r) = L - 1 - r

- the equivalent of a photographic negative

- a technique suited for enhancing white or gray detail embedded in dark regions of an image

Original image; negative image

Log Transformations

s = T(r) = c · log(1 + r), where c is a constant and r ≥ 0

Some basic intensity transformation functions

This transformation maps a narrow range of low intensity values in the input into a wider range of output values. An operator of this type is used to expand the values of dark pixels in an image, while compressing the higher-level values. The opposite is true of the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a) – intensity values in the range 0 to 1.5 × 10⁶

Figure 4(b) – log transformation of Figure 4(a) with c = 1 – range 0 to 6.2

Fig. 4 – (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.

Power-Law (Gamma) Transformations

s = T(r) = c · r^γ, where c and γ are positive constants

Plots of the gamma transformation for different values of γ (c = 1)

Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. The curves with γ > 1 have the opposite effect of those generated with values of γ < 1.

c = γ = 1: the identity transformation.

A variety of devices used for image capture, printing and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
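The three basic transformations as code; normalizing r to [0, 1] before applying the power is a common practical choice and an assumption here, since the slides state only s = c·r^γ:

```python
import numpy as np

L = 256

def negative(r):
    return (L - 1) - r                     # s = L - 1 - r

def log_transform(r, c=1.0):
    return c * np.log1p(r.astype(float))   # s = c * log(1 + r)

def gamma_transform(r, gamma, c=1.0):
    rn = r.astype(float) / (L - 1)         # normalize to [0, 1]
    s = c * rn ** gamma
    return np.clip(s * (L - 1), 0, L - 1).astype(np.uint8)

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
print(negative(img))
print(gamma_transform(img, 0.4))           # gamma < 1 brightens dark regions
```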

(a) aerial image; (b)-(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0 and 5.0, respectively

Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image, so that it spans the full intensity range of the recording tool or display device

Fig. 5 (a)-(d)

$$s = T(r) = \begin{cases} \dfrac{s_1}{r_1}\, r, & r \in [0,\, r_1] \\[2ex] s_1 + \dfrac{s_2 - s_1}{r_2 - r_1}\,(r - r_1), & r \in [r_1,\, r_2] \\[2ex] s_2 + \dfrac{(L-1) - s_2}{(L-1) - r_2}\,(r - r_2), & r \in [r_2,\, L-1] \end{cases}$$

r1 = s1 and r2 = s2: the identity transformation (no change)

r1 = r2, s1 = 0 and s2 = L-1: a thresholding function

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c): contrast stretching, obtained by setting the parameters (r1, s1) = (r_min, 0) and (r2, s2) = (r_max, L-1), where r_min and r_max denote the minimum and maximum gray levels in the image, respectively. The transformation function stretched the levels linearly from their original range to the full range [0, L-1].

Figure 5(d): the thresholding function was used, with (r1, s1) = (m, 0) and (r2, s2) = (m, L-1), where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.
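A sketch of the piecewise-linear stretch, assuming r1 < r2 (the degenerate thresholding case r1 = r2 would need separate handling):

```python
import numpy as np

def contrast_stretch(img, r1, s1, r2, s2, L=256):
    """Piecewise-linear transformation through (r1, s1) and (r2, s2)."""
    r = img.astype(float)
    s = np.empty_like(r)
    lo, hi = r <= r1, r >= r2
    mid = ~lo & ~hi
    s[lo] = (s1 / r1) * r[lo] if r1 > 0 else s1
    s[mid] = s1 + (s2 - s1) / (r2 - r1) * (r[mid] - r1)
    s[hi] = s2 + (L - 1 - s2) / (L - 1 - r2) * (r[hi] - r2) if r2 < L - 1 else s2
    return s.astype(np.uint8)

img = np.random.randint(40, 200, (4, 4), dtype=np.uint8)
# Full-range stretch: (r1, s1) = (r_min, 0), (r2, s2) = (r_max, L-1)
print(contrast_stretch(img, img.min(), 0, img.max(), 255))
```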

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing:

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities

2. brighten (or darken) the desired range of intensities, but leave unchanged all other intensities in the image

Transformation curves for the two approaches: one highlights intensity range [A, B] and reduces all other intensities to a lower level; the other highlights range [A, B] and preserves all other intensities.

Figure 6 (left) – aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the scale of intensities. This type of enhancement produces a binary image,

which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example).

In Figure 6 (right), the second technique was used: a band of intensities in the mid-gray range around the mean intensity was set to black; the other intensities remain unchanged.

Fig 6 - Aortic angiogram and intensity sliced versions

Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2.

This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit, plane 8 – the highest-order bit).

The binary image for the 8th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all the intensities between 0 and 127 to 0, and all levels between 128 and 255 to 1.

The bit-slicing technique is useful for analyzing the relative importance of each bit in the image – it helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
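A minimal sketch of bit-plane extraction (the slides number the planes 1–8; the shift k below runs 0–7, so plane 8 corresponds to k = 7):

```python
import numpy as np

def bit_plane(img, k):
    """Binary image of bit plane k (k = 0 lowest, k = 7 highest)."""
    return (img >> k) & 1

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
plane8 = bit_plane(img, 7)
# The thresholding view from the text: plane 8 is 1 exactly where the
# intensity lies in [128, 255]
assert np.array_equal(plane8, (img >= 128).astype(img.dtype))
print(plane8)
```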

  • DIP 1 2017
  • DIP 02 (2017)
Page 28: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Satellite images of Washington DC area in spectral bands of the Table 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Examples of light microscopy

Taxol (anticancer agent)magnified 250X

Cholesterol(40X)

Microprocessor(60X)

Nickel oxidethin film(600X)

Surface of audio CD(1750X)

Organicsuperconductor(450X)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Automated visual inspection of manufactured goods

a bc de f

a ndash a circuit board controllerb ndash packaged pillesc ndash bottlesd ndash air bubbles in clear-plastic producte ndash cerealf ndash image of intraocular implant

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Microwave Band

The dominant aplication of imaging in the microwave band ndash radar

bull radar has the ability to collect data over virtually any region at any time regardless of weather or ambient light conditions

bull some radar waves can penetrate clouds under certain conditions canpenetrate vegetation ice dry sand

bull sometimes radar is the only way to explore inaccessible regions of theEarthrsquos surface

An imaging radar works like a flash camera it provides its own illumination(microwave pulses) to light an area on the ground and take a snapshot image Instead of a camera lens a radar uses an antenna and a digital device torecord the images In a radar image one can see only the microwave energythat was reflected back toward the radar antenna

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Spaceborne radar image of mountains in southeast Tibet

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Radio Band

medicine astronomy

MRI = Magnetic Resonance Imaging

This technique places the patient in a powerful magnet and passes short pulses of radio waves through his or her body

Each pulse causes a responding pulse of radio waves to be emited by the patinet tissues

The location from which these signals originate and their strength are determined by a computer which produces a 2D picture of a section of the patient

Digital Image ProcessingDigital Image Processing

Week 1Week 1

MRI images of a human knee (left) and spine (right)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Images of the Crab Pulsar covering the electromagnetic spectrum

Gamma X-ray Optical Infrared Radio

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Other Imaging Modalities

acoustic imaging electron microscopy synthetic (computer-generated) imaging

Imaging using sound geological explorations industry medicine

Mineral and oil exploration

For image acquisition over land one of the main approaches is to use a large truck an a large flat steel plate The plate is pressed on the ground by the truck and the truck is vibratedthrough a frequency spectrum up to 100 Hz The strength and the speed of the returning sound waves are determined by the composition of the Earth below the surface These are analysed by a computer and images are generated from the resulting analysis

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Biometry - iris

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Biometry - fingerprint

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Face detection and recognition

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Gender identification

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image morphing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Fundamental Steps in DIP

methods whose input and output are images

methods whose inputs are images but whose outputs are attributes extracted from those images

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Outputs are images

bull image acquisition

bull image filtering and enhancement

bull image restoration

bull color image processing

bull wavelets and multiresolution processing

bull compression

bull morphological processing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Outputs are attributes

bull morphological processing

bull segmentation

bull representation and description

bull object recognition

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image acquisition - may involve preprocessing such as scaling

Image enhancement

bull manipulating an image so that the result is more suitable than the original for a specific operation

bull enhancement is problem oriented

bull there is no general sbquotheoryrsquo of image enhancement

bull enhancement use subjective methods for image emprovement

bull enhancement is based on human subjective preferences regarding what is a bdquogoodrdquo enhancement result

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image restoration

bull improving the appearance of an image

bull restoration is objective - the techniques for restoration are based on mathematical or probabilistic models of image degradation

Color image processing

bull fundamental concept in color models

bull basic color processing in a digital domain

Wavelets and multiresolution processing

representing images in various degree of resolution

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Compression

reducing the storage required to save an image or the bandwidth required to transmit it

Morphological processing

bull tools for extracting image components that are useful in the representation and description of shape

bull a transition from processes that output images to processes that outputimage attributes

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Segmentation

bull partitioning an image into its constituents parts or objects

bull autonomous segmentation is one of the most difficult tasks of DIP

bull the more accurate the segmentation the more likley recognition is to succeed

Representation and description (almost always follows segmentation)

bull segmentation produces either the boundary of a region or all the poits in the region itself

bull converting the data produced by segmentation to a form suitable for computer processing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

bull boundary representation the focus is on external shape characteristics such as corners or inflections

bull complete region the focus is on internal properties such as texture or skeletal shape

bull description is also called feature extraction ndash extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another

Object recognition

the process of assigning a label (eg bdquovehiclerdquo) to an object based on itsdescriptors

Knowledge database

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Simplified diagramof a cross sectionof the human eye

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Three membranes enclose the eye the cornea and sclera outer cover the choroid the retina

The cornea is a tough transparent tissue that covers the anterior surface of the eye

Continuous with the cornea the sclera is an opaque membrane that enclosesthe remainder of the optic globe

The choroid lies directly below the sclera This membrane contains a network of blood vessels (major nutrition of the eye) The choroid is pigmented and help reduce the amount of light entering the eyeThe choroid is divided (at its anterior extreme) into the ciliary body and the irisThe iris contracts and expands to control the amount of light

Digital Image ProcessingDigital Image Processing

Week 1Week 1

The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to ciliary body (60-70 water 6 fat protein) The lens is colored in slightly yellow The lens absorbs approximatively 8 of the visiblelight (infrared and ultraviolet light are absorbed by proteins in lens)

The innermost membrane is the retina When the eye is proper focusedlight from an object outside the eye is imaged on the retinaVision is possible because of the distribution of discrete light receptors on the surface of the retina cones and rods (6-7 milion cones 75-150 milion rods)

Cones located in the central part of the retina (fovea) they are sensitive to colors vision of detail each cone is link to its own nervecone vision = photopic or bright-light vision

Fovea = the place where the image of the object of interest falls on

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Rods distributed over al the retina surface several rods are contected toa single nerve not specialized in detail vision serve to give a general overall picture of the filed of viewnot involved in color visionsensitive to low level of illumination

Blind spot region without receptors

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image formation in the eye

Ordinary photographic camera the lens has fixed focal length focusing atvarious distances is done by modifying the distance between the lens and the image plane (were the film or imaging chip are located)

Human eye the distance between the lens and the retina (the imaging region)is fixed the focal length needed to achieve proper focus is obtained by varyingthe shape of the lens (the fibers in the ciliary body accomplish this flattening or thickening the lens for distant or near objects respectively

distance between lens and retina along visual axix = 17 mm

range of focal length = 14 mm to 17 mm

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Illustration of Mach band effectPercieved intensityis not a simple function of actual intensity

Digital Image ProcessingDigital Image Processing

Week 1Week 1

All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Optical illusions

Digital Image ProcessingDigital Image Processing

Week 1Week 1

ALBASTRU BLUEVERDE GREENGALBEN YELLOW ROSU REDPORTOCALIU ORANGEGALBEN YELLOWVISINIU BURGUNDYALB WHITE

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Lightbullachromatic or monochromatic light - light that is void of color the attribute of such light is its intensity or amount

gray level is used to describe monochromatic intensitybecause it ranges from black to grays and to white bull chromatic light spans the electromagnetic energy spectrumfrom approximately 043 to 079 μm

quantities that describe the quality of a chromatic light source radiance

the total amount of energy that flows from the light source and it isusually measured in watts (W) luminance

measured in lumens (lm) gives a measure of the amount of energy an observer perceives from a light source

Digital Image ProcessingDigital Image Processing

Week 1Week 1

For example light emitted from a source operating in the far infraredregion of the spectrum could have significant energy (radiance) butan observer would hardly perceive it its luminance would be almostzero

brightnessa subjective descriptor of light perception that is practicallyimpossible to measure It embodies the achromatic notion ofintensity and is one of the key factors in describing color sensation

Digital Image ProcessingDigital Image Processing

Week 1Week 1

the physical meaning is determined by the source of the image

( )f D f x y

Image generated from a physical process f(xy) ndash proportional to the energy radiated by the physical source

f(xy) ndash characterized by two components

i(xy) = illumination component the amount of source illumination incident on the scene being viewed

r(xy) = reflectance component the amount of illumination reflected by the objects in the scene

( ) ( ) ( )

0 ( ) 0 ( ) 1

f x y i x y r x y

i x y r x y

Digital Image ProcessingDigital Image Processing

Week 1Week 1

r(xy)=0 - total absorption r(xy)=1 - total reflectance

i(xy) ndash determined by the illumination source

r(xy) ndash determined by the characteristics of the imaged objects

is called gray (or intensity) scale

In practice

min 0 0 max min min min max max max( ) L l f x y L L i r L i r

indoor values without additional illuminationmin max10 1000L L

black whitemin max0 1 0 1 0 1L L L L l l L

min maxL L

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image Processing

Week 1

Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed

scene

converting a continuous image f to digital form

- digitizing (x y) is called sampling

- digitizing f(x y) is called quantization

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Continuous image projected onto a sensor array Result of image sampling and quantization

Digital Image Processing

Week 1

Representing Digital Images (xy) x = 01hellipM-1 y = 01hellipN-1 ndash spatial variables or spatial coordinates

(00) (01) (0 1)(10) (11) (1 1)

( )

( 10) ( 11) ( 1 1)

f f f Nf f f N

f x y

f M f M f M N

image element pixel

00 01 0 1

10 11 1 1

10 11 1 1

( ) ( )

N

i jN M N

i j

M M M N

a a aa f x i y j f i ja a a

Aa

a a a

f(00) ndash the upper left corner of the image

Digital Image Processing

Week 1

M N ge 0 L=2k

[0 1]i j i ja a L

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system Upper limit ndash determined by saturation lower limit - noise

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Number of bits required to store a digitized image

for 2 b M N k M N b N k

When an image can have 2k intensity levels the image is referred as a k-bit image 256 discrete intensity values ndash 8-bit image

Digital Image Processing

Week 1

Spatial and Intensity Resolution Spatial resolution ndash the smallest discernible detail in an image

Measures line pairs per unit distance dots (pixels) per unit distance

Image resolution = the largest number of discernible line pairs per unit distance

(eg 100 line pairs per mm)

Dots per unit distance are commonly used in printing and publishing

In US the measure is expressed in dots per inch (dpi)

(newspapers are printed with 75 dpi glossy brochures at 175 dpi)

Intensity resolution ndash the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations

L=2k ndash most common k = 8

Intensity resolution in practice is given by k (number of bits used to quantize intensity)

Digital Image Processing

Week 1

Fig1 Reducing spatial resolution 1250 dpi(upper left) 300 dpi (upper right)

150 dpi (lower left) 72 dpi (lower right)

Digital Image Processing

Week 1

Reducing the number of gray levels 256 128 64 32

Digital Image Processing

Week 1

Reducing the number of gray levels 16 8 4 2

Digital Image Processing

Week 1

Image Interpolation - used in zooming shrinking rotating and geometric corrections

Shrinking zooming ndash image resizing ndash image resampling methods

Interpolation is the process of using known data to estimate values at unknown locations

Suppose we have an image of size 500 times 500 pixels that has to be enlarged 15 times to

750times750 pixels One way to do this is to create an imaginary 750 times 750 grid with the

same spacing as the original and then shrink it so that it fits exactly over the original

image The pixel spacing in the 750 times 750 grid will be less than in the original image

Problem assignment of intensity-level in the new 750 times 750 grid

Nearest neighbor interpolation assign for every point in the new grid (750 times 750) the

intensity of the closest pixel (nearest neighbor) from the oldoriginal grid (500 times 500)

This technique has the tendency to produce undesirable effects like severe distortion of

straight edges

Digital Image Processing

Week 1

Bilinear interpolation ndash assign for the new (x y) location the following intensity ( )v x y a x b y c x y d

where the four coefficients are determined from the 4 equations in 4 unknowns that can

be written using the 4 nearest neighbors of point (x y)

Bilinear interpolation gives much better results than nearest neighbor interpolation with a

modest increase in computational effort

Bicubic interpolation ndash assign for the new (x y) location an intensity that involves the 16

nearest neighbors of the point 3 3

0 0

( ) i ji j

i jv x y c x y

The coefficients cij are obtained solving a 16x16 linear system

intensity levels of the 16 nearest neighbors of 3 3

0 0

( )i ji j

i jc x y x y

Digital Image Processing

Week 1

Generally bicubic interpolation does a better job of preserving fine detail than the

bilinear technique Bicubic interpolation is the standard used in commercial image editing

programs such as Adobe Photoshop and Corel Photopaint

Figure 2 (a) is the same as Fig 1 (d) which was obtained by reducing the resolution of

the 1250 dpi in Fig 1(a) to 72 dpi (the size shrank from 3692 times 2812 to 213 times 162) and

then zooming the reduced image back to its original size To generate Fig 1(d) nearest

neighbor interpolation was used (both for shrinking and zooming)

Figures 2(b) and (c) were generated using the same steps but using bilinear and bicubic

interpolation respectively Figures 2(d)+(e)+(f) were obtained by reducing the resolution

from 1250 dpi to 150 dpi (instead of 72 dpi)

Digital Image Processing

Week 1

Fig 2 ndash Interpolation examples for zooming and shrinking (nearest neighbor linear bicubic)

Digital Image Processing

Week 1

Neighbors of a Pixel

A pixel p at coordinates (x y) has 4 horizontal and vertical neighbors horizontal vertical( 1 ) ( 1 ) ( 1) ( 1)x y x y x y x y

This set of pixels called the 4-neighbors of p denoted by N4 (p)

The 4 diagonal neighbors of p have coordinates ( 1 1) ( 1 1) ( 1 1) ( 1 1)x y x y x y x y

and are denoted ND(p)

The horizontal vertical and diagonal neighbors are called the 8-neighbors of p denoted

N8 (p)

If (xy) is on the border of the image some of the neighbor locations in ND(p) and N8(p)

fall outside the image

Digital Image Processing

Week 1

Adjacency Connectivity Regions Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image V 01 (V=0 V=1)

- in a gray-scale image with 256 possible gray-levels V can be any subset of 0255

We consider 3 types of adjacency

(a) 4-adjacency two pixels p and q with values from V are 4-adjacent if 4( )q N p

(b) 8-adjacency two pixels p and q with values from V are 8-adjacent if 8( )q N p

(c) m-adjacency (mixed adjacency) two pixels p and q with values from V are

m-adjacent if

4( )q N p or

( )Dq N p and the set 4 4( ) ( )N p N q has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency It is introduced to eliminate the

ambiguities that often arise when 8-adjacency is used Consider the example

Digital Image Processing

Week 1

binary image

0 1 1 0 1 1 0 1 1

1 0 1 0 0 1 0 0 1 0

0 0 1 0 0 1 0 0 1

V

The three pixels at the top (first line) in the above example show multiple (ambiguous)

8-adjacency as indicated by the dashed lines This ambiguity is removed by using

m-adjacency

Digital Image Processing

Week 1

A (digital) path (or curve) from pixel p with coordinates (xy) to q with coordinates (st)

is a sequence of distinct pixels with coordinates

and are adjacent 0 0 1 1

1 1

( ) ( ) ( ) ( ) ( )( ) ( ) 12

n n

i i i i

x y x y x y x y s tx y x y i n

The length of the path is n If 0 0( ) ( )n nx y x y the path is closed

Depending on the type of adjacency considered the paths are 4- 8- or m-paths

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image R is a region of the image if R is a connected set

Two regions R1 and R2 are said to be adjacent if 1 2R R form a connected set Regions

that are not adjacent are said to be disjoint When referring to regions only 4- and

8-adjacency are considered

Digital Image Processing

Week 1

Suppose that an image contains K disjoint regions 1 kR k K none of which

touches the image border

the complement of 1

( )K

cu k u u

k

R R R R

We call all the points in Ru the foreground of the image and the points in ( )cuR the

background of the image

The boundary (border or contour) of a region R is the set of points that are adjacent to

points in the complement of R (R)c The border of an image is the set of pixels in the

region that have at least one background neighbor This definition is referred to as the

inner border to distinguish it from the notion of outer border which is the corresponding

border in the background

Digital Image Processing

Week 1

Distance measures

For pixels p q and z with coordinates (xy) (st) and (vw) respectively D is a distance

function or metric if

(a) D(p q) ge 0 D(p q) = 0 iff p=q

(b) D(p q) = D(q p)

(c) D(p z) le D(p q) + D(q z)

The Euclidean distance between p and q is defined as 1

2 2 2 22( ) ( ) ( ) ( ) ( )eD p q x s y t x s y t

The pixels q for which ( )eD p q r are the points contained in a disk of radius r

centered at (x y)

Digital Image Processing

Week 1

The D4 distance (also called city-block distance) between p and q is defined as

4( ) | | | |D p q x s y t

The pixels q for which 4( )D p q r form a diamond centered at (xy)

4

22 1 2

2 2 1 0 1 22 1 2

2

D

The pixels with D4 = 1 are the 4-neighbors of (x y)

The D8 distance (called the chessboard distance) between p and q is defined as

8( ) max| | | |D p q x s y t

The pixels q for which 8( )D p q r form a square centered at (x y)

Digital Image Processing

Week 1

8

2 2 2 2 22 1 1 1 2

2 2 1 0 1 22 1 1 1 22 2 2 2 2

D

The pixels with D8 = 1 are the 8-neighbors of (x y)

D4 and D8 distances are independent of any paths that might exist between p and q

because these distances involve only the coordinates of the point

Digital Image Processing

Week 1

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

11 12 11 12

21 22 21 22

a a b ba a b b

Array product

11 12 11 12 11 11 12 12

21 22 21 22 21 21 22 21

a a b b a b a ba a b b a b a b

Matrix product

11 12 11 12 11 11 12 21 11 12 12 21

21 22 21 22 21 11 22 21 21 12 22 22

a a b b a b a b a b a ba a b b a b a b a b a b

We assume array operations unless stated otherwise

Digital Image Processing

Week 1

Linear versus Nonlinear Operations

One of the most important classifications of an image-processing method is whether it is linear or nonlinear. Consider a general operator H that produces an output image g(x, y) from an input image f(x, y):

$H[f(x, y)] = g(x, y)$

H is said to be a linear operator if

$H[a f_1(x, y) + b f_2(x, y)] = a\, H[f_1(x, y)] + b\, H[f_2(x, y)]$

for arbitrary constants a, b and images $f_1$, $f_2$.

Example of a nonlinear operator: the maximum value of the pixels of image f,

$H[f] = \max f(x, y)$

Consider

$f_1 = \begin{bmatrix} 0 & 2 \\ 2 & 3 \end{bmatrix}$, $f_2 = \begin{bmatrix} 6 & 5 \\ 4 & 7 \end{bmatrix}$, $a = 1$, $b = -1$.

Then

$\max\!\left[ 1 \cdot f_1 + (-1) \cdot f_2 \right] = \max \begin{bmatrix} -6 & -3 \\ -2 & -4 \end{bmatrix} = -2$

but

$1 \cdot \max(f_1) + (-1) \cdot \max(f_2) = 3 - 7 = -4 \ne -2$

so the max operator is nonlinear.
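A small sketch verifying this computation numerically (NumPy assumed purely for illustration):

```python
import numpy as np

f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 5], [4, 7]])
a, b = 1, -1

lhs = np.max(a * f1 + b * f2)          # H[a f1 + b f2] = max([[-6,-3],[-2,-4]]) = -2
rhs = a * np.max(f1) + b * np.max(f2)  # a H[f1] + b H[f2] = 3 - 7 = -4
print(lhs, rhs)                        # -2 -4: not equal, so H = max is nonlinear
```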

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise to a noiseless image f(x, y):

$g(x, y) = f(x, y) + \eta(x, y)$

where the noise $\eta(x, y)$ is uncorrelated and has 0 average value.

For a random variable z with mean m, $E[(z - m)^2]$ is the variance ($E(\cdot)$ is the expected value). The covariance of two random variables $z_1$ and $z_2$ is defined as $E[(z_1 - m_1)(z_2 - m_2)]$. The two random variables are uncorrelated when their covariance is 0.

Objective: reduce noise by averaging a set of K noisy images $g_i(x, y)$ (a technique frequently used in image enhancement):

$\bar{g}(x, y) = \frac{1}{K} \sum_{i=1}^{K} g_i(x, y)$

If the noise satisfies the properties stated above, we have

$E\{\bar{g}(x, y)\} = f(x, y)$ and $\sigma^2_{\bar{g}(x, y)} = \frac{1}{K}\, \sigma^2_{\eta(x, y)}$

where $E\{\bar{g}(x, y)\}$ is the expected value of $\bar{g}$, and $\sigma^2_{\bar{g}(x, y)}$ and $\sigma^2_{\eta(x, y)}$ are the variances of $\bar{g}$ and $\eta$, respectively. The standard deviation (square root of the variance) at any point in the average image is

$\sigma_{\bar{g}(x, y)} = \frac{1}{\sqrt{K}}\, \sigma_{\eta(x, y)}$

As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because $E\{\bar{g}(x, y)\} = f(x, y)$, this means that $\bar{g}(x, y)$ approaches f(x, y) as the number of noisy images used in the averaging process increases.

An important application of image averaging is in the field of astronomy, where imaging under very low light levels frequently causes sensor noise to render single images virtually useless for analysis. Figure 3(a) shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels. Figures 3(b)-(f) show the results of averaging 5, 10, 20, 50, and 100 images, respectively.

Fig. 3 - (a) Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise; (b)-(f) results of averaging 5, 10, 20, 50, and 100 noisy images.
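A minimal simulation of the $1/\sqrt{K}$ behavior described above (the constant test image, noise seed, and image size are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 100.0)   # stand-in for the noiseless image f(x, y)
sigma = 64.0                   # noise standard deviation, as in Fig. 3

for K in (1, 5, 10, 20, 50, 100):
    g = f + rng.normal(0.0, sigma, size=(K, 64, 64))  # K noisy observations g_i
    g_bar = g.mean(axis=0)                            # averaged image
    print(K, round(g_bar.std(), 2))  # empirical std is close to sigma / sqrt(K)
```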

A frequent application of image subtraction is in the enhancement of differences between images.

Fig. 4 - (a) Infrared image of the Washington, D.C. area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images.

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between images (a) and (b). Black (0) values in Figure 4(c) indicate locations where there is no difference between images (a) and (b).
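A sketch of this LSB experiment on a synthetic image (the random stand-in image is an assumption; Fig. 4 itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)  # stand-in for Fig. 4(a)

b = a & 0xFE             # zero the least-significant bit, as in Fig. 4(b)
diff = a - b             # difference image: 0 or 1 at every pixel
print(np.unique(diff))   # [0 1] -- invisible until rescaled to [0, 255] for display
```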

Mask mode radiography:

$g(x, y) = f(x, y) - h(x, y)$

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images, called live images (denoted f(x, y)), of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium.

In g(x, y) we find the differences between h and f, as enhanced detail. Because the images are captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries in the area being observed.

Fig. 5 - Angiography subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c), enhanced.

An important application of image multiplication (and division) is shading correction. Suppose that an imaging sensor produces images in the form

$g(x, y) = f(x, y)\, h(x, y)$

where f(x, y) is the "perfect image" and h(x, y) is the shading function. When the shading function is known,

$f(x, y) = \frac{g(x, y)}{h(x, y)}$

When h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.

Fig. 6 - Shading correction: (a) shaded image of a tungsten filament, magnified 130 times; (b) shading pattern; (c) corrected image.

Another use of image multiplication is in masking, also called region of interest (ROI) operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of the ROI can be arbitrary, although it usually is rectangular.

Fig. 7 - (a) Digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b).
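A minimal ROI-masking sketch (the toy image and mask placement are assumptions):

```python
import numpy as np

img = np.arange(36, dtype=np.uint8).reshape(6, 6)   # toy image

mask = np.zeros_like(img)
mask[2:5, 1:4] = 1            # one rectangular ROI; several ROIs are allowed

roi = img * mask              # pixels outside the ROI are forced to 0 (black)
```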

In practice, most images are displayed using 8 bits, so image values are expected to lie in the range [0, 255]. For TIFF and JPEG images, conversion to this range is automatic, but the conversion depends on the system used. The difference of two images can produce values in the range [-255, 255], and the addition of two images values in the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure is to compute

$f_m = f - \min(f)$

which creates an image whose minimum value is 0, and then

$f_s = K \left[ \frac{f_m}{\max(f_m)} \right]$

whose values span the range [0, K] (K = 255 for an 8-bit display).
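A sketch of this scaling procedure (the guard against a constant image, where max(f_m) = 0, is an added assumption):

```python
import numpy as np

def rescale_intensity(f, K=255.0):
    """Map an image of arbitrary range into [0, K] (K = 255 for 8-bit display)."""
    fm = f - f.min()                       # f_m = f - min(f)
    return K * fm / max(fm.max(), 1e-12)   # f_s = K * f_m / max(f_m)

diff = np.array([[-255.0, 0.0], [128.0, 255.0]])   # e.g. a difference image
print(rescale_intensity(diff))   # values now span [0, 255]
```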

Spatial Operations

Spatial operations are performed directly on the pixels of a given image. There are three categories of spatial operations:

- single-pixel operations
- neighborhood operations
- geometric spatial transformations

Single-pixel operations

Single-pixel operations change the values of intensity for the individual pixels:

$s = T(z)$

where z is the intensity of a pixel in the original image and s is the intensity of the corresponding pixel in the processed image.

Neighborhood operations

Let $S_{xy}$ denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y) based on the values of the intensities of the points in $S_{xy}$. For example, if $S_{xy}$ is a rectangular neighborhood of size m x n centered at (x, y), we can assign the new value of intensity by computing the average value of the pixels in $S_{xy}$:

$g(x, y) = \frac{1}{mn} \sum_{(r, c) \in S_{xy}} f(r, c)$

The net effect is to perform local blurring in the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
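A direct, unoptimized sketch of this neighborhood average (zero padding at the borders is an assumption; real implementations typically use optimized filtering routines):

```python
import numpy as np

def local_mean(f, m=3, n=3):
    """Average over an m x n neighborhood S_xy (zero padding at the borders)."""
    pm, pn = m // 2, n // 2
    padded = np.pad(f.astype(float), ((pm, pm), (pn, pn)))
    g = np.empty(f.shape, dtype=float)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            g[x, y] = padded[x:x + m, y:y + n].mean()
    return g

blurred = local_mean(np.eye(8) * 255.0, 3, 3)   # small details get smeared out
```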

Geometric spatial transformations and image registration

Geometric transformations modify the spatial relationship between pixels in an image. These transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules).

A geometric transformation consists of 2 basic operations:

1. a spatial transformation of coordinates;
2. intensity interpolation, which assigns intensity values to the spatially transformed pixels.

The coordinate system transformation:

$(x, y) = T[(v, w)]$

where (v, w) are pixel coordinates in the original image and (x, y) are pixel coordinates in the transformed image.

For example, $T[(v, w)] = (v/2,\, w/2)$ shrinks the original image to half its size in both spatial directions.

Affine transform:

$[x \;\; y \;\; 1] = [v \;\; w \;\; 1]\, T = [v \;\; w \;\; 1] \begin{bmatrix} t_{11} & t_{12} & 0 \\ t_{21} & t_{22} & 0 \\ t_{31} & t_{32} & 1 \end{bmatrix}$   (AT)

that is, $x = t_{11} v + t_{21} w + t_{31}$ and $y = t_{12} v + t_{22} w + t_{32}$.

This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3 x 3 matrix equal to the matrix product of the scaling, rotation, and translation matrices from Table 1.

[Table 1 - Affine transformations: the matrices T for identity, scaling, rotation, translation, and shear]

The preceding transformations relocate pixels on an image to new locations. To complete the process, we have to assign intensity values to those locations. This task is done by using intensity interpolation (nearest neighbor, bilinear, or bicubic interpolation).

In practice, we can use equation (AT) in two basic ways.

Forward mapping: scan the pixels of the input image and, for each (v, w), compute the spatial location (x, y) of the corresponding pixel in the new image using (AT) directly. Problems:

- intensity assignment when 2 or more pixels in the original image are transformed to the same location in the output image;
- some output locations have no correspondent in the original image (no intensity assignment).

Inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image:

$(v, w) = T^{-1}[(x, y)]$

Then interpolate among the nearest input pixels to determine the intensity of the output pixel value. Inverse mappings are more efficient to implement than forward mappings, and they are used in numerous commercial implementations of spatial transformations (MATLAB, for example).
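A sketch of inverse mapping for the affine transform (AT), with nearest-neighbor interpolation (the scaling matrix and toy image are illustrative assumptions):

```python
import numpy as np

def affine_warp(img, T):
    """Apply (AT) by inverse mapping with nearest-neighbor interpolation."""
    T_inv = np.linalg.inv(T)
    out = np.zeros_like(img)
    for x in range(out.shape[0]):
        for y in range(out.shape[1]):
            v, w, _ = np.array([x, y, 1.0]) @ T_inv   # (v, w) = [x y 1] T^-1
            v, w = int(round(v)), int(round(w))       # nearest input pixel
            if 0 <= v < img.shape[0] and 0 <= w < img.shape[1]:
                out[x, y] = img[v, w]
    return out

img = np.arange(64, dtype=float).reshape(8, 8)
T = np.diag([1.5, 1.5, 1.0])        # scaling matrix: resize by 1.5 in v and w
zoomed = affine_warp(img, T)
```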

Image registration: align two or more images of the same scene. In image registration we have available the input and output images, but the specific transformation that produced the output image from the input is generally unknown. The problem is to estimate the transformation function and then use it to register the two images. For example:

- it may be of interest to align (register) two or more images taken at approximately the same time but using different imaging systems (an MRI scanner and a PET scanner, say);
- or to align images of a given location taken by the same instrument at different moments in time (satellite images).

The problem can be solved using tie points (also called control points), which are corresponding points whose locations are known precisely in the input and reference images.

How to select tie points:

- by selecting them interactively;
- by using algorithms that try to detect these points automatically;
- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors; these objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points.

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points both on the input image and on the reference image. A simple model based on a bilinear approximation is given by

$x = c_1 v + c_2 w + c_3 v w + c_4$
$y = c_5 v + c_6 w + c_7 v w + c_8$

where (v, w) and (x, y) are the coordinates of corresponding tie points (we get an 8 x 8 linear system for the $c_i$).
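A sketch of estimating the $c_i$ from 4 tie points; note that the 8 x 8 system decouples into two 4 x 4 systems with the same coefficient matrix (the tie-point coordinates below are made up purely for illustration):

```python
import numpy as np

# 4 tie points: (v, w) in the input image, (x, y) in the reference image.
vw = np.array([[10.0, 10.0], [10.0, 90.0], [90.0, 10.0], [90.0, 90.0]])
xy = np.array([[12.0, 11.0], [ 9.0, 92.0], [93.0,  8.0], [88.0, 95.0]])

# One row per tie point: x = c1 v + c2 w + c3 v w + c4 (same matrix for y).
A = np.column_stack([vw[:, 0], vw[:, 1], vw[:, 0] * vw[:, 1], np.ones(4)])
c1_4 = np.linalg.solve(A, xy[:, 0])   # c1, c2, c3, c4
c5_8 = np.linalg.solve(A, xy[:, 1])   # c5, c6, c7, c8
```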

When 4 tie points are insufficient to obtain satisfactory registration, an approach used frequently is to select a larger number of tie points and, using this new set, subdivide the image into rectangular subregions marked by groups of 4 tie points. On each subregion we apply the transformation model described above. The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometrical distortion.

(a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c).

Probabilistic Methods

Let $z_i$, $i = 0, 1, \ldots, L-1$, denote the values of all possible intensities in an M x N digital image, and let $p(z_k)$ denote the probability that the intensity level $z_k$ occurs in the given image:

$p(z_k) = \frac{n_k}{MN}$

where $n_k$ is the number of times that intensity $z_k$ occurs in the image (MN is the total number of pixels in the image). Clearly,

$\sum_{k=0}^{L-1} p(z_k) = 1$

The mean (average) intensity of an image is given by

$m = \sum_{k=0}^{L-1} z_k\, p(z_k)$

The variance of the intensities is

$\sigma^2 = \sum_{k=0}^{L-1} (z_k - m)^2\, p(z_k)$

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation ($\sigma$) is used.

The n-th moment of a random variable z about the mean is defined as

$\mu_n(z) = \sum_{k=0}^{L-1} (z_k - m)^n\, p(z_k)$

Note that $\mu_0(z) = 1$, $\mu_1(z) = 0$, and $\mu_2(z) = \sigma^2$. The sign of the third moment indicates how the intensities are distributed:

$\mu_3(z) > 0$: the intensities are biased to values higher than the mean;
$\mu_3(z) < 0$: the intensities are biased to values lower than the mean;
$\mu_3(z) = 0$: the intensities are distributed approximately equally on both sides of the mean.

Fig. 1 - (a) Low contrast; (b) medium contrast; (c) high contrast.

Figure 1(a): standard deviation 14.3 (variance = 204.5)
Figure 1(b): standard deviation 31.6 (variance = 998.6)
Figure 1(c): standard deviation 49.2 (variance = 2420.6)
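A sketch computing these histogram-based statistics (NumPy assumed; img must hold non-negative integer intensities for bincount):

```python
import numpy as np

def intensity_stats(img, L=256):
    """Mean, variance and third moment from the intensity histogram."""
    n_k = np.bincount(img.ravel(), minlength=L)   # histogram counts n_k
    p = n_k / img.size                            # p(z_k) = n_k / (M N)
    z = np.arange(L)
    m = np.sum(z * p)                             # mean intensity
    mu2 = np.sum((z - m) ** 2 * p)                # variance (contrast)
    mu3 = np.sum((z - m) ** 3 * p)                # sign indicates the bias
    return m, mu2, mu3

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(100, 100), dtype=np.uint8)
print(intensity_stats(img))
```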

Intensity Transformations and Spatial Filtering

$g(x, y) = T[f(x, y)]$

where f(x, y) is the input image, g(x, y) is the output image, and T is an operator on f defined over a neighborhood of (x, y). The neighborhood of the point (x, y), $S_{xy}$, usually is rectangular, centered on (x, y), and much smaller in size than the image.

In spatial filtering, the operator T (the neighborhood and the operation applied on it) is called a spatial filter (also spatial mask, kernel, template, or window).

When $S_{xy}$ has size 1 x 1 (a single pixel), T becomes an intensity (gray-level or mapping) transformation function:

$s = T(r)$

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2 (left): T produces an output image of higher contrast than the original by darkening the intensity levels below k and brightening the levels above k; this technique is called contrast stretching.

Fig. 2 - Intensity transformation functions: left, contrast stretching; right, thresholding function.

Figure 2 (right): T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L-1] is obtained using the function

$s = T(r) = L - 1 - r$

This is the equivalent of a photographic negative, a technique suited for enhancing white or gray detail embedded in dark regions of an image.

Original image (left) and negative image (right).

Log Transformations:

$s = T(r) = c\, \log(1 + r)$

where c is a positive constant and $r \ge 0$.

Some basic intensity transformation functions.

This transformation maps a narrow range of low intensity values in the input into a wider range of output values. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values. The opposite is true of the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a): intensity values in the range 0 to $1.5 \times 10^6$.
Figure 4(b): log transformation of Figure 4(a) with c = 1; range 0 to 6.2.

Fig. 4 - (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.
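A sketch of the log transformation; the base-10 log is assumed here so that the quoted 0 to 6.2 output range for inputs up to $1.5 \times 10^6$ comes out right:

```python
import numpy as np

def log_transform(r, c=1.0):
    """s = c log(1 + r); base-10 log, matching the 0-6.2 range quoted above."""
    return c * np.log10(1.0 + r)

spectrum = np.array([0.0, 10.0, 1.0e3, 1.5e6])    # values spanning 0 .. 1.5e6
print(log_transform(spectrum))   # [0.  1.04  3.  6.18] -- then rescale for display
```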

Power-Law (Gamma) Transformations:

$s = T(r) = c\, r^{\gamma}$

where c and γ are positive constants (sometimes written $s = c (r + \varepsilon)^{\gamma}$ to account for an offset when the input is zero).

Plots of the gamma transformation for different values of γ (c = 1).

Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. Curves with γ > 1 have the opposite effect of those generated with γ < 1; c = γ = 1 gives the identity transformation.

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.

(a) Aerial image; (b)-(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively.
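A sketch of the gamma transformation with intensities normalized to [0, 1] before applying the power law (the normalization convention is an assumption):

```python
import numpy as np

def gamma_transform(img, gamma, c=1.0, L=256):
    """s = c r^gamma, computed on intensities normalized to [0, 1]."""
    r = img.astype(float) / (L - 1)
    s = c * r ** gamma
    return np.clip(s * (L - 1), 0, L - 1).astype(np.uint8)

img = np.linspace(0, 255, 256).astype(np.uint8).reshape(16, 16)
dark = gamma_transform(img, 3.0)     # gamma > 1 darkens a washed-out image
bright = gamma_transform(img, 0.4)   # gamma < 1 brightens dark regions
```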

Piecewise-Linear Transformation Functions

Contrast stretching is a process that expands the range of intensity levels in an image so that it spans the full intensity range of the recording tool or display device.

Fig. 5 - (a) Piecewise-linear transformation function; (b) low-contrast image; (c) result of contrast stretching; (d) result of thresholding.

$T(r) = \begin{cases} \dfrac{s_1}{r_1}\, r, & r \in [0, r_1] \\ s_1 + \dfrac{s_2 - s_1}{r_2 - r_1}\,(r - r_1), & r \in [r_1, r_2] \\ s_2 + \dfrac{L - 1 - s_2}{L - 1 - r_2}\,(r - r_2), & r \in [r_2, L-1] \end{cases}$

If $r_1 = s_1$ and $r_2 = s_2$, the transformation is the identity (no change). If $r_1 = r_2$, $s_1 = 0$, and $s_2 = L - 1$, the transformation is a thresholding function.

Figure 5(b) shows an 8-bit image with low contrast. Figure 5(c) shows the result of contrast stretching, obtained by setting $(r_1, s_1) = (r_{min}, 0)$ and $(r_2, s_2) = (r_{max}, L-1)$, where $r_{min}$ and $r_{max}$ denote the minimum and maximum gray levels in the image, respectively. Thus, the transformation function stretched the levels linearly from their original range to the full range [0, L-1].

Figure 5(d) shows the result of using the thresholding function, with $(r_1, s_1) = (m, 0)$ and $(r_2, s_2) = (m, L-1)$, where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.
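A sketch of the piecewise-linear stretch via linear interpolation through the breakpoints (0, 0), $(r_1, s_1)$, $(r_2, s_2)$, (L-1, L-1); it assumes $0 < r_1 < r_2 < L-1$, so the thresholding case $r_1 = r_2$ would need separate handling:

```python
import numpy as np

def contrast_stretch(img, r1, s1, r2, s2, L=256):
    """Piecewise-linear mapping through (r1, s1) and (r2, s2); assumes r1 < r2."""
    xp = [0.0, float(r1), float(r2), float(L - 1)]   # breakpoints of T(r)
    fp = [0.0, float(s1), float(s2), float(L - 1)]
    return np.interp(img.astype(float), xp, fp).astype(np.uint8)

rng = np.random.default_rng(3)
img = rng.integers(80, 150, size=(64, 64), dtype=np.uint8)        # low contrast
stretched = contrast_stretch(img, img.min(), 0, img.max(), 255)   # as in Fig. 5(c)
```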

Intensity-level slicing

Intensity-level slicing means highlighting a specific range of intensities in an image. There are two basic approaches:

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities; the corresponding transformation highlights the range [A, B] and reduces all other intensities to a lower level;
2. brighten (or darken) the desired range of intensities, but leave all other intensities in the image unchanged; the corresponding transformation highlights the range [A, B] and preserves all other intensities.

Figure 6 (left) shows an aortic angiogram near the kidney. The purpose of intensity slicing here is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique to a band near the top of the scale of intensities. This type of enhancement produces a binary image, which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example).

In Figure 6 (right), the second technique was used: a band of intensities around the mean intensity of the mid-gray range was set to black, while the other intensities remain unchanged.

Fig. 6 - Aortic angiogram and intensity-sliced versions.

Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255] with an 8-bit representation in base 2. This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 contains the lowest-order bit, plane 8 the highest-order bit).

The binary image for the 8th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1. Bit-plane slicing is useful for analyzing the relative importance of each bit in the image, which helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
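A sketch of bit-plane extraction, with a check that the highest-order plane matches the thresholding description above:

```python
import numpy as np

def bit_planes(img):
    """Return the 8 bit planes of an 8-bit image (index 0 = lowest-order bit)."""
    return [(img >> k) & 1 for k in range(8)]

img = np.arange(256, dtype=np.uint8).reshape(16, 16)
planes = bit_planes(img)
# the highest-order plane equals thresholding at 128, as described above
print(np.array_equal(planes[7], (img >= 128).astype(np.uint8)))   # True
```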


from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 30: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Automated visual inspection of manufactured goods

(a) a circuit board controller; (b) packaged pills; (c) bottles; (d) air bubbles in a clear-plastic product; (e) cereal; (f) image of an intraocular implant


Imaging in the Microwave Band

The dominant application of imaging in the microwave band is radar.

• radar has the ability to collect data over virtually any region at any time, regardless of weather or ambient light conditions

• some radar waves can penetrate clouds; under certain conditions they can also penetrate vegetation, ice, and dry sand

• sometimes radar is the only way to explore inaccessible regions of the Earth's surface

An imaging radar works like a flash camera: it provides its own illumination (microwave pulses) to light an area on the ground and take a snapshot image. Instead of a camera lens, a radar uses an antenna and a digital device to record the images. In a radar image one can see only the microwave energy that was reflected back toward the radar antenna.


Spaceborne radar image of mountains in southeast Tibet


Imaging in the Radio Band

medicine, astronomy

MRI = Magnetic Resonance Imaging

This technique places the patient in a powerful magnet and passes short pulses of radio waves through his or her body.

Each pulse causes a responding pulse of radio waves to be emitted by the patient's tissues.

The location from which these signals originate and their strength are determined by a computer which produces a 2D picture of a section of the patient


MRI images of a human knee (left) and spine (right)


Images of the Crab Pulsar covering the electromagnetic spectrum

Gamma X-ray Optical Infrared Radio


Other Imaging Modalities

acoustic imaging, electron microscopy, synthetic (computer-generated) imaging

Imaging using sound: geological exploration, industry, medicine

Mineral and oil exploration

For image acquisition over land, one of the main approaches is to use a large truck and a large flat steel plate. The plate is pressed on the ground by the truck, and the truck is vibrated through a frequency spectrum up to 100 Hz. The strength and the speed of the returning sound waves are determined by the composition of the Earth below the surface. These are analyzed by a computer, and images are generated from the resulting analysis.


Biometry - iris


Biometry - fingerprint


Face detection and recognition


Gender identification


Image morphing


Fundamental Steps in DIP

methods whose input and output are images

methods whose inputs are images but whose outputs are attributes extracted from those images


Outputs are images

• image acquisition
• image filtering and enhancement
• image restoration
• color image processing
• wavelets and multiresolution processing
• compression
• morphological processing


Outputs are attributes

• morphological processing
• segmentation
• representation and description
• object recognition


Image acquisition – may involve preprocessing such as scaling.

Image enhancement

• manipulating an image so that the result is more suitable than the original for a specific operation
• enhancement is problem oriented
• there is no general 'theory' of image enhancement
• enhancement uses subjective methods for image improvement
• enhancement is based on human subjective preferences regarding what is a "good" enhancement result


Image restoration

• improving the appearance of an image
• restoration is objective – the techniques for restoration are based on mathematical or probabilistic models of image degradation

Color image processing

• fundamental concepts in color models
• basic color processing in a digital domain

Wavelets and multiresolution processing

representing images in various degrees of resolution


Compression

reducing the storage required to save an image, or the bandwidth required to transmit it

Morphological processing

• tools for extracting image components that are useful in the representation and description of shape
• a transition from processes that output images to processes that output image attributes


Segmentation

• partitioning an image into its constituent parts or objects
• autonomous segmentation is one of the most difficult tasks of DIP
• the more accurate the segmentation, the more likely recognition is to succeed

Representation and description (almost always follows segmentation)

• segmentation produces either the boundary of a region or all the points in the region itself
• converting the data produced by segmentation to a form suitable for computer processing


• boundary representation: the focus is on external shape characteristics, such as corners or inflections
• complete region: the focus is on internal properties, such as texture or skeletal shape
• description is also called feature extraction – extracting attributes that result in some quantitative information of interest, or are basic for differentiating one class of objects from another

Object recognition

the process of assigning a label (e.g., "vehicle") to an object based on its descriptors

Knowledge database


Simplified diagram of a cross section of the human eye


Three membranes enclose the eye: the cornea and sclera (outer cover), the choroid, and the retina.

The cornea is a tough, transparent tissue that covers the anterior surface of the eye.

Continuous with the cornea, the sclera is an opaque membrane that encloses the remainder of the optic globe.

The choroid lies directly below the sclera. This membrane contains a network of blood vessels (the major source of nutrition of the eye). The choroid is pigmented and helps reduce the amount of light entering the eye. The choroid is divided (at its anterior extreme) into the ciliary body and the iris. The iris contracts and expands to control the amount of light.


The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to the ciliary body (60–70% water, about 6% fat, the rest protein). The lens is slightly yellow and absorbs approximately 8% of the visible light (infrared and ultraviolet light are absorbed by proteins in the lens).

The innermost membrane is the retina. When the eye is properly focused, light from an object outside the eye is imaged on the retina. Vision is possible because of the distribution of discrete light receptors on the surface of the retina: cones and rods (6–7 million cones, 75–150 million rods).

Cones: located in the central part of the retina (the fovea); sensitive to colors; vision of detail; each cone is linked to its own nerve; cone vision = photopic or bright-light vision.

Fovea = the place where the image of the object of interest falls.


Rods: distributed over all the retinal surface; several rods are connected to a single nerve; not specialized in detail vision; serve to give a general, overall picture of the field of view; not involved in color vision; sensitive to low levels of illumination.

Blind spot: region without receptors.


Image formation in the eye

Ordinary photographic camera: the lens has a fixed focal length; focusing at various distances is done by modifying the distance between the lens and the image plane (where the film or imaging chip is located).

Human eye: the distance between the lens and the retina (the imaging region) is fixed; the focal length needed to achieve proper focus is obtained by varying the shape of the lens (the fibers in the ciliary body accomplish this, flattening or thickening the lens for distant or near objects, respectively).

distance between lens and retina along the visual axis = 17 mm

range of focal lengths = 14 mm to 17 mm


Illustration of the Mach band effect. Perceived intensity is not a simple function of actual intensity.


All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter


Optical illusions


(Romanian color words) ALBASTRU = BLUE, VERDE = GREEN, GALBEN = YELLOW, ROSU = RED, PORTOCALIU = ORANGE, GALBEN = YELLOW, VISINIU = BURGUNDY, ALB = WHITE


Light

• achromatic or monochromatic light – light that is void of color; the only attribute of such light is its intensity, or amount

gray level is used to describe monochromatic intensity, because it ranges from black, to grays, and finally to white

• chromatic light spans the electromagnetic energy spectrum from approximately 0.43 to 0.79 μm

Quantities that describe the quality of a chromatic light source:

radiance: the total amount of energy that flows from the light source; it is usually measured in watts (W)

luminance: measured in lumens (lm); gives a measure of the amount of energy an observer perceives from a light source


For example, light emitted from a source operating in the far infrared region of the spectrum could have significant energy (radiance), but an observer would hardly perceive it; its luminance would be almost zero.

brightness: a subjective descriptor of light perception that is practically impossible to measure. It embodies the achromatic notion of intensity and is one of the key factors in describing color sensation.


The physical meaning of f(x, y) is determined by the source of the image.

For an image generated from a physical process, f(x, y) is proportional to the energy radiated by the physical source, so f(x, y) is nonzero and finite: 0 < f(x, y) < ∞.

f(x, y) is characterized by two components:

i(x, y) = illumination component: the amount of source illumination incident on the scene being viewed;

r(x, y) = reflectance component: the amount of illumination reflected by the objects in the scene;

f(x, y) = i(x, y) · r(x, y), with 0 < i(x, y) < ∞ and 0 < r(x, y) < 1.


r(x, y) = 0: total absorption; r(x, y) = 1: total reflectance.

i(x, y) is determined by the illumination source; r(x, y) is determined by the characteristics of the imaged objects.

In practice: L_min ≤ l = f(x, y) ≤ L_max, where L_min = i_min · r_min and L_max = i_max · r_max.

Typical indoor values without additional illumination: L_min ≈ 10, L_max ≈ 1000.

The interval [L_min, L_max] is called the gray (or intensity) scale. In practice it is shifted to [0, L-1]: l = 0 is black, l = L-1 is white, and all intermediate values are shades of gray.


Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed

scene

Converting a continuous image f to digital form:
- digitizing the coordinate values (x, y) is called sampling;
- digitizing the amplitude values f(x, y) is called quantization.


Continuous image projected onto a sensor array; result of image sampling and quantization.


Representing Digital Images: (x, y), x = 0, 1, …, M-1, y = 0, 1, …, N-1 – spatial variables or spatial coordinates.

f(x, y) =
[ f(0,0)     f(0,1)     …   f(0,N-1)
  f(1,0)     f(1,1)     …   f(1,N-1)
  …
  f(M-1,0)   f(M-1,1)   …   f(M-1,N-1) ]

Each element of this array is called an image element, picture element, or pixel.

In matrix notation: A = [a_ij], where a_ij = f(x = i, y = j) = f(i, j), i = 0, …, M-1, j = 0, …, N-1.

f(0,0) – the upper left corner of the image.


M, N > 0; the number of intensity levels is L = 2^k; the intensity values satisfy a_ij ∈ [0, L-1].

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system Upper limit ndash determined by saturation lower limit - noise


Number of bits required to store a digitized image: b = M × N × k; for M = N, b = N² k.

For example, a 1024 × 1024 image with k = 8 requires b = 1024 · 1024 · 8 = 8,388,608 bits (1 MB).

When an image can have 2^k intensity levels, the image is referred to as a k-bit image; an image with 256 discrete intensity values is an 8-bit image.


Spatial and Intensity Resolution

Spatial resolution – the smallest discernible detail in an image.

Measures: line pairs per unit distance; dots (pixels) per unit distance.

Image resolution = the largest number of discernible line pairs per unit distance

(eg 100 line pairs per mm)

Dots per unit distance are commonly used in printing and publishing

In the US the measure is expressed in dots per inch (dpi):

newspapers are printed at 75 dpi, glossy brochures at 175 dpi.

Intensity resolution ndash the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations

L = 2^k – most commonly k = 8.

Intensity resolution in practice is given by k (number of bits used to quantize intensity)


Fig 1 – Reducing spatial resolution: 1250 dpi (upper left), 300 dpi (upper right), 150 dpi (lower left), 72 dpi (lower right).


Reducing the number of gray levels: 256, 128, 64, 32.


Reducing the number of gray levels: 16, 8, 4, 2.


Image Interpolation – used in zooming, shrinking, rotating, and geometric corrections.

Shrinking and zooming (image resizing) are image resampling methods.

Interpolation is the process of using known data to estimate values at unknown locations

Suppose we have an image of size 500 times 500 pixels that has to be enlarged 15 times to

750times750 pixels One way to do this is to create an imaginary 750 times 750 grid with the

same spacing as the original and then shrink it so that it fits exactly over the original

image The pixel spacing in the 750 times 750 grid will be less than in the original image

Problem assignment of intensity-level in the new 750 times 750 grid

Nearest neighbor interpolation: assign to every point in the new grid (750 × 750) the intensity of the closest pixel (its nearest neighbor) in the original grid (500 × 500).

This technique has the tendency to produce undesirable effects like severe distortion of

straight edges


Bilinear interpolation – assign to the new (x, y) location the intensity v(x, y) = a·x + b·y + c·x·y + d,

where the four coefficients are determined from the 4 equations in 4 unknowns that can

be written using the 4 nearest neighbors of point (x y)

Bilinear interpolation gives much better results than nearest neighbor interpolation with a

modest increase in computational effort
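As a small illustration, a Python/NumPy sketch of nearest-neighbor and bilinear sampling at one real-valued location (the course labs use MATLAB; the function and variable names here are ours, not from the course):

import numpy as np

def interpolate(img, x, y, method="bilinear"):
    # Sample image img (2-D float array) at real-valued coordinates (x, y),
    # using the (row, column) convention of the text.
    M, N = img.shape
    if method == "nearest":
        r, c = int(round(x)), int(round(y))
        return img[min(r, M - 1), min(c, N - 1)]
    # bilinear: v(x, y) = a*x + b*y + c*x*y + d, fitted on the 4 neighbors
    r0, c0 = int(np.floor(x)), int(np.floor(y))
    r1, c1 = min(r0 + 1, M - 1), min(c0 + 1, N - 1)
    dr, dc = x - r0, y - c0
    return ((1 - dr) * (1 - dc) * img[r0, c0] + (1 - dr) * dc * img[r0, c1]
            + dr * (1 - dc) * img[r1, c0] + dr * dc * img[r1, c1])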

Bicubic interpolation – assign to the new (x, y) location an intensity that involves the 16 nearest neighbors of the point:

v(x, y) = Σ_{i=0..3} Σ_{j=0..3} c_ij · x^i · y^j.

The 16 coefficients c_ij are obtained by solving a 16×16 linear system written using the intensity levels of the 16 nearest neighbors of (x, y).


Generally bicubic interpolation does a better job of preserving fine detail than the

bilinear technique Bicubic interpolation is the standard used in commercial image editing

programs such as Adobe Photoshop and Corel Photopaint

Figure 2(a) is the same as Fig 1(d), which was obtained by reducing the resolution of the 1250 dpi image in Fig 1(a) to 72 dpi (the size shrank from 3692 × 2812 to 213 × 162 pixels) and then zooming the reduced image back to its original size. To generate Fig 1(d), nearest neighbor interpolation was used (both for shrinking and zooming).

Figures 2(b) and (c) were generated using the same steps, but using bilinear and bicubic interpolation, respectively. Figures 2(d)-(f) were obtained by reducing the resolution from 1250 dpi to 150 dpi (instead of 72 dpi).


Fig 2 – Interpolation examples for zooming and shrinking (nearest neighbor, bilinear, bicubic).


Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors: (x+1, y), (x-1, y), (x, y+1), (x, y-1).

This set of pixels, called the 4-neighbors of p, is denoted by N4(p).

The 4 diagonal neighbors of p have coordinates (x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1) and are denoted ND(p).

The horizontal vertical and diagonal neighbors are called the 8-neighbors of p denoted

N8 (p)

If (xy) is on the border of the image some of the neighbor locations in ND(p) and N8(p)

fall outside the image
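A small sketch (helper names are ours) that returns these neighbor sets, dropping locations that fall outside the image:

def neighbors(x, y, M, N):
    # Return (N4, ND, N8) for pixel (x, y) in an M x N image,
    # keeping only coordinates inside the image.
    n4 = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    nd = [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]
    inside = lambda p: 0 <= p[0] < M and 0 <= p[1] < N
    n4 = [p for p in n4 if inside(p)]
    nd = [p for p in nd if inside(p)]
    return n4, nd, n4 + nd  # N8 = N4 together with ND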


Adjacency Connectivity Regions Boundaries

Denote by V the set of intensity levels used to define adjacency:
- in a binary image, V ⊆ {0, 1} (V = {0} or V = {1});
- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, …, 255}.

We consider 3 types of adjacency:

(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if q ∈ N4(p);

(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if q ∈ N8(p);

(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if

q ∈ N4(p), or

q ∈ ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.

Mixed adjacency is a modification of 8-adjacency It is introduced to eliminate the

ambiguities that often arise when 8-adjacency is used Consider the example


binary image, V = {1}:

0 1 1
0 1 0
0 0 1

(the same 3×3 arrangement is shown three times in the figure: with 8-adjacency, and with m-adjacency)

The three pixels at the top (first line) in the above example show multiple (ambiguous)

8-adjacency as indicated by the dashed lines This ambiguity is removed by using

m-adjacency


A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

(x0, y0), (x1, y1), …, (xn, yn),

where (x0, y0) = (x, y), (xn, yn) = (s, t), and the pixels (xi, yi) and (xi-1, yi-1) are adjacent for i = 1, 2, …, n.

The length of the path is n. If (x0, y0) = (xn, yn), the path is closed.

Depending on the type of adjacency considered the paths are 4- 8- or m-paths

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image. R is a region of the image if R is a connected set.

Two regions R1 and R2 are said to be adjacent if R1 ∪ R2 forms a connected set. Regions that are not adjacent are said to be disjoint. When referring to regions, only 4- and 8-adjacency are considered.


Suppose that an image contains K disjoint regions R_k, k = 1, …, K, none of which touches the image border.

Let R_u denote the union of all K regions, R_u = R_1 ∪ … ∪ R_K, and let (R_u)^c denote its complement.

We call all the points in R_u the foreground of the image, and the points in (R_u)^c the background of the image.

The boundary (border or contour) of a region R is the set of points that are adjacent to

points in the complement of R (R)c The border of an image is the set of pixels in the

region that have at least one background neighbor This definition is referred to as the

inner border to distinguish it from the notion of outer border which is the corresponding

border in the background


Distance measures

For pixels p q and z with coordinates (xy) (st) and (vw) respectively D is a distance

function or metric if

(a) D(p, q) ≥ 0, with D(p, q) = 0 iff p = q;
(b) D(p, q) = D(q, p);
(c) D(p, z) ≤ D(p, q) + D(q, z).

The Euclidean distance between p and q is defined as

D_e(p, q) = [(x - s)² + (y - t)²]^(1/2).

The pixels q for which D_e(p, q) ≤ r are the points contained in a disk of radius r centered at (x, y).


The D4 distance (also called city-block distance) between p and q is defined as

D4(p, q) = |x - s| + |y - t|.

The pixels q for which D4(p, q) ≤ r form a diamond centered at (x, y). For example, the pixels with D4 ≤ 2:

        2
      2 1 2
    2 1 0 1 2
      2 1 2
        2

The pixels with D4 = 1 are the 4-neighbors of (x, y).

The D8 distance (also called chessboard distance) between p and q is defined as

D8(p, q) = max(|x - s|, |y - t|).

The pixels q for which D8(p, q) ≤ r form a square centered at (x, y).


For example, the pixels with D8 ≤ 2:

2 2 2 2 2
2 1 1 1 2
2 1 0 1 2
2 1 1 1 2
2 2 2 2 2

The pixels with D8 = 1 are the 8-neighbors of (x, y).

D4 and D8 distances are independent of any paths that might exist between p and q

because these distances involve only the coordinates of the point
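A quick sketch of the three distances (function names are ours):

import math

def d_euclid(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])     # De

def d4(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])      # city block

def d8(p, q):
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))  # chessboard

# e.g. d4((0, 0), (2, 1)) == 3, while d8((0, 0), (2, 1)) == 2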


Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis. For two 2×2 images with elements a_ij and b_ij:

Array product:
[ a11 a12 ] [ b11 b12 ]   [ a11·b11  a12·b12 ]
[ a21 a22 ] [ b21 b22 ] = [ a21·b21  a22·b22 ]

Matrix product:
[ a11 a12 ] [ b11 b12 ]   [ a11·b11 + a12·b21   a11·b12 + a12·b22 ]
[ a21 a22 ] [ b21 b22 ] = [ a21·b11 + a22·b21   a21·b12 + a22·b22 ]

We assume array operations unless stated otherwise.
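In NumPy, for instance, * is the array (element-wise) product and @ the matrix product:

import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])

print(a * b)  # array product:  [[ 5 12] [21 32]]
print(a @ b)  # matrix product: [[19 22] [43 50]]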


Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether a method is linear or nonlinear. Let H be an operator with

H[f(x, y)] = g(x, y).

H is said to be a linear operator if

H[a·f1(x, y) + b·f2(x, y)] = a·H[f1(x, y)] + b·H[f2(x, y)]

for all scalars a, b and all images f1, f2.

Example of a nonlinear operator: the maximum value of the pixels of an image f, H[f] = max{f(x, y)}.

Take

f1 = [ 0 2 ]    f2 = [ 6 5 ]    a = 1, b = -1.
     [ 2 3 ]         [ 4 7 ]

Then

max{ 1·f1 + (-1)·f2 } = max{ [ -6 -3 ] } = -2,
                             [ -2 -4 ]

while

1·max{f1} + (-1)·max{f2} = 3 - 7 = -4 ≠ -2,

so the max operator is nonlinear.
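The same check in NumPy (array values taken from the example above):

import numpy as np

f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 5], [4, 7]])
a, b = 1, -1

lhs = np.max(a * f1 + b * f2)          # -2
rhs = a * np.max(f1) + b * np.max(f2)  # 3 - 7 = -4
print(lhs, rhs, lhs == rhs)            # -2 -4 False -> max is nonlinear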

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise:

g(x, y) = f(x, y) + η(x, y),

where f(x, y) is the noiseless image and the noise η(x, y) is uncorrelated and has zero average value.

For a random variable z with mean m, E[(z - m)²] is the variance (E(·) is the expected value). The covariance of two random variables z1 and z2 is defined as E[(z1 - m1)(z2 - m2)]. The two random variables are uncorrelated when their covariance is 0.


Objective: reduce noise by averaging a set of K noisy images g_i(x, y) (a technique frequently used in image enhancement):

ḡ(x, y) = (1/K) Σ_{i=1..K} g_i(x, y).

If the noise satisfies the properties stated above, we have

E{ḡ(x, y)} = f(x, y)  and  σ²_ḡ(x, y) = (1/K) σ²_η(x, y),

where E{ḡ(x, y)} is the expected value of ḡ, and σ²_ḡ(x, y) and σ²_η(x, y) are the variances of ḡ and η, respectively. The standard deviation (square root of the variance) at any point in the average image is

σ_ḡ(x, y) = (1/√K) σ_η(x, y).


As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because E{ḡ(x, y)} = f(x, y), this means that ḡ(x, y) approaches f(x, y) as the number of noisy images used in the averaging process increases.

An important application of image averaging is in the field of astronomy, where imaging under very low light levels frequently causes sensor noise to render single images virtually useless for analysis. Figure 3(a) shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels. Figures 3(b)-(f) show the results of averaging 5, 10, 20, 50 and 100 noisy images, respectively.


Fig 3 – (a)-(f): Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (upper left); results of averaging 5, 10, 20, 50, and 100 noisy images.
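A minimal simulation of this effect (NumPy; the constant "clean" image and the noise level are placeholders of our choosing):

import numpy as np

rng = np.random.default_rng(0)
f = np.full((256, 256), 100.0)          # placeholder noiseless image

for K in (1, 5, 10, 20, 50, 100):
    noisy = f + rng.normal(0.0, 64.0, size=(K,) + f.shape)  # K noisy copies
    g_bar = noisy.mean(axis=0)
    print(K, g_bar.std())               # std shrinks roughly as 64 / sqrt(K)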


A frequent application of image subtraction is in the enhancement of differences between

images

Fig 4 – (a) Infrared image of the Washington, DC area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images.

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a) The two images seem almost the same Figure 4(c) is the difference between

images (a) and (b) Black (0) values in Figure (c) indicate locations where there is no

difference between images (a) and (b)
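A sketch of the same experiment (NumPy, 8-bit input assumed):

import numpy as np

def lsb_difference(img):
    b = img & 0xFE                             # zero the least significant bit
    diff = img.astype(int) - b.astype(int)     # 0 or 1 at each pixel
    return b, diff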


Mask mode radiography: g(x, y) = f(x, y) - h(x, y).

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images called live images (denoted f(x, y)) of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium.

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed

Fig 5 – Angiography (subtraction) example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c) enhanced.

An important application of image multiplication (and division) is shading correction.

Suppose that an imaging sensor produces images in the form g(x, y) = f(x, y) · h(x, y), where f(x, y) is the "perfect image" and h(x, y) is the shading function.

When the shading function is known, f(x, y) = g(x, y) / h(x, y).

When h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.
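A sketch of the idea (the Gaussian-blur estimate of the shading pattern is our assumption here, not the course's prescription):

import numpy as np
from scipy.ndimage import gaussian_filter

def shading_correct(g, sigma=50):
    # Estimate a slowly varying shading pattern from g itself and divide it out.
    h_est = gaussian_filter(g.astype(float), sigma)
    h_est = np.maximum(h_est, 1e-6)     # avoid division by zero
    f = g / h_est
    return f / f.max()                  # rescale to [0, 1]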


Fig 6 – Shading correction: (a) shaded image of a tungsten filament, magnified 130 times; (b) shading pattern; (c) corrected image.


Another use of image multiplication is in masking, also called region of interest (ROI), operations. The process consists of multiplying a given image by a mask image that has 1s (white) in the ROI and 0s elsewhere. There can be more than one ROI in the mask image, and the shape of the ROI can be arbitrary, though usually it is rectangular.

Fig 7 – (a) digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b).
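For a rectangular ROI this is a one-line multiplication (sketch; the image and coordinates are made up):

import numpy as np

img = np.random.randint(0, 256, (512, 512), dtype=np.uint16)  # stand-in image
mask = np.zeros_like(img)
mask[100:300, 200:400] = 1        # rectangular ROI of 1s, 0s elsewhere
roi = img * mask                  # keeps the ROI, zeroes the rest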


In practice most images are displayed using 8 bits, so the image values are in the range [0, 255] (for TIFF and JPEG images, conversion to this range is automatic; the conversion depends on the system used). The difference of two images can produce an image with values in the range [-255, 255], and the addition of two images gives the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure is to compute

f_m = f - min(f),

f_s = K · f_m / max(f_m),

which gives 0 ≤ f_s ≤ K (K = 255 for 8-bit images).
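As a sketch:

import numpy as np

def rescale(f, K=255):
    # Shift and scale an arbitrary-range image into [0, K].
    fm = f - f.min()
    return (K * fm / fm.max()).astype(np.uint8)   # assumes f is not constant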


Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the value of intensity of each individual pixel: s = T(z), where z is the intensity of a pixel in the original image and s is the intensity of the corresponding pixel in the processed image.


Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

( )

1( ) ( )xyr c S

g x y f r cm n

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image
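A direct (unoptimized) sketch of this neighborhood average:

import numpy as np

def mean_filter(f, m=3, n=3):
    # Replace each pixel by the average of its m x n neighborhood
    # (image borders handled by edge padding).
    pm, pn = m // 2, n // 2
    fp = np.pad(f.astype(float), ((pm, pm), (pn, pn)), mode="edge")
    M, N = f.shape
    g = np.zeros((M, N))
    for x in range(M):
        for y in range(N):
            g[x, y] = fp[x:x + m, y:y + n].mean()
    return g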


Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules).

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation: (x, y) = T[(v, w)]

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image


T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform:

[x y 1] = [v w 1] · T = [v w 1] · [ t11 t12 0
                                    t21 t22 0
                                    t31 t32 1 ],

i.e. x = t11·v + t21·w + t31 and y = t12·v + t22·w + t32.   (AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1


Table 1 – Affine transformations


The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping: scan the pixels of the input image (v, w) and compute the new spatial location (x, y) of the corresponding pixel in the new image using (AT) directly.

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)


inverse mapping: scans the output pixel locations and, at each location (x, y), computes the corresponding location in the input image as (v, w) = T⁻¹[(x, y)].

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)
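A sketch of inverse mapping for an affine warp, with nearest-neighbor sampling for brevity (names are ours; T_inv is the inverse of the 3×3 matrix T from (AT)):

import numpy as np

def affine_warp(src, T_inv, out_shape):
    # For each output pixel (x, y), find (v, w) = [x y 1] @ T_inv in the
    # input image and sample it (nearest neighbor).
    M, N = out_shape
    out = np.zeros(out_shape, dtype=src.dtype)
    for x in range(M):
        for y in range(N):
            v, w, _ = np.array([x, y, 1.0]) @ T_inv
            vi, wi = int(round(v)), int(round(w))
            if 0 <= vi < src.shape[0] and 0 <= wi < src.shape[1]:
                out[x, y] = src[vi, wi]
    return out

# e.g. for a pure scaling by 2, T = diag(2, 2, 1) and T_inv = diag(0.5, 0.5, 1)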


Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more images taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image


How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors. These objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points.

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points, both in the input image and in the reference image. A simple model based on a bilinear approximation is given by

x = c1·v + c2·w + c3·v·w + c4,
y = c5·v + c6·w + c7·v·w + c8,

where (v, w) and (x, y) are the coordinates of corresponding tie points (the 4 pairs of tie points yield an 8×8 linear system for the coefficients c_i).
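A sketch of estimating the eight coefficients (the 8×8 system decouples into two 4×4 systems, one for x and one for y; variable names are ours):

import numpy as np

def fit_bilinear(src_pts, dst_pts):
    # src_pts, dst_pts: 4x2 arrays of (v, w) and (x, y) tie points.
    # Returns c1..c8 of x = c1 v + c2 w + c3 v w + c4,
    #                   y = c5 v + c6 w + c7 v w + c8.
    v, w = src_pts[:, 0], src_pts[:, 1]
    A = np.column_stack([v, w, v * w, np.ones(4)])   # 4x4 design matrix
    cx, *_ = np.linalg.lstsq(A, dst_pts[:, 0], rcond=None)
    cy, *_ = np.linalg.lstsq(A, dst_pts[:, 1], rcond=None)
    return np.concatenate([cx, cy])                  # c1..c4, c5..c8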


When 4 tie points are insufficient to obtain satisfactory registration, an approach used frequently is to select a larger number of tie points and, using this new set of tie points, subdivide the image into rectangular regions marked by groups of 4 tie points. On each subregion marked by 4 tie points we apply the transformation model described above.

The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometric distortion.


(a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c)


Probabilistic Methods

Let z_i, i = 0, 1, …, L-1, denote the values of all possible intensities in an M×N digital image, and let p(z_k) be the probability that the intensity level z_k occurs in the given image:

p(z_k) = n_k / (M·N),

where n_k is the number of times that intensity z_k occurs in the image (M·N is the total number of pixels in the image). We have

Σ_{k=0..L-1} p(z_k) = 1.

The mean (average) intensity of an image is given by

m = Σ_{k=0..L-1} z_k · p(z_k).


The variance of the intensities is

σ² = Σ_{k=0..L-1} (z_k - m)² · p(z_k).

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

μ_n(z) = Σ_{k=0..L-1} (z_k - m)^n · p(z_k)

(note that μ_0(z) = 1, μ_1(z) = 0, and μ_2(z) = σ²).

μ_3(z) > 0: the intensities are biased to values higher than the mean;
μ_3(z) < 0: the intensities are biased to values lower than the mean;
μ_3(z) = 0: the intensities are distributed approximately equally on both sides of the mean.
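These quantities are easy to compute from the image histogram; a sketch (NumPy, 8-bit image assumed):

import numpy as np

def intensity_stats(img, L=256):
    # Mean, variance and third central moment of an image via its histogram.
    n_k = np.bincount(img.ravel(), minlength=L)
    p = n_k / img.size                  # p(z_k) = n_k / (M*N)
    z = np.arange(L)
    m = np.sum(z * p)                   # mean
    mu2 = np.sum((z - m) ** 2 * p)      # variance
    mu3 = np.sum((z - m) ** 3 * p)      # skew direction
    return m, mu2, mu3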

Fig 1 – (a) low contrast; (b) medium contrast; (c) high contrast.

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)


Intensity Transformations and Spatial Filtering

g(x, y) = T[f(x, y)],

where f(x, y) is the input image, g(x, y) is the output image, and T is an operator on f defined over a neighborhood of (x, y).

The neighborhood of the point (x, y), S_xy, usually is rectangular, centered on (x, y), and much smaller in size than the image.


- spatial filtering: the operator T (the neighborhood and the operation applied on it) is called a spatial filter (spatial mask, kernel, template, or window).

When S_xy has size 1×1 (a single pixel), T becomes an intensity (gray-level or mapping) transformation function

s = T(r),

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function


Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L-1] is obtained using the function s = T(r) = L - 1 - r.

- equivalent of a photographic negative
- technique suited for enhancing white or gray detail embedded in dark regions of an image
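Sketch:

import numpy as np

def negative(img):                       # 8-bit image, so L = 256
    return 255 - img.astype(np.uint8)    # s = L - 1 - r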

Original and negative image

Log Transformations: s = T(r) = c · log(1 + r), where c is a constant and r ≥ 0.

Some basic intensity transformation functions


This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values
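Sketch of applying it to an array with a large dynamic range (rescaling the result to 8 bits is our choice here):

import numpy as np

def log_transform(f, c=1.0):
    s = c * np.log1p(f.astype(float))     # s = c * log(1 + r)
    return np.uint8(255 * s / s.max())    # rescale result to [0, 255]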

Figure 4(a) – intensity values in the range 0 to 1.5 × 10^6.

Figure 4(b) – log transformation of Figure 4(a) with c = 1; range 0 to 6.2.

Fig 4 – (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.


Power-Law (Gamma) Transformations

s = T(r) = c · r^γ (sometimes written s = c · (r + ε)^γ), where c and γ are positive constants.

Plots of the gamma transformation for different values of γ (c = 1).


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. Curves with γ > 1 have the opposite effect of those generated with γ < 1.

c = γ = 1: identity transformation.

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction
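Sketch (8-bit input normalized to [0, 1] first):

import numpy as np

def gamma_transform(img, gamma, c=1.0):
    r = img.astype(float) / 255.0          # normalize to [0, 1]
    s = c * np.power(r, gamma)             # s = c * r**gamma
    return np.uint8(255 * np.clip(s, 0, 1))

# gamma < 1 brightens dark regions; gamma > 1 darkens (cf. the aerial image)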


(a) aerial image; (b)-(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively


Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

Fig 5


s = T(r) =
  (s1 / r1) · r,                                  r ∈ [0, r1]
  s1 + ((s2 - s1) / (r2 - r1)) · (r - r1),        r ∈ [r1, r2]
  s2 + ((L - 1 - s2) / (L - 1 - r2)) · (r - r2),  r ∈ [r2, L-1]


r1 = s1 and r2 = s2: identity transformation (no change);
r1 = r2, s1 = 0, s2 = L-1: thresholding function.

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) – contrast stretching obtained by setting (r1, s1) = (r_min, 0) and (r2, s2) = (r_max, L-1), where r_min and r_max denote the minimum and maximum gray levels in the image, respectively. The transformation function stretched the levels linearly from their original range to the full range [0, L-1].

Figure 5(d) – the thresholding function was used, with (r1, s1) = (m, 0) and (r2, s2) = (m, L-1), where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times
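A sketch of the full-range linear stretch of Figure 5(c):

import numpy as np

def contrast_stretch(img, L=256):
    r_min, r_max = img.min(), img.max()     # assumes r_max > r_min
    s = (img.astype(float) - r_min) * (L - 1) / (r_max - r_min)
    return s.astype(np.uint8)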


Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing:

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities;

2. brighten (or darken) the desired range of intensities but leave unchanged all other intensities in the image.
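A sketch of both variants (the range [A, B] is a parameter):

import numpy as np

def slice_binary(img, A, B, L=256):
    # Approach 1: white inside [A, B], black elsewhere.
    return np.where((img >= A) & (img <= B), L - 1, 0).astype(np.uint8)

def slice_preserve(img, A, B, value=255):
    # Approach 2: highlight [A, B], leave other intensities unchanged.
    out = img.copy()
    out[(img >= A) & (img <= B)] = value
    return out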


Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities


which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example).

In Figure 6 (right) the second technique was used: a band of intensities in the mid-gray range around the mean intensity was set to black; the other intensities remain unchanged.

Fig 6 - Aortic angiogram and intensity sliced versions


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2.

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)


The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression
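A sketch of extracting all eight bit planes (NumPy, 8-bit image assumed):

import numpy as np

def bit_planes(img):
    # Return the 8 binary bit planes of an 8-bit image,
    # plane 1 (lowest order bit) first, plane 8 (highest order bit) last.
    return [(img >> k) & 1 for k in range(8)]

# plane 8 equals thresholding at 128: ((img >> 7) & 1) == (img >= 128)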

  • DIP 1 2017
  • DIP 02 (2017)
Page 31: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Microwave Band

The dominant aplication of imaging in the microwave band ndash radar

bull radar has the ability to collect data over virtually any region at any time regardless of weather or ambient light conditions

bull some radar waves can penetrate clouds under certain conditions canpenetrate vegetation ice dry sand

bull sometimes radar is the only way to explore inaccessible regions of theEarthrsquos surface

An imaging radar works like a flash camera it provides its own illumination(microwave pulses) to light an area on the ground and take a snapshot image Instead of a camera lens a radar uses an antenna and a digital device torecord the images In a radar image one can see only the microwave energythat was reflected back toward the radar antenna

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Spaceborne radar image of mountains in southeast Tibet

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Imaging in the Radio Band

medicine astronomy

MRI = Magnetic Resonance Imaging

This technique places the patient in a powerful magnet and passes short pulses of radio waves through his or her body

Each pulse causes a responding pulse of radio waves to be emited by the patinet tissues

The location from which these signals originate and their strength are determined by a computer which produces a 2D picture of a section of the patient

Digital Image ProcessingDigital Image Processing

Week 1Week 1

MRI images of a human knee (left) and spine (right)

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Images of the Crab Pulsar covering the electromagnetic spectrum

Gamma X-ray Optical Infrared Radio

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Other Imaging Modalities

acoustic imaging electron microscopy synthetic (computer-generated) imaging

Imaging using sound geological explorations industry medicine

Mineral and oil exploration

For image acquisition over land one of the main approaches is to use a large truck an a large flat steel plate The plate is pressed on the ground by the truck and the truck is vibratedthrough a frequency spectrum up to 100 Hz The strength and the speed of the returning sound waves are determined by the composition of the Earth below the surface These are analysed by a computer and images are generated from the resulting analysis

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Biometry - iris

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Biometry - fingerprint

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Face detection and recognition

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Gender identification

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image morphing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Fundamental Steps in DIP

methods whose input and output are images

methods whose inputs are images but whose outputs are attributes extracted from those images

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Outputs are images

bull image acquisition

bull image filtering and enhancement

bull image restoration

bull color image processing

bull wavelets and multiresolution processing

bull compression

bull morphological processing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Outputs are attributes

bull morphological processing

bull segmentation

bull representation and description

bull object recognition

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image acquisition - may involve preprocessing such as scaling

Image enhancement

bull manipulating an image so that the result is more suitable than the original for a specific operation

bull enhancement is problem oriented

bull there is no general sbquotheoryrsquo of image enhancement

bull enhancement use subjective methods for image emprovement

bull enhancement is based on human subjective preferences regarding what is a bdquogoodrdquo enhancement result

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image restoration

bull improving the appearance of an image

bull restoration is objective - the techniques for restoration are based on mathematical or probabilistic models of image degradation

Color image processing

bull fundamental concept in color models

bull basic color processing in a digital domain

Wavelets and multiresolution processing

representing images in various degree of resolution

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Compression

reducing the storage required to save an image or the bandwidth required to transmit it

Morphological processing

bull tools for extracting image components that are useful in the representation and description of shape

bull a transition from processes that output images to processes that outputimage attributes

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Segmentation

bull partitioning an image into its constituents parts or objects

bull autonomous segmentation is one of the most difficult tasks of DIP

bull the more accurate the segmentation the more likley recognition is to succeed

Representation and description (almost always follows segmentation)

bull segmentation produces either the boundary of a region or all the poits in the region itself

bull converting the data produced by segmentation to a form suitable for computer processing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

bull boundary representation the focus is on external shape characteristics such as corners or inflections

bull complete region the focus is on internal properties such as texture or skeletal shape

bull description is also called feature extraction ndash extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another

Object recognition

the process of assigning a label (eg bdquovehiclerdquo) to an object based on itsdescriptors

Knowledge database

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Simplified diagramof a cross sectionof the human eye

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Three membranes enclose the eye: the cornea and sclera (outer cover), the choroid, and the retina.

The cornea is a tough, transparent tissue that covers the anterior surface of the eye.

Continuous with the cornea, the sclera is an opaque membrane that encloses the remainder of the optic globe.

The choroid lies directly below the sclera. This membrane contains a network of blood vessels (the major source of nutrition of the eye). The choroid is pigmented and helps reduce the amount of light entering the eye. The choroid is divided (at its anterior extreme) into the ciliary body and the iris. The iris contracts and expands to control the amount of light.

The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to the ciliary body (60-70% water, about 6% fat, the rest mostly protein). The lens is colored slightly yellow. The lens absorbs approximately 8% of the visible light (infrared and ultraviolet light are absorbed by proteins in the lens).

The innermost membrane is the retina. When the eye is properly focused, light from an object outside the eye is imaged on the retina. Vision is possible because of the distribution of discrete light receptors on the surface of the retina: cones and rods (6-7 million cones, 75-150 million rods).

Cones: located in the central part of the retina (the fovea); sensitive to colors; provide vision of detail; each cone is linked to its own nerve; cone vision = photopic or bright-light vision.

Fovea = the place where the image of the object of interest falls.


Rods: distributed over all the retina surface; several rods are connected to a single nerve; not specialized in detail vision; serve to give a general, overall picture of the field of view; not involved in color vision; sensitive to low levels of illumination.

Blind spot: region without receptors.


Image formation in the eye

Ordinary photographic camera: the lens has a fixed focal length; focusing at various distances is done by modifying the distance between the lens and the image plane (where the film or imaging chip is located).

Human eye: the distance between the lens and the retina (the imaging region) is fixed; the focal length needed to achieve proper focus is obtained by varying the shape of the lens (the fibers in the ciliary body accomplish this, flattening or thickening the lens for distant or near objects, respectively).

distance between lens and retina along the visual axis = 17 mm

range of focal lengths = 14 mm to 17 mm

Illustration of the Mach band effect. Perceived intensity is not a simple function of actual intensity.


All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter


Optical illusions


ALBASTRU (BLUE), VERDE (GREEN), GALBEN (YELLOW), ROSU (RED), PORTOCALIU (ORANGE), GALBEN (YELLOW), VISINIU (BURGUNDY), ALB (WHITE)


Light

• achromatic or monochromatic light – light that is void of color; the attribute of such light is its intensity, or amount. The term gray level is used to describe monochromatic intensity, because it ranges from black, to grays, and finally to white.

• chromatic light spans the electromagnetic energy spectrum from approximately 0.43 to 0.79 μm.

Quantities that describe the quality of a chromatic light source:

• radiance: the total amount of energy that flows from the light source; it is usually measured in watts (W).

• luminance: measured in lumens (lm); gives a measure of the amount of energy an observer perceives from a light source.

For example, light emitted from a source operating in the far infrared region of the spectrum could have significant energy (radiance), but an observer would hardly perceive it; its luminance would be almost zero.

• brightness: a subjective descriptor of light perception that is practically impossible to measure. It embodies the achromatic notion of intensity and is one of the key factors in describing color sensation.


The physical meaning of an image f(x, y) is determined by the source of the image.

For an image generated from a physical process, f(x, y) is proportional to the energy radiated by the physical source, so f(x, y) is nonzero and finite:

0 < f(x, y) < ∞

f(x, y) is characterized by two components:

i(x, y) = the illumination component: the amount of source illumination incident on the scene being viewed

r(x, y) = the reflectance component: the amount of illumination reflected by the objects in the scene

f(x, y) = i(x, y) r(x, y), where 0 < i(x, y) < ∞ and 0 < r(x, y) < 1

r(x, y) = 0 – total absorption; r(x, y) = 1 – total reflectance

i(x, y) – determined by the illumination source

r(x, y) – determined by the characteristics of the imaged objects

In practice:

L_min ≤ l = f(x, y) ≤ L_max, where L_min = i_min r_min and L_max = i_max r_max

(typical indoor values, without additional illumination: L_min ≈ 10, L_max ≈ 1000)

The interval [L_min, L_max] is called the gray (or intensity) scale. In practice, the interval is shifted to [0, L-1], where l = 0 is considered black and l = L-1 is considered white on the scale.


Image Sampling and Quantization

The output of most sensors is a continuous voltage waveform related to the sensed scene. Converting a continuous image f to digital form requires:

- digitizing the coordinate values (x, y) – called sampling
- digitizing the amplitude values f(x, y) – called quantization

Continuous image projected onto a sensor array; result of image sampling and quantization.


Representing Digital Images

(x, y), x = 0, 1, …, M-1, y = 0, 1, …, N-1 – spatial variables or spatial coordinates

A digital image with M rows and N columns can be written as the matrix

f(x, y) = [ f(0,0)     f(0,1)     …   f(0,N-1)
            f(1,0)     f(1,1)     …   f(1,N-1)
            …
            f(M-1,0)   f(M-1,1)   …   f(M-1,N-1) ]

Each element of this matrix is called an image element, or pixel. In matrix notation, A = [a_ij], with a_ij = f(x=i, y=j) = f(i, j), for i = 0, 1, …, M-1 and j = 0, 1, …, N-1.

f(0,0) – the upper left corner of the image

M, N > 0; the number of intensity levels is typically a power of two, L = 2^k, so that a_ij ∈ [0, L-1].

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system. The upper limit is determined by saturation, the lower limit by noise.


Number of bits required to store a digitized image: b = M × N × k; for M = N, b = N²k.

When an image can have 2^k intensity levels, the image is referred to as a k-bit image; an image with 256 discrete intensity values is an 8-bit image.

Spatial and Intensity Resolution

Spatial resolution – the smallest discernible detail in an image.

Measures: line pairs per unit distance; dots (pixels) per unit distance.

Image resolution = the largest number of discernible line pairs per unit distance (e.g. 100 line pairs per mm).

Dots per unit distance is a measure commonly used in printing and publishing. In the US this measure is expressed in dots per inch (dpi); newspapers are printed at 75 dpi, glossy brochures at 175 dpi.

Intensity resolution – the smallest discernible change in intensity level.

The number of intensity levels (L) is determined by hardware considerations; most commonly L = 2^k with k = 8.

Intensity resolution in practice is given by k (the number of bits used to quantize intensity).


Fig. 1 Reducing spatial resolution: 1250 dpi (upper left), 300 dpi (upper right), 150 dpi (lower left), 72 dpi (lower right)

Reducing the number of gray levels: 256, 128, 64, 32

Reducing the number of gray levels: 16, 8, 4, 2


Image Interpolation – used in zooming, shrinking, rotating, and geometric corrections.

Shrinking and zooming (image resizing) are image resampling methods.

Interpolation is the process of using known data to estimate values at unknown locations.

Suppose we have an image of size 500 × 500 pixels that has to be enlarged 1.5 times, to 750 × 750 pixels. One way to do this is to create an imaginary 750 × 750 grid with the same spacing as the original, and then shrink it so that it fits exactly over the original image. The pixel spacing in the 750 × 750 grid will be less than in the original image.

Problem: assignment of intensity levels in the new 750 × 750 grid.

Nearest neighbor interpolation: assign to every point in the new grid (750 × 750) the intensity of the closest pixel (nearest neighbor) from the old/original grid (500 × 500). This technique has the tendency to produce undesirable effects, like severe distortion of straight edges.


Bilinear interpolation – assign to the new (x, y) location the intensity

v(x, y) = a x + b y + c x y + d

where the four coefficients are determined from the 4 equations in 4 unknowns that can be written using the 4 nearest neighbors of point (x, y). Bilinear interpolation gives much better results than nearest neighbor interpolation, with a modest increase in computational effort.

Bicubic interpolation – assign to the new (x, y) location an intensity that involves the 16 nearest neighbors of the point:

v(x, y) = Σ_{i=0}^{3} Σ_{j=0}^{3} c_ij x^i y^j

The 16 coefficients c_ij are obtained by solving a 16 × 16 linear system written using the intensity levels of the 16 nearest neighbors of (x, y).
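
As an illustration, here is a minimal Python/NumPy sketch of zooming by nearest-neighbor and by bilinear interpolation. The function names and the small test image are made up for this example, and border handling is deliberately simplified; this is a sketch of the idea, not a production resampler.

import numpy as np

def zoom_nearest(img, new_h, new_w):
    # Nearest neighbor: each output pixel copies the closest input pixel.
    h, w = img.shape
    rows = (np.arange(new_h) * h / new_h).astype(int)   # output row -> input row
    cols = (np.arange(new_w) * w / new_w).astype(int)
    return img[rows[:, None], cols[None, :]]

def zoom_bilinear(img, new_h, new_w):
    # Bilinear: v(x, y) = a*x + b*y + c*x*y + d over the 4 nearest neighbors.
    h, w = img.shape
    y = np.arange(new_h) * (h - 1) / (new_h - 1)        # real-valued input coordinates
    x = np.arange(new_w) * (w - 1) / (new_w - 1)
    y0 = np.floor(y).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(x).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    dy = (y - y0)[:, None]; dx = (x - x0)[None, :]
    f00 = img[y0[:, None], x0[None, :]]; f01 = img[y0[:, None], x1[None, :]]
    f10 = img[y1[:, None], x0[None, :]]; f11 = img[y1[:, None], x1[None, :]]
    return ((1 - dy) * (1 - dx) * f00 + (1 - dy) * dx * f01
            + dy * (1 - dx) * f10 + dy * dx * f11)

small = np.arange(16, dtype=float).reshape(4, 4)
print(zoom_nearest(small, 6, 6))   # blocky: values are repeated
print(zoom_bilinear(small, 6, 6))  # smooth: values are blended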


Generally, bicubic interpolation does a better job of preserving fine detail than the bilinear technique. Bicubic interpolation is the standard used in commercial image editing programs, such as Adobe Photoshop and Corel PhotoPaint.

Figure 2(a) is the same as Fig. 1(d), which was obtained by reducing the resolution of the 1250 dpi image in Fig. 1(a) to 72 dpi (the size shrank from 3692 × 2812 to 213 × 162 pixels) and then zooming the reduced image back to its original size. To generate Fig. 1(d), nearest neighbor interpolation was used (both for shrinking and zooming).

Figures 2(b) and (c) were generated using the same steps, but with bilinear and bicubic interpolation, respectively. Figures 2(d), (e), (f) were obtained by reducing the resolution from 1250 dpi to 150 dpi (instead of 72 dpi).

Fig. 2 – Interpolation examples for zooming and shrinking (nearest neighbor, bilinear, bicubic)


Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors:

(x+1, y), (x-1, y), (x, y+1), (x, y-1)

This set of pixels, called the 4-neighbors of p, is denoted by N4(p).

The 4 diagonal neighbors of p have coordinates

(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)

and are denoted ND(p).

The horizontal, vertical, and diagonal neighbors together are called the 8-neighbors of p, denoted N8(p).

If (x, y) is on the border of the image, some of the neighbor locations in ND(p) and N8(p) fall outside the image.

Adjacency, Connectivity, Regions, Boundaries

Denote by V the set of intensity levels used to define adjacency:

- in a binary image, V ⊆ {0, 1} (V = {0} or V = {1});
- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, …, 255}.

We consider 3 types of adjacency:

(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if q ∈ N4(p)

(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if q ∈ N8(p)

(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if

q ∈ N4(p), or

q ∈ ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.

Mixed adjacency is a modification of 8-adjacency. It is introduced to eliminate the ambiguities that often arise when 8-adjacency is used. Consider the example:


binary image, V = {1}:

0 1 1
0 1 0
0 0 1

The three pixels at the top (first line) in the above example show multiple (ambiguous) 8-adjacency, as indicated by the dashed lines in the original figure. This ambiguity is removed by using m-adjacency.


A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

(x_0, y_0), (x_1, y_1), …, (x_n, y_n)

where (x_0, y_0) = (x, y), (x_n, y_n) = (s, t), and the pixels (x_i, y_i) and (x_{i-1}, y_{i-1}) are adjacent for i = 1, 2, …, n.

The length of the path is n. If (x_0, y_0) = (x_n, y_n), the path is closed.

Depending on the type of adjacency considered, the paths are 4-, 8-, or m-paths.

Let S denote a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting only of pixels from S.

S is a connected set if there is a path in S between any 2 pixels in S.

Let R be a subset of pixels in an image. R is a region of the image if R is a connected set.

Two regions R1 and R2 are said to be adjacent if R1 ∪ R2 forms a connected set. Regions that are not adjacent are said to be disjoint. When referring to regions, only 4- and 8-adjacency are considered.


Suppose that an image contains K disjoint regions R_k, k = 1, …, K, none of which touches the image border.

Let R_u denote the union of all K regions, R_u = ∪_{k=1}^{K} R_k, and let (R_u)^c denote its complement.

We call all the points in R_u the foreground of the image, and the points in (R_u)^c the background of the image.

The boundary (border or contour) of a region R is the set of points that are adjacent to points in the complement of R, (R)^c. The border of an image is the set of pixels in the region that have at least one background neighbor. This definition is referred to as the inner border, to distinguish it from the notion of outer border, which is the corresponding border in the background.


Distance measures

For pixels p, q, and z, with coordinates (x, y), (s, t), and (v, w), respectively, D is a distance function, or metric, if

(a) D(p, q) ≥ 0, and D(p, q) = 0 iff p = q
(b) D(p, q) = D(q, p)
(c) D(p, z) ≤ D(p, q) + D(q, z)

The Euclidean distance between p and q is defined as

D_e(p, q) = [(x - s)² + (y - t)²]^(1/2)

The pixels q for which D_e(p, q) ≤ r are the points contained in a disk of radius r centered at (x, y).


The D4 distance (also called city-block distance) between p and q is defined as

D4(p, q) = |x - s| + |y - t|

The pixels q for which D4(p, q) ≤ r form a diamond centered at (x, y). For example, the pixels with D4 ≤ 2 are:

        2
    2   1   2
2   1   0   1   2
    2   1   2
        2

The pixels with D4 = 1 are the 4-neighbors of (x, y).

The D8 distance (called the chessboard distance) between p and q is defined as

D8(p, q) = max(|x - s|, |y - t|)

The pixels q for which D8(p, q) ≤ r form a square centered at (x, y). For example, the pixels with D8 ≤ 2 are:

2   2   2   2   2
2   1   1   1   2
2   1   0   1   2
2   1   1   1   2
2   2   2   2   2

The pixels with D8 = 1 are the 8-neighbors of (x, y).

The D4 and D8 distances are independent of any paths that might exist between p and q, because these distances involve only the coordinates of the points.
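
A short Python sketch of the three distances (the function names are made up for this example); the printed grid reproduces the D4 diamond shown above:

import numpy as np

def d_e(p, q):  # Euclidean distance
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def d_4(p, q):  # city-block distance
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d_8(p, q):  # chessboard distance
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

center = (2, 2)
grid = np.array([[d_4((i, j), center) for j in range(5)] for i in range(5)])
print(grid)   # D4 values: a diamond of 1's and 2's around the center 0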


Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis. For two 2 × 2 images

A = [ a11  a12 ]   and   B = [ b11  b12 ]
    [ a21  a22 ]             [ b21  b22 ]

the array product is

A .* B = [ a11 b11   a12 b12 ]
         [ a21 b21   a22 b22 ]

while the matrix product is

A B = [ a11 b11 + a12 b21   a11 b12 + a12 b22 ]
      [ a21 b11 + a22 b21   a21 b12 + a22 b22 ]

We assume array operations unless stated otherwise.
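
In NumPy the distinction is the * operator (array/elementwise product) versus the @ operator (matrix product); a two-line check:

import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])

print(a * b)   # array product: [[ 5, 12], [21, 32]]
print(a @ b)   # matrix product: [[19, 22], [43, 50]]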


Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether the method is linear or nonlinear. Consider an operator H that produces an output image g from an input image f:

H[f(x, y)] = g(x, y)

H is said to be a linear operator if

H[a f1(x, y) + b f2(x, y)] = a H[f1(x, y)] + b H[f2(x, y)]

for any scalars a, b and any images f1, f2.

Example of a nonlinear operator: the maximum value of the pixels of an image, H[f] = max{f(x, y)}.

Take

f1 = [ 0  2 ]    f2 = [ 6  5 ]    a = 1,  b = -1
     [ 2  3 ]         [ 4  7 ]

Then

max(a f1 + b f2) = max [ -6  -3 ] = -2
                       [ -2  -4 ]

while

a max(f1) + b max(f2) = 1 · 3 + (-1) · 7 = -4 ≠ -2

so the max operator is nonlinear.
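
The same check in NumPy (a sketch reproducing the worked example above):

import numpy as np

f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 5], [4, 7]])
a, b = 1, -1

H = np.max                      # the "max over all pixels" operator
print(H(a * f1 + b * f2))       # -2
print(a * H(f1) + b * H(f2))    # 3 - 7 = -4: different, so H is nonlinear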

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise to a noiseless image f(x, y):

g(x, y) = f(x, y) + η(x, y)

where the noise η(x, y) is uncorrelated and has zero average value.

For a random variable z with mean m, E[(z - m)²] is the variance (E(·) is the expected value). The covariance of two random variables z1 and z2 is defined as E[(z1 - m1)(z2 - m2)]. The two random variables are uncorrelated when their covariance is 0.

Objective: reduce the noise by averaging a set of noisy images g_i(x, y) (a technique frequently used in image enhancement):

ḡ(x, y) = (1/K) Σ_{i=1}^{K} g_i(x, y)

If the noise satisfies the properties stated above, we have

E{ḡ(x, y)} = f(x, y)   and   σ²_ḡ(x, y) = (1/K) σ²_η(x, y)

where E{ḡ(x, y)} is the expected value of ḡ, and σ²_ḡ(x, y) and σ²_η(x, y) are the variances of ḡ and η, respectively. The standard deviation (square root of the variance) at any point of the average image is

σ_ḡ(x, y) = (1/√K) σ_η(x, y)


As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because E{ḡ(x, y)} = f(x, y), this means that ḡ(x, y) approaches f(x, y) as the number of noisy images used in the averaging process increases.
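
A minimal simulation of this 1/√K behavior, assuming a synthetic flat test image (the image and parameters here are made up for illustration):

import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 100.0)                  # flat "noiseless" test image

for K in (1, 5, 10, 50, 100):
    noisy = f + rng.normal(0.0, 64.0, size=(K, 64, 64))  # K noisy observations
    g_bar = noisy.mean(axis=0)                           # the average image
    print(K, g_bar.std())       # measured sigma is roughly 64 / sqrt(K)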

An important application of image averaging is in the field of astronomy, where imaging under very low light levels frequently causes sensor noise to render single images virtually useless for analysis. Figure 3(a) shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels. Figures 3(b)-(f) show the results of averaging 5, 10, 20, 50, and 100 such images, respectively.


Fig. 3 (a) Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise; (b)-(f) results of averaging 5, 10, 20, 50, and 100 noisy images.


A frequent application of image subtraction is in the enhancement of differences between images.

Fig. 4 (a) Infrared image of the Washington, D.C. area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images.

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between images (a) and (b). Black (0) values in Figure 4(c) indicate locations where there is no difference between images (a) and (b).
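
The LSB-zeroing step and the difference image are two lines of NumPy; the random image here is a stand-in for Fig. 4(a):

import numpy as np

rng = np.random.default_rng(1)
a = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)  # stand-in for Fig. 4(a)
b = a & 0xFE                   # set the least-significant bit of every pixel to 0
diff = a.astype(int) - b.astype(int)
print(diff)                    # 0 where the LSB was already 0, 1 elsewhere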


Mask mode radiography:

g(x, y) = f(x, y) - h(x, y)

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images, called live images (denoted f(x, y)), of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium.

In g(x, y) we find the differences between h and f, as enhanced detail.

With images being captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries in the area being observed.

Fig. 5 – Angiography – subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c), enhanced.


An important application of image multiplication (and division) is shading correction.

Suppose that an imaging sensor produces images in the form

g(x, y) = f(x, y) h(x, y)

where f(x, y) is the "perfect image" and h(x, y) is the shading function. When the shading function is known, the perfect image is recovered by division:

f(x, y) = g(x, y) / h(x, y)

If h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.
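
A toy sketch of the correction, with an assumed (made-up) shading pattern that brightens toward the right:

import numpy as np

yy, xx = np.mgrid[0:64, 0:64]
h = 0.5 + 0.5 * (xx / 63.0)          # assumed shading function
f = np.full((64, 64), 120.0)         # the "perfect" image
g = f * h                            # what the sensor delivers
f_hat = g / h                        # correction by division (h known or estimated)
print(np.allclose(f_hat, f))         # True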


Fig. 6 Shading correction: (a) shaded image of a tungsten filament, magnified approximately 130 times; (b) shading pattern; (c) corrected image.


Another use of image multiplication is in masking, also called region of interest (ROI) operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of the ROI can be arbitrary, although usually it is rectangular.

Fig. 7 (a) Digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b).
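
ROI masking is a single elementwise product; a minimal sketch with a made-up rectangular ROI:

import numpy as np

img = np.arange(36, dtype=float).reshape(6, 6)
mask = np.zeros_like(img)
mask[1:4, 2:5] = 1.0                 # rectangular ROI of 1's, 0's elsewhere
print(img * mask)                    # keeps the ROI, zeroes everything else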


In practice, most images are displayed using 8 bits, so the image values are expected to be in the range [0, 255] (for TIFF and JPEG images the conversion to this range is automatic; the conversion depends on the system used). The difference of two images can produce values in the range [-255, 255], and the addition of two images values in the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure is to compute

f_m = f - min(f)

f_s = K [f_m / max(f_m)]

which yields values in the full range 0 ≤ f_s ≤ K (K = 255 for an 8-bit display).
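
The same rescaling as a small helper (a sketch; it assumes the input is not a constant image, otherwise max(f_m) is zero):

import numpy as np

def rescale(f, K=255):
    # Shift to zero minimum, then scale so the maximum becomes K.
    fm = f - f.min()
    return K * fm / fm.max()

diff = np.array([[-255.0, 0.0], [128.0, 255.0]])   # e.g. a difference image
print(rescale(diff))                               # values now span [0, 255]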


Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the value of intensity of each individual pixel: s = T(z), where z is the intensity of a pixel in the original image and s is the intensity of the corresponding pixel in the processed image.


Neighborhood operations

Let S_xy denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y), based on the values of the intensities of the points in S_xy. For example, if S_xy is a rectangular neighborhood of size m × n centered on (x, y), we can assign the new value of intensity by computing the average value of the pixels in S_xy:

g(x, y) = (1 / mn) Σ_{(r,c) ∈ S_xy} f(r, c)

The net effect is to perform local blurring in the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
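
A compact sketch of this local averaging (zero padding at the borders is an assumption of this example, not part of the definition above):

import numpy as np

def local_mean(f, m, n):
    # Average over an m x n neighborhood, implemented with shifted sums.
    pad_y, pad_x = m // 2, n // 2
    fp = np.pad(f, ((pad_y, pad_y), (pad_x, pad_x)))
    out = np.zeros(f.shape, dtype=float)
    for dy in range(m):
        for dx in range(n):
            out += fp[dy:dy + f.shape[0], dx:dx + f.shape[1]]
    return out / (m * n)

img = np.zeros((7, 7)); img[3, 3] = 9.0
print(local_mean(img, 3, 3))   # the single bright pixel is blurred into a 3x3 blob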


Geometric spatial transformations and image registration

Geometric transformations modify the spatial relationship between pixels in an image. These transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules).

A geometric transformation consists of 2 basic operations:

1. a spatial transformation of coordinates;
2. intensity interpolation, which assigns intensity values to the spatially transformed pixels.

The coordinate system transformation:

(x, y) = T[(v, w)]

(v, w) – pixel coordinates in the original image
(x, y) – pixel coordinates in the transformed image


For example, T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform:

[x y 1] = [v w 1] T = [v w 1] [ t11  t12  0
                                t21  t22  0
                                t31  t32  1 ]

that is, x = t11 v + t21 w + t31 and y = t12 v + t22 w + t32.   (AT)

This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3 × 3 matrix equal to the matrix product of the scaling, rotation, and translation matrices from Table 1.
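
A sketch of composing such matrices and applying (AT) to a few points (the specific scale, angle, and shift values are made up; the row-vector convention matches the equation above):

import numpy as np

def affine_points(points, T):
    # Apply [x y 1] = [v w 1] T to an array of (v, w) points.
    vw1 = np.hstack([points, np.ones((len(points), 1))])
    return (vw1 @ T)[:, :2]

theta = np.deg2rad(30)
scale = np.array([[0.5, 0, 0], [0, 0.5, 0], [0, 0, 1]])        # shrink by half
rot   = np.array([[ np.cos(theta), np.sin(theta), 0],
                  [-np.sin(theta), np.cos(theta), 0],
                  [0, 0, 1]])                                  # rotate by 30 degrees
shift = np.array([[1, 0, 0], [0, 1, 0], [10, 20, 1]])          # translate by (10, 20)

T = scale @ rot @ shift      # one combined matrix: scale, then rotate, then translate
pts = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
print(affine_points(pts, T))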


Table 1 – Affine transformations


The preceding transformations relocate pixels on an image to new locations. To complete the process, we have to assign intensity values to those locations. This task is done using intensity interpolation (nearest neighbor, bilinear, or bicubic interpolation).

In practice, we can use equation (AT) in two basic ways.

Forward mapping: scan the pixels of the input image (v, w) and compute the new spatial location (x, y) of the corresponding pixel in the new image, using (AT) directly.

Problems:

- intensity assignment when 2 or more pixels in the original image are transformed to the same location in the output image;
- some output locations may have no correspondent in the original image (no intensity assignment).


Inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image:

(v, w) = T^{-1}[(x, y)]

Then interpolate among the nearest input pixels to determine the intensity of the output pixel value.

Inverse mappings are more efficient to implement than forward mappings, and they are used in numerous commercial implementations of spatial transformations (in MATLAB, for example).


Image registration – aligning two or more images of the same scene.

In image registration, we have available the input and output images, but the specific transformation that produced the output image from the input is generally unknown. The problem is to estimate the transformation function and then use it to register the two images:

- it may be of interest to align (register) two or more images taken at approximately the same time, but using different imaging systems (e.g. an MRI scanner and a PET scanner);
- or to align images of a given location taken by the same instrument at different moments of time (e.g. satellite images).

One way of solving the problem is using tie points (also called control points), which are corresponding points whose locations are known precisely in the input and reference images.


How to select tie points:

- interactively selecting them;
- using algorithms that try to detect these points automatically;
- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors; these objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points.

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points, both in the input image and in the reference image. A simple model based on a bilinear approximation is given by

x = c1 v + c2 w + c3 v w + c4
y = c5 v + c6 w + c7 v w + c8

where (v, w) and (x, y) are the coordinates of the tie points (we get an 8 × 8 linear system for the coefficients c_i).
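
A sketch of fitting this model; the tie-point coordinates below are made up for illustration, and the 8 × 8 system is solved here as two independent 4 × 4 systems (the x and y equations decouple):

import numpy as np

vw = np.array([[0, 0], [0, 100], [100, 0], [100, 100]], dtype=float)  # input tie points
xy = np.array([[2, 3], [1, 105], [104, 2], [108, 110]], dtype=float)  # reference tie points

# Rows of A are (v, w, v*w, 1), matching x = c1 v + c2 w + c3 v w + c4.
A = np.column_stack([vw[:, 0], vw[:, 1], vw[:, 0] * vw[:, 1], np.ones(4)])
cx = np.linalg.solve(A, xy[:, 0])    # c1..c4
cy = np.linalg.solve(A, xy[:, 1])    # c5..c8

v, w = 50.0, 50.0                    # map an arbitrary input point
row = np.array([v, w, v * w, 1.0])
print(row @ cx, row @ cy)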


When 4 tie points are insufficient to obtain a satisfactory registration, an approach used frequently is to select a larger number of tie points and, using this new set of tie points, subdivide the image into rectangular regions marked by groups of 4 tie points. On each subregion marked by 4 tie points we apply the transformation model described above.

The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometric distortion.

(a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c)


Probabilistic Methods

Let z_i, i = 0, 1, …, L-1, denote the values of all possible intensities in an M × N digital image, and let p(z_k) denote the probability that the intensity level z_k occurs in the given image:

p(z_k) = n_k / (M N)

where n_k is the number of times that intensity z_k occurs in the image (M N is the total number of pixels in the image). Clearly,

Σ_{k=0}^{L-1} p(z_k) = 1

The mean (average) intensity of an image is given by

m = Σ_{k=0}^{L-1} z_k p(z_k)


The variance of the intensities is

σ² = Σ_{k=0}^{L-1} (z_k - m)² p(z_k)

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

μ_n(z) = Σ_{k=0}^{L-1} (z_k - m)^n p(z_k)

(note that μ_0(z) = 1, μ_1(z) = 0, and μ_2(z) = σ²)

μ_3(z) > 0: the intensities are biased to values higher than the mean;
μ_3(z) < 0: the intensities are biased to values lower than the mean;
μ_3(z) = 0: the intensities are distributed approximately equally on both sides of the mean.
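
These statistics fall directly out of the normalized histogram; a short sketch on a random test image:

import numpy as np

img = np.random.default_rng(2).integers(0, 256, size=(100, 100))
L = 256
nk = np.bincount(img.ravel(), minlength=L)   # histogram counts n_k
p = nk / img.size                            # p(z_k) = n_k / (M*N)
z = np.arange(L)

m = np.sum(z * p)                            # mean intensity
var = np.sum((z - m) ** 2 * p)               # variance (2nd central moment)
mu3 = np.sum((z - m) ** 3 * p)               # 3rd moment: its sign indicates the bias
print(m, var, np.sqrt(var), mu3)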

Fig. 1 (a) Low contrast; (b) medium contrast; (c) high contrast.

Figure 1(a) – standard deviation 14.3 (variance = 204.5)
Figure 1(b) – standard deviation 31.6 (variance = 998.6)
Figure 1(c) – standard deviation 49.2 (variance = 2420.6)


Intensity Transformations and Spatial Filtering

g(x, y) = T[f(x, y)]

f(x, y) – input image; g(x, y) – output image; T – an operator on f, defined over a neighborhood of (x, y).

The neighborhood of the point (x, y), denoted S_xy, usually is rectangular, centered on (x, y), and much smaller in size than the image.

Spatial filtering: the operator T (the neighborhood together with the operation applied on it) is called a spatial filter (also: spatial mask, kernel, template, or window).

When S_xy has size 1 × 1 (a single pixel), T becomes an intensity (gray-level or mapping) transformation function,

s = T(r)

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2, left: T produces an output image of higher contrast than the original, by darkening the intensity levels below k and brightening the levels above k – this technique is called contrast stretching.

Fig. 2 Intensity transformation functions: left – contrast stretching; right – thresholding function.


Figure 2, right: T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L-1] is obtained using the function

s = T(r) = L - 1 - r

- the equivalent of a photographic negative;
- a technique suited for enhancing white or gray detail embedded in dark regions of an image.
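
The negative is a one-line pixel transformation; a minimal sketch for an 8-bit image:

import numpy as np

def negative(img, L=256):
    return (L - 1) - img          # s = L - 1 - r

img = np.array([[0, 64], [128, 255]], dtype=np.uint8)
print(negative(img))              # [[255, 191], [127, 0]]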


Original image (left); negative image (right)

Log Transformations

s = T(r) = c log(1 + r), where c is a constant and r ≥ 0

Some basic intensity transformation functions


This transformation maps a narrow range of low intensity values in the input into a wider range of output values. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values. The opposite is true of the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a) – Fourier spectrum with intensity values in the range 0 to 1.5 × 10^6.
Figure 4(b) – log transformation of Figure 4(a), with c = 1 – values now in the range 0 to 6.2.

Fig. 4 (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.


Power-Law (Gamma) Transformations

s = T(r) = c r^γ, where c and γ are positive constants

Plots of gamma transformation for different values of γ (c=1)


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. Curves with γ > 1 have the opposite effect of those generated with values of γ < 1.

c = γ = 1 – the identity transformation.

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
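
A sketch of the log and gamma mappings (the helper names are made up; note that the 0-to-6.2 range quoted for Fig. 4(b) corresponds to a base-10 logarithm, while np.log1p below is the natural log):

import numpy as np

def log_transform(r, c=1.0):
    return c * np.log1p(r)                    # s = c log(1 + r)

def gamma_transform(r, gamma, c=1.0, L=256):
    rn = r / (L - 1)                          # normalize intensities to [0, 1]
    return (L - 1) * c * rn ** gamma          # s = c r^gamma, rescaled for display

spectrum = np.array([0.0, 10.0, 1.5e6])
print(log_transform(spectrum))                # compresses 0..1.5e6 into roughly 0..14

img = np.array([[0, 64], [128, 255]], dtype=float)
print(gamma_transform(img, 3.0))              # gamma > 1 darkens the image
print(gamma_transform(img, 0.4))              # gamma < 1 brightens dark regions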


(a) aerial image; (b)-(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively


Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image, so that it spans the full intensity range of the recording tool or display device.

Fig. 5 (a)-(d)


A typical contrast-stretching transformation is the piecewise-linear function through the control points (r1, s1) and (r2, s2):

          (s1 / r1) r,                                   r ∈ [0, r1)
T(r) =    s1 + ((s2 - s1) / (r2 - r1)) (r - r1),         r ∈ [r1, r2]
          s2 + ((L - 1 - s2) / (L - 1 - r2)) (r - r2),   r ∈ (r2, L-1]

Digital Image Processing

Week 1

r1 = s1 and r2 = s2: identity transformation (no change).

r1 = r2, s1 = 0, s2 = L-1: thresholding function.

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c): contrast stretching, obtained by setting the parameters (r1, s1) = (r_min, 0) and (r2, s2) = (r_max, L-1), where r_min and r_max denote the minimum and maximum gray levels in the image, respectively. Thus, the transformation function stretched the levels linearly from their original range to the full range [0, L-1].

Figure 5(d): the thresholding function was used, with (r1, s1) = (m, 0) and (r2, s2) = (m, L-1), where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.
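
A sketch of the piecewise-linear mapping above (it assumes 0 < r1 < r2 < L-1, i.e. it does not handle the degenerate thresholding case r1 = r2):

import numpy as np

def contrast_stretch(img, r1, s1, r2, s2, L=256):
    # Piecewise-linear mapping through (0,0), (r1,s1), (r2,s2), (L-1,L-1).
    r = img.astype(float)
    out = np.empty_like(r)
    lo = r < r1
    mid = (r >= r1) & (r <= r2)
    hi = r > r2
    out[lo] = (s1 / r1) * r[lo]
    out[mid] = s1 + (s2 - s1) / (r2 - r1) * (r[mid] - r1)
    out[hi] = s2 + (L - 1 - s2) / (L - 1 - r2) * (r[hi] - r2)
    return out

img = np.array([[40, 90], [140, 200]])
print(contrast_stretch(img, r1=40, s1=0, r2=200, s2=255))  # stretches [40,200] to [0,255]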


Intensity-level slicing

- highlighting a specific range of intensities in an image.

There are two approaches to intensity-level slicing (see the two transformation plots below):

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities – this produces a binary image;
2. brighten (or darken) the desired range of intensities, but leave unchanged all other intensities in the image.
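
Both approaches are simple elementwise selections; a minimal sketch (function names and the test values are made up):

import numpy as np

def slice_binary(img, a, b, L=256):
    # Approach 1: white inside [a, b], black elsewhere.
    return np.where((img >= a) & (img <= b), L - 1, 0)

def slice_preserve(img, a, b, L=256):
    # Approach 2: brighten [a, b], leave all other intensities unchanged.
    return np.where((img >= a) & (img <= b), L - 1, img)

img = np.array([[10, 120], [180, 240]])
print(slice_binary(img, 100, 200))
print(slice_preserve(img, 100, 200))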


Figure 6 (left) – aortic angiogram near the kidney. The purpose of intensity slicing here is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the scale of intensities. This type of enhancement produces a binary image, which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example).

Transformation used for the middle image: highlights the intensity range [A, B] and reduces all other intensities to a lower level.

Transformation used for the right image: highlights the range [A, B] and preserves all other intensities.

In Figure 6 (right), the second technique was used: a band of intensities around the mean intensity of the mid-gray image was set to black, while the other intensities remain unchanged.

Fig. 6 – Aortic angiogram and intensity-sliced versions.


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2.

This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit, plane 8 – the highest-order bit).

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all the intensities between 0 and 127 to 0, and all levels between 128 and 255 to 1. Bit-plane slicing is useful for analyzing the relative importance of each bit in the image – it helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
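
Extracting the planes is a shift-and-mask operation; a minimal sketch (here k = 0 is the lowest-order bit, i.e. plane k+1 in the numbering above):

import numpy as np

img = np.array([[129, 64], [200, 3]], dtype=np.uint8)

planes = [(img >> k) & 1 for k in range(8)]   # plane k as a binary image
print(planes[7])   # highest-order plane: equivalent to thresholding at 128
print(planes[0])   # lowest-order plane: typically fine, noise-like detail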



bull enhancement is based on human subjective preferences regarding what is a bdquogoodrdquo enhancement result

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image restoration

bull improving the appearance of an image

bull restoration is objective - the techniques for restoration are based on mathematical or probabilistic models of image degradation

Color image processing

bull fundamental concept in color models

bull basic color processing in a digital domain

Wavelets and multiresolution processing

representing images in various degree of resolution

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Compression

reducing the storage required to save an image or the bandwidth required to transmit it

Morphological processing

bull tools for extracting image components that are useful in the representation and description of shape

bull a transition from processes that output images to processes that outputimage attributes

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Segmentation

bull partitioning an image into its constituents parts or objects

bull autonomous segmentation is one of the most difficult tasks of DIP

bull the more accurate the segmentation the more likley recognition is to succeed

Representation and description (almost always follows segmentation)

bull segmentation produces either the boundary of a region or all the poits in the region itself

bull converting the data produced by segmentation to a form suitable for computer processing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

bull boundary representation the focus is on external shape characteristics such as corners or inflections

bull complete region the focus is on internal properties such as texture or skeletal shape

bull description is also called feature extraction ndash extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another

Object recognition

the process of assigning a label (eg bdquovehiclerdquo) to an object based on itsdescriptors

Knowledge database

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Simplified diagramof a cross sectionof the human eye

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Three membranes enclose the eye the cornea and sclera outer cover the choroid the retina

The cornea is a tough transparent tissue that covers the anterior surface of the eye

Continuous with the cornea the sclera is an opaque membrane that enclosesthe remainder of the optic globe

The choroid lies directly below the sclera This membrane contains a network of blood vessels (major nutrition of the eye) The choroid is pigmented and help reduce the amount of light entering the eyeThe choroid is divided (at its anterior extreme) into the ciliary body and the irisThe iris contracts and expands to control the amount of light

Digital Image ProcessingDigital Image Processing

Week 1Week 1

The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to ciliary body (60-70 water 6 fat protein) The lens is colored in slightly yellow The lens absorbs approximatively 8 of the visiblelight (infrared and ultraviolet light are absorbed by proteins in lens)

The innermost membrane is the retina When the eye is proper focusedlight from an object outside the eye is imaged on the retinaVision is possible because of the distribution of discrete light receptors on the surface of the retina cones and rods (6-7 milion cones 75-150 milion rods)

Cones located in the central part of the retina (fovea) they are sensitive to colors vision of detail each cone is link to its own nervecone vision = photopic or bright-light vision

Fovea = the place where the image of the object of interest falls on

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Rods distributed over al the retina surface several rods are contected toa single nerve not specialized in detail vision serve to give a general overall picture of the filed of viewnot involved in color visionsensitive to low level of illumination

Blind spot region without receptors

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image formation in the eye

Ordinary photographic camera the lens has fixed focal length focusing atvarious distances is done by modifying the distance between the lens and the image plane (were the film or imaging chip are located)

Human eye the distance between the lens and the retina (the imaging region)is fixed the focal length needed to achieve proper focus is obtained by varyingthe shape of the lens (the fibers in the ciliary body accomplish this flattening or thickening the lens for distant or near objects respectively

distance between lens and retina along visual axix = 17 mm

range of focal length = 14 mm to 17 mm

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Illustration of Mach band effectPercieved intensityis not a simple function of actual intensity

Digital Image ProcessingDigital Image Processing

Week 1Week 1

All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Optical illusions

Digital Image ProcessingDigital Image Processing

Week 1Week 1

ALBASTRU BLUEVERDE GREENGALBEN YELLOW ROSU REDPORTOCALIU ORANGEGALBEN YELLOWVISINIU BURGUNDYALB WHITE

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Lightbullachromatic or monochromatic light - light that is void of color the attribute of such light is its intensity or amount

gray level is used to describe monochromatic intensitybecause it ranges from black to grays and to white bull chromatic light spans the electromagnetic energy spectrumfrom approximately 043 to 079 μm

quantities that describe the quality of a chromatic light source radiance

the total amount of energy that flows from the light source and it isusually measured in watts (W) luminance

measured in lumens (lm) gives a measure of the amount of energy an observer perceives from a light source

Digital Image ProcessingDigital Image Processing

Week 1Week 1

For example light emitted from a source operating in the far infraredregion of the spectrum could have significant energy (radiance) butan observer would hardly perceive it its luminance would be almostzero

brightnessa subjective descriptor of light perception that is practicallyimpossible to measure It embodies the achromatic notion ofintensity and is one of the key factors in describing color sensation

Digital Image ProcessingDigital Image Processing

Week 1Week 1

the physical meaning is determined by the source of the image

( )f D f x y

Image generated from a physical process f(xy) ndash proportional to the energy radiated by the physical source

f(xy) ndash characterized by two components

i(xy) = illumination component the amount of source illumination incident on the scene being viewed

r(xy) = reflectance component the amount of illumination reflected by the objects in the scene

( ) ( ) ( )

0 ( ) 0 ( ) 1

f x y i x y r x y

i x y r x y

Digital Image ProcessingDigital Image Processing

Week 1Week 1

r(xy)=0 - total absorption r(xy)=1 - total reflectance

i(xy) ndash determined by the illumination source

r(xy) ndash determined by the characteristics of the imaged objects

is called gray (or intensity) scale

In practice

min 0 0 max min min min max max max( ) L l f x y L L i r L i r

indoor values without additional illuminationmin max10 1000L L

black whitemin max0 1 0 1 0 1L L L L l l L

min maxL L

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image Processing

Week 1

Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed

scene

converting a continuous image f to digital form

- digitizing (x y) is called sampling

- digitizing f(x y) is called quantization

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Continuous image projected onto a sensor array Result of image sampling and quantization

Digital Image Processing

Week 1

Representing Digital Images (xy) x = 01hellipM-1 y = 01hellipN-1 ndash spatial variables or spatial coordinates

(00) (01) (0 1)(10) (11) (1 1)

( )

( 10) ( 11) ( 1 1)

f f f Nf f f N

f x y

f M f M f M N

image element pixel

00 01 0 1

10 11 1 1

10 11 1 1

( ) ( )

N

i jN M N

i j

M M M N

a a aa f x i y j f i ja a a

Aa

a a a

f(00) ndash the upper left corner of the image

Digital Image Processing

Week 1

M N ge 0 L=2k

[0 1]i j i ja a L

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system Upper limit ndash determined by saturation lower limit - noise

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Number of bits required to store a digitized image

for 2 b M N k M N b N k

When an image can have 2k intensity levels the image is referred as a k-bit image 256 discrete intensity values ndash 8-bit image

Digital Image Processing

Week 1

Spatial and Intensity Resolution Spatial resolution ndash the smallest discernible detail in an image

Measures line pairs per unit distance dots (pixels) per unit distance

Image resolution = the largest number of discernible line pairs per unit distance

(eg 100 line pairs per mm)

Dots per unit distance are commonly used in printing and publishing

In US the measure is expressed in dots per inch (dpi)

(newspapers are printed with 75 dpi glossy brochures at 175 dpi)

Intensity resolution ndash the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations

L=2k ndash most common k = 8

Intensity resolution in practice is given by k (number of bits used to quantize intensity)

Digital Image Processing

Week 1

Fig1 Reducing spatial resolution 1250 dpi(upper left) 300 dpi (upper right)

150 dpi (lower left) 72 dpi (lower right)

Digital Image Processing

Week 1

Reducing the number of gray levels 256 128 64 32

Digital Image Processing

Week 1

Reducing the number of gray levels 16 8 4 2

Digital Image Processing

Week 1

Image Interpolation - used in zooming shrinking rotating and geometric corrections

Shrinking zooming ndash image resizing ndash image resampling methods

Interpolation is the process of using known data to estimate values at unknown locations

Suppose we have an image of size 500 times 500 pixels that has to be enlarged 15 times to

750times750 pixels One way to do this is to create an imaginary 750 times 750 grid with the

same spacing as the original and then shrink it so that it fits exactly over the original

image The pixel spacing in the 750 times 750 grid will be less than in the original image

Problem assignment of intensity-level in the new 750 times 750 grid

Nearest neighbor interpolation assign for every point in the new grid (750 times 750) the

intensity of the closest pixel (nearest neighbor) from the oldoriginal grid (500 times 500)

This technique has the tendency to produce undesirable effects like severe distortion of

straight edges

Digital Image Processing

Week 1

Bilinear interpolation ndash assign for the new (x y) location the following intensity ( )v x y a x b y c x y d

where the four coefficients are determined from the 4 equations in 4 unknowns that can

be written using the 4 nearest neighbors of point (x y)

Bilinear interpolation gives much better results than nearest neighbor interpolation with a

modest increase in computational effort

Bicubic interpolation ndash assign for the new (x y) location an intensity that involves the 16

nearest neighbors of the point 3 3

0 0

( ) i ji j

i jv x y c x y

The coefficients cij are obtained solving a 16x16 linear system

intensity levels of the 16 nearest neighbors of 3 3

0 0

( )i ji j

i jc x y x y

Digital Image Processing

Week 1

Generally bicubic interpolation does a better job of preserving fine detail than the

bilinear technique Bicubic interpolation is the standard used in commercial image editing

programs such as Adobe Photoshop and Corel Photopaint

Figure 2 (a) is the same as Fig 1 (d) which was obtained by reducing the resolution of

the 1250 dpi in Fig 1(a) to 72 dpi (the size shrank from 3692 times 2812 to 213 times 162) and

then zooming the reduced image back to its original size To generate Fig 1(d) nearest

neighbor interpolation was used (both for shrinking and zooming)

Figures 2(b) and (c) were generated using the same steps but using bilinear and bicubic

interpolation respectively Figures 2(d)+(e)+(f) were obtained by reducing the resolution

from 1250 dpi to 150 dpi (instead of 72 dpi)

Digital Image Processing

Week 1

Fig 2 ndash Interpolation examples for zooming and shrinking (nearest neighbor linear bicubic)

Digital Image Processing

Week 1

Neighbors of a Pixel

A pixel p at coordinates (x y) has 4 horizontal and vertical neighbors horizontal vertical( 1 ) ( 1 ) ( 1) ( 1)x y x y x y x y

This set of pixels called the 4-neighbors of p denoted by N4 (p)

The 4 diagonal neighbors of p have coordinates ( 1 1) ( 1 1) ( 1 1) ( 1 1)x y x y x y x y

and are denoted ND(p)

The horizontal vertical and diagonal neighbors are called the 8-neighbors of p denoted

N8 (p)

If (xy) is on the border of the image some of the neighbor locations in ND(p) and N8(p)

fall outside the image

Digital Image Processing

Week 1

Adjacency Connectivity Regions Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image V 01 (V=0 V=1)

- in a gray-scale image with 256 possible gray-levels V can be any subset of 0255

We consider 3 types of adjacency

(a) 4-adjacency two pixels p and q with values from V are 4-adjacent if 4( )q N p

(b) 8-adjacency two pixels p and q with values from V are 8-adjacent if 8( )q N p

(c) m-adjacency (mixed adjacency) two pixels p and q with values from V are

m-adjacent if

4( )q N p or

( )Dq N p and the set 4 4( ) ( )N p N q has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency It is introduced to eliminate the

ambiguities that often arise when 8-adjacency is used Consider the example

Digital Image Processing

Week 1

binary image

0 1 1 0 1 1 0 1 1

1 0 1 0 0 1 0 0 1 0

0 0 1 0 0 1 0 0 1

V

The three pixels at the top (first line) in the above example show multiple (ambiguous)

8-adjacency as indicated by the dashed lines This ambiguity is removed by using

m-adjacency

Digital Image Processing

Week 1

A (digital) path (or curve) from pixel p with coordinates (xy) to q with coordinates (st)

is a sequence of distinct pixels with coordinates

and are adjacent 0 0 1 1

1 1

( ) ( ) ( ) ( ) ( )( ) ( ) 12

n n

i i i i

x y x y x y x y s tx y x y i n

The length of the path is n If 0 0( ) ( )n nx y x y the path is closed

Depending on the type of adjacency considered the paths are 4- 8- or m-paths

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image R is a region of the image if R is a connected set

Two regions R1 and R2 are said to be adjacent if 1 2R R form a connected set Regions

that are not adjacent are said to be disjoint When referring to regions only 4- and

8-adjacency are considered

Digital Image Processing

Week 1

Suppose that an image contains K disjoint regions 1 kR k K none of which

touches the image border

the complement of 1

( )K

cu k u u

k

R R R R

We call all the points in Ru the foreground of the image and the points in ( )cuR the

background of the image

The boundary (border or contour) of a region R is the set of points that are adjacent to

points in the complement of R (R)c The border of an image is the set of pixels in the

region that have at least one background neighbor This definition is referred to as the

inner border to distinguish it from the notion of outer border which is the corresponding

border in the background

Digital Image Processing

Week 1

Distance measures

For pixels p q and z with coordinates (xy) (st) and (vw) respectively D is a distance

function or metric if

(a) D(p q) ge 0 D(p q) = 0 iff p=q

(b) D(p q) = D(q p)

(c) D(p z) le D(p q) + D(q z)

The Euclidean distance between p and q is defined as 1

2 2 2 22( ) ( ) ( ) ( ) ( )eD p q x s y t x s y t

The pixels q for which ( )eD p q r are the points contained in a disk of radius r

centered at (x y)

Digital Image Processing

Week 1

The D4 distance (also called city-block distance) between p and q is defined as

4( ) | | | |D p q x s y t

The pixels q for which 4( )D p q r form a diamond centered at (xy)

4

22 1 2

2 2 1 0 1 22 1 2

2

D

The pixels with D4 = 1 are the 4-neighbors of (x y)

The D8 distance (called the chessboard distance) between p and q is defined as

8( ) max| | | |D p q x s y t

The pixels q for which 8( )D p q r form a square centered at (x y)

Digital Image Processing

Week 1

8

2 2 2 2 22 1 1 1 2

2 2 1 0 1 22 1 1 1 22 2 2 2 2

D

The pixels with D8 = 1 are the 8-neighbors of (x y)

D4 and D8 distances are independent of any paths that might exist between p and q

because these distances involve only the coordinates of the point

Digital Image Processing

Week 1

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

Consider two 2 x 2 images

$A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}, \quad B = \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix}$

Array product:

$\begin{pmatrix} a_{11} b_{11} & a_{12} b_{12} \\ a_{21} b_{21} & a_{22} b_{22} \end{pmatrix}$

Matrix product:

$\begin{pmatrix} a_{11} b_{11} + a_{12} b_{21} & a_{11} b_{12} + a_{12} b_{22} \\ a_{21} b_{11} + a_{22} b_{21} & a_{21} b_{12} + a_{22} b_{22} \end{pmatrix}$

We assume array operations unless stated otherwise.
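In NumPy, for example, the two products are written with different operators; a small sketch:

import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A * B)   # array (element-by-element) product
print(A @ B)   # matrix product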

Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether they are linear or nonlinear. Consider a general operator H that produces an output image g(x, y) from an input image f(x, y):

$H[f(x, y)] = g(x, y)$

H is said to be a linear operator if

$H[a f_1(x, y) + b f_2(x, y)] = a\, H[f_1(x, y)] + b\, H[f_2(x, y)]$

for any scalars a, b and any images $f_1$, $f_2$.

Example of a nonlinear operator: the maximum value of the pixels of an image, $H[f] = \max_{(x, y)} f(x, y)$.

Let

$f_1 = \begin{pmatrix} 0 & 2 \\ 2 & 3 \end{pmatrix}, \quad f_2 = \begin{pmatrix} 6 & 5 \\ 4 & 7 \end{pmatrix}, \quad a = 1, \quad b = -1$

Then

$\max[a f_1 + b f_2] = \max \begin{pmatrix} -6 & -3 \\ -2 & -4 \end{pmatrix} = -2$

while

$a \max[f_1] + b \max[f_2] = 3 + (-1) \cdot 7 = -4$

The two results differ, so the max operator is nonlinear.
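The same check, written out as a quick sketch in Python/NumPy:

import numpy as np

f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 5], [4, 7]])
a, b = 1, -1

lhs = np.max(a * f1 + b * f2)           # H[a f1 + b f2] = -2
rhs = a * np.max(f1) + b * np.max(f2)   # a H[f1] + b H[f2] = -4
print(lhs, rhs)                         # the results differ: max is nonlinear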

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise: $g(x, y) = f(x, y) + \eta(x, y)$, where f(x, y) is the noiseless image and $\eta(x, y)$ the noise, uncorrelated and with zero average value.

For a random variable z with mean m, $E[(z - m)^2]$ is the variance ($E(\cdot)$ is the expected value). The covariance of two random variables $z_1$ and $z_2$ is defined as $E[(z_1 - m_1)(z_2 - m_2)]$. The two random variables are uncorrelated when their covariance is 0.

Objective: reduce noise by averaging a set of noisy images $g_i(x, y)$ (a technique frequently used in image enhancement):

$\bar{g}(x, y) = \frac{1}{K} \sum_{i=1}^{K} g_i(x, y)$

If the noise satisfies the properties stated above, we have

$E\{\bar{g}(x, y)\} = f(x, y), \qquad \sigma^2_{\bar{g}(x, y)} = \frac{1}{K}\, \sigma^2_{\eta(x, y)}$

where $E\{\bar{g}(x, y)\}$ is the expected value of $\bar{g}$, and $\sigma^2_{\bar{g}(x, y)}$ and $\sigma^2_{\eta(x, y)}$ are the variances of $\bar{g}$ and $\eta$, respectively. The standard deviation (square root of the variance) at any point in the average image is

$\sigma_{\bar{g}(x, y)} = \frac{1}{\sqrt{K}}\, \sigma_{\eta(x, y)}$

As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because $E\{\bar{g}(x, y)\} = f(x, y)$, this means that $\bar{g}(x, y)$ approaches f(x, y) as the number of noisy images used in the averaging process increases.
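A small simulation of this behavior (a sketch; the constant test image and the noise parameters are chosen purely for illustration):

import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 100.0)                     # noiseless test image
K = 100
g = f + rng.normal(0.0, 64.0, size=(K, 64, 64))  # K noisy observations of f

g_bar = g.mean(axis=0)                           # averaged image
print(g[0].std())    # ~64: noise level of a single image
print(g_bar.std())   # ~6.4: reduced by a factor of sqrt(K) = 10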

An important application of image averaging is in the field of astronomy, where imaging under very low light levels frequently causes sensor noise to render single images virtually useless for analysis. Figure 3(a) shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels. Figures 3(b)-(f) show the result of averaging 5, 10, 20, 50, and 100 images, respectively.

Fig. 3 - Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (panel (a)), and the results of averaging 5, 10, 20, 50, and 100 noisy images (panels (b)-(f)).

A frequent application of image subtraction is in the enhancement of differences between

images

Fig. 4 - (a) Infrared image of the Washington, D.C. area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images.

Figure 4(b) was obtained by setting to zero the least significant bit of every pixel in Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between images (a) and (b). Black (0) values in Figure 4(c) indicate locations where there is no difference between images (a) and (b).

Mask mode radiography: $g(x, y) = f(x, y) - h(x, y)$

where h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images, called live images (denoted f(x, y)), of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium. In g(x, y) we find the differences between h and f, as enhanced detail.

Because the images are captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries in the area being observed.

Fig. 5 - Angiography subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c), enhanced.

An important application of image multiplication (and division) is shading correction. Suppose that an imaging sensor produces images in the form

$g(x, y) = f(x, y)\, h(x, y)$

where f(x, y) is the "perfect image" and h(x, y) is the shading function. When the shading function is known,

$f(x, y) = \frac{g(x, y)}{h(x, y)}$

When h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.
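A sketch of the division-based correction, with a synthetic shading function standing in for h(x, y):

import numpy as np

rng = np.random.default_rng(1)
f = rng.uniform(50.0, 200.0, size=(128, 128))   # stand-in "perfect" image
yy, xx = np.mgrid[0:128, 0:128]
h = 0.5 + 0.5 * xx / 127.0                      # synthetic shading function
g = f * h                                       # image produced by the sensor

f_hat = g / h                                   # shading correction
print(np.allclose(f_hat, f))                    # True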

Fig. 6 - Shading correction: (a) shaded image of a tungsten filament, magnified approximately 130 times; (b) shading pattern; (c) corrected image.

Another use of image multiplication is in masking, also called region of interest (ROI) operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of the ROI can be arbitrary, although it is usually rectangular.

Fig. 7 - (a) Digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b).

In practice, most images are displayed using 8 bits, so image values are expected to be in the range [0, 255] (for TIFF or JPEG images, conversion to this range is automatic; the conversion depends on the system used). The difference of two images can produce values in the range [-255, 255], and the addition of two images values in the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure is to compute

$f_m = f - \min(f)$

and then

$f_s = K \frac{f_m}{\max(f_m)}$

which maps the result into the full range [0, K] (K = 255 for 8-bit images).
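A sketch of this procedure (it assumes the image is not constant, so that $\max(f_m) > 0$):

import numpy as np

def scale_full_range(f, K=255.0):
    # f_m = f - min(f); f_s = K * f_m / max(f_m)  ->  values in [0, K]
    fm = f - f.min()
    return K * fm / fm.max()

d = np.array([[-200.0, -10.0], [30.0, 255.0]])  # e.g. a difference image
print(scale_full_range(d))                      # now spans [0, 255]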


Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the intensity values of individual pixels: $s = T(z)$, where z is the intensity of a pixel in the original image and s is the intensity of the corresponding pixel in the processed image.

Neighborhood operations

Let $S_{xy}$ denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y) based on the values of the intensities of the points in $S_{xy}$. For example, if $S_{xy}$ is a rectangular neighborhood of size m x n centered at (x, y), we can assign the new value of intensity by computing the average value of the pixels in $S_{xy}$:

$g(x, y) = \frac{1}{m n} \sum_{(r, c) \in S_{xy}} f(r, c)$

The net effect is to perform local blurring of the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
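A direct (unoptimized) sketch of this neighborhood average, assuming zero padding at the image borders (other border policies are possible):

import numpy as np

def local_mean(f, m, n):
    # g(x, y) = average of f over an m x n neighborhood S_xy centered at (x, y)
    pm, pn = m // 2, n // 2
    fp = np.pad(f.astype(float), ((pm, pm), (pn, pn)))
    g = np.empty(f.shape)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            g[x, y] = fp[x:x + m, y:y + n].mean()
    return g

f = np.arange(25.0).reshape(5, 5)
print(local_mean(f, 3, 3))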

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image
- these transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules)

A geometric transformation consists of 2 basic operations:

1. a spatial transformation of coordinates;
2. intensity interpolation, which assigns intensity values to the spatially transformed pixels.

The coordinate system transformation is $(x, y) = T[(v, w)]$, where (v, w) are pixel coordinates in the original image and (x, y) are pixel coordinates in the transformed image.

For example, $T[(v, w)] = (v/2, w/2)$ shrinks the original image to half its size in both spatial directions.

Affine transform:

$[x \;\; y \;\; 1] = [v \;\; w \;\; 1]\, T = [v \;\; w \;\; 1] \begin{pmatrix} t_{11} & t_{12} & 0 \\ t_{21} & t_{22} & 0 \\ t_{31} & t_{32} & 1 \end{pmatrix}$   (AT)

that is, $x = t_{11} v + t_{21} w + t_{31}$ and $y = t_{12} v + t_{22} w + t_{32}$.

This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3 x 3 matrix equal to the matrix product of the scaling, rotation, and translation matrices from Table 1.

(Table 1: affine transformations - the identity, scaling, rotation, translation, and shearing matrices.)

The preceding transformations relocate pixels of an image to new locations. To complete the process, we have to assign intensity values to those locations. This task is done by intensity interpolation (nearest neighbor, bilinear, or bicubic interpolation).

In practice, we can use equation (AT) in two basic ways.

Forward mapping: scan the pixels of the input image (v, w) and compute the new spatial location (x, y) of the corresponding pixel in the new image using (AT) directly.

Problems:
- intensity assignment when 2 or more pixels in the original image are transformed to the same location in the output image;
- some output locations have no correspondent in the original image (no intensity assignment).

Inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image, $(v, w) = T^{-1}[(x, y)]$. Then interpolate among the nearest input pixels to determine the intensity of the output pixel value. Inverse mappings are more efficient to implement than forward mappings and are used in numerous commercial implementations of spatial transformations (MATLAB, for example).
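A sketch of inverse mapping for the affine form (AT), using nearest-neighbor interpolation for brevity (bilinear interpolation would follow the same pattern):

import numpy as np

def warp_inverse(f, T, out_shape):
    # For each output location (x, y), compute (v, w) = [x y 1] T^{-1}
    # and take the nearest input pixel (out-of-range locations stay 0).
    T_inv = np.linalg.inv(T)
    g = np.zeros(out_shape, dtype=f.dtype)
    for x in range(out_shape[0]):
        for y in range(out_shape[1]):
            v, w, _ = np.array([x, y, 1.0]) @ T_inv
            vi, wi = int(round(v)), int(round(w))
            if 0 <= vi < f.shape[0] and 0 <= wi < f.shape[1]:
                g[x, y] = f[vi, wi]
    return g

f = np.arange(16.0).reshape(4, 4)
S = np.diag([2.0, 2.0, 1.0])          # scaling matrix in the (AT) convention
print(warp_inverse(f, S, (8, 8)))     # image zoomed to twice its size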


Image registration: align two or more images of the same scene. In image registration we have available the input and output images, but the specific transformation that produced the output image from the input is generally unknown. The problem is to estimate the transformation function and then use it to register the two images.

- it may be of interest to align (register) two or more images taken at approximately the same time but using different imaging systems (e.g., an MRI scanner and a PET scanner);
- or to align images of a given location taken by the same instrument at different moments in time (e.g., satellite images).

The problem can be solved using tie points (also called control points), which are corresponding points whose locations are known precisely in the input and reference images.

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors; these objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points.

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points, both in the input image and in the reference image. A simple model based on a bilinear approximation is given by

$x = c_1 v + c_2 w + c_3 v w + c_4$
$y = c_5 v + c_6 w + c_7 v w + c_8$

where (v, w) and (x, y) are the coordinates of corresponding tie points (the 4 pairs give an 8 x 8 linear system for the $c_i$).
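A sketch of the fit; the four tie-point pairs below are invented for illustration:

import numpy as np

# (v, w): tie points in the input image; (x, y): same points in the reference.
vw = np.array([[0, 0], [0, 100], [100, 0], [100, 100]], dtype=float)
xy = np.array([[3, 2], [1, 104], [105, 1], [102, 99]], dtype=float)

# Rows of the form [v, w, v*w, 1] give x = c1 v + c2 w + c3 v w + c4
# (and the analogous equation for y), i.e. an 8x8 linear system.
rows = np.column_stack([vw[:, 0], vw[:, 1], vw[:, 0] * vw[:, 1], np.ones(4)])
A = np.block([[rows, np.zeros((4, 4))], [np.zeros((4, 4)), rows]])
b = np.concatenate([xy[:, 0], xy[:, 1]])
c = np.linalg.solve(A, b)             # c1..c4 (for x) and c5..c8 (for y)
print(c)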

When 4 tie points are insufficient to obtain satisfactory registration, an approach used frequently is to select a larger number of tie points and, using this new set of tie points, subdivide the image into rectangular regions marked by groups of 4 tie points. On each subregion marked by 4 tie points, we apply the transformation model described above. The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometric distortion.

Figure: (a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c).

Probabilistic Methods

Let $z_i$, $i = 0, 1, \dots, L-1$, denote the values of all possible intensities in an M x N digital image, and let $p(z_k)$ denote the probability that the intensity level $z_k$ occurs in the given image:

$p(z_k) = \frac{n_k}{M N}$

where $n_k$ is the number of times that intensity $z_k$ occurs in the image (MN is the total number of pixels in the image). Clearly,

$\sum_{k=0}^{L-1} p(z_k) = 1$

The mean (average) intensity of an image is given by

$m = \sum_{k=0}^{L-1} z_k\, p(z_k)$

The variance of the intensities is

$\sigma^2 = \sum_{k=0}^{L-1} (z_k - m)^2\, p(z_k)$

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation ($\sigma$) is used.

The n-th moment of a random variable z about the mean is defined as

$\mu_n(z) = \sum_{k=0}^{L-1} (z_k - m)^n\, p(z_k)$

(so $\mu_0(z) = 1$, $\mu_1(z) = 0$, and $\mu_2(z) = \sigma^2$).

If $\mu_3(z) > 0$, the intensities are biased to values higher than the mean; if $\mu_3(z) < 0$, the intensities are biased to values lower than the mean; if $\mu_3(z) = 0$, the intensities are distributed approximately equally on both sides of the mean.
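These statistics are straightforward to compute from the normalized histogram; a sketch on a toy image:

import numpy as np

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(64, 64))    # toy 8-bit image, L = 256
L = 256

nk = np.bincount(img.ravel(), minlength=L)   # n_k
p = nk / img.size                            # p(z_k) = n_k / (M N)
z = np.arange(L)

m = np.sum(z * p)                            # mean intensity
mu2 = np.sum((z - m) ** 2 * p)               # variance
mu3 = np.sum((z - m) ** 3 * p)               # third moment: sign gives the bias
print(m, mu2, np.sqrt(mu2), mu3)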

Fig. 1 - (a) Low contrast; (b) medium contrast; (c) high contrast.
Figure 1(a): standard deviation 14.3 (variance = 204.5).
Figure 1(b): standard deviation 31.6 (variance = 998.6).
Figure 1(c): standard deviation 49.2 (variance = 2420.6).

Intensity Transformations and Spatial Filtering

$g(x, y) = T[f(x, y)]$

where f(x, y) is the input image, g(x, y) is the output image, and T is an operator on f defined over a neighborhood of (x, y). The neighborhood of the point (x, y), $S_{xy}$, usually is rectangular, centered on (x, y), and much smaller in size than the image.

In spatial filtering, the operator T (the neighborhood and the operation applied on it) is called a spatial filter (also: spatial mask, kernel, template, or window).

When $S_{xy}$ reduces to the single point (x, y), T becomes an intensity (gray-level, or mapping) transformation function,

$s = T(r)$

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2, left: T produces an output image of higher contrast than the original, by darkening the intensity levels below k and brightening the levels above k; this technique is called contrast stretching.

Fig. 2 - Intensity transformation functions: left, contrast stretching; right, thresholding function.

Figure 2, right: T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L-1] is obtained using the function

$s = T(r) = L - 1 - r$

- the equivalent of a photographic negative
- a technique suited for enhancing white or gray detail embedded in dark regions of an image

(Figure: an original image and its negative.)

Log Transformations

$s = T(r) = c \log(1 + r)$, with c a constant and $r \ge 0$.

(Figure: plots of some basic intensity transformation functions.)

This transformation maps a narrow range of low intensity values in the input into a wider range of output levels. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values; the opposite is true for the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.
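A sketch of the transformation (base-10 logarithm, which matches the c = 1 example below):

import numpy as np

def log_transform(r, c=1.0):
    # s = c * log10(1 + r): expands dark values, compresses bright ones
    return c * np.log10(1.0 + r)

spectrum = np.array([0.0, 10.0, 1.0e3, 1.5e6])   # spectrum-like magnitudes
print(log_transform(spectrum))   # 0 .. ~6.2; rescale to [0, 255] for display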

Figure 4(a): intensity values in the range 0 to 1.5 x 10^6.
Figure 4(b): log transformation of Figure 4(a) with c = 1; range 0 to 6.2.

Fig. 4 - (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.

Power-Law (Gamma) Transformations

$s = T(r) = c\, r^{\gamma}$, with c and $\gamma$ positive constants (sometimes written $s = c (r + \varepsilon)^{\gamma}$ to account for a measurable offset).

(Figure: plots of the gamma transformation for different values of $\gamma$, with c = 1.)

Power-law curves with $\gamma < 1$ map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. The curves with $\gamma > 1$ have the opposite effect of those generated with $\gamma < 1$. For $c = \gamma = 1$ we get the identity transformation.

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
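A sketch for 8-bit images, normalizing intensities to [0, 1] before applying the power law:

import numpy as np

def gamma_transform(f, gamma, c=1.0):
    # s = c * r^gamma on intensities normalized to [0, 1]
    r = f / 255.0
    s = c * np.power(r, gamma)
    return np.clip(255.0 * s, 0, 255).astype(np.uint8)

ramp = np.arange(256, dtype=np.uint8)
print(gamma_transform(ramp, 3.0)[128])   # gamma > 1 darkens mid-tones (~32)
print(gamma_transform(ramp, 0.4)[128])   # gamma < 1 brightens mid-tones (~193)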

(a) Aerial image; (b)-(d) results of applying the gamma transformation with c = 1 and $\gamma$ = 3.0, 4.0, and 5.0, respectively.

Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image, so that it spans the full intensity range of the recording tool or display device.

Fig. 5 - (a) Piecewise-linear transformation function; (b) a low-contrast image; (c) result of contrast stretching; (d) result of thresholding.

$T(r) = \begin{cases} \dfrac{s_1}{r_1}\, r, & r \in [0, r_1] \\[2mm] s_1 + \dfrac{s_2 - s_1}{r_2 - r_1}\,(r - r_1), & r \in [r_1, r_2] \\[2mm] s_2 + \dfrac{(L - 1) - s_2}{(L - 1) - r_2}\,(r - r_2), & r \in [r_2, L - 1] \end{cases}$

If $r_1 = s_1$ and $r_2 = s_2$, T is the identity transformation (no change). If $r_1 = r_2$, $s_1 = 0$, and $s_2 = L - 1$, T is a thresholding function.

Figure 5(b) shows an 8-bit image with low contrast. Figure 5(c) shows the result of contrast stretching, obtained by setting $(r_1, s_1) = (r_{\min}, 0)$ and $(r_2, s_2) = (r_{\max}, L - 1)$, where $r_{\min}$ and $r_{\max}$ denote the minimum and maximum gray levels in the image, respectively; the transformation function thus stretched the levels linearly from their original range to the full range [0, L-1]. Figure 5(d) shows the result of the thresholding function, used with $(r_1, s_1) = (m, 0)$ and $(r_2, s_2) = (m, L - 1)$, where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times
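Both settings are instances of the same piecewise-linear map; a sketch using linear interpolation through the control points $(r_1, s_1)$ and $(r_2, s_2)$:

import numpy as np

def contrast_stretch(f, r1, s1, r2, s2, L=256):
    # Piecewise-linear map through (0, 0), (r1, s1), (r2, s2), (L-1, L-1);
    # assumes 0 <= r1 <= r2 <= L-1.
    s = np.interp(f.astype(float), [0, r1, r2, L - 1], [0, s1, s2, L - 1])
    return s.astype(np.uint8)

rng = np.random.default_rng(3)
img = rng.integers(80, 160, size=(8, 8))   # a low-contrast 8-bit image
# Full-range stretch: (r1, s1) = (r_min, 0), (r2, s2) = (r_max, L-1)
print(contrast_stretch(img, img.min(), 0, img.max(), 255))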

Intensity-level slicing

- highlighting a specific range of intensities in an image.

There are two approaches for intensity-level slicing (both are sketched in code after this list):

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities - this produces a binary image;
2. brighten (or darken) the desired range of intensities, but leave all other intensities in the image unchanged.
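A sketch of both approaches; here the range of interest [A, B] is made white, everything else is either forced to black (approach 1) or left unchanged (approach 2):

import numpy as np

def slice_binary(f, A, B, lo=0, hi=255):
    # Approach 1: one value inside [A, B], another value elsewhere
    return np.where((f >= A) & (f <= B), hi, lo).astype(np.uint8)

def slice_preserve(f, A, B, hi=255):
    # Approach 2: highlight [A, B], preserve all other intensities
    return np.where((f >= A) & (f <= B), hi, f).astype(np.uint8)

ramp = np.arange(256, dtype=np.uint8)
print(slice_binary(ramp, 150, 200)[145:155])
print(slice_preserve(ramp, 150, 200)[145:155])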

Figure 6 (left) shows an aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the intensity scale. This type of enhancement produces a binary image, which is useful for studying the shape of the flow of the contrast medium (for instance, to detect blockages). In Figure 6 (right) the second technique was used: a band of intensities in the mid-gray range around the mean intensity was set to black, while the other intensities remain unchanged.

(Transformation functions: one highlights the intensity range [A, B] and reduces all other intensities to a lower level; the other highlights the range [A, B] and preserves all other intensities.)

Fig. 6 - Aortic angiogram and intensity-sliced versions.

Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2. This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1: the lowest-order bit; plane 8: the highest-order bit).

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all the intensities between 0 and 127 to 0, and all levels between 128 and 255 to 1.

The bit-slicing technique is useful for analyzing the relative importance of each bit in the image, which helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
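A sketch of bit-plane extraction with shifts and masks:

import numpy as np

def bit_plane(f, k):
    # Plane k of an 8-bit image (k = 1: lowest-order bit ... k = 8: highest),
    # returned as a binary 0/1 image.
    return (f >> (k - 1)) & 1

img = np.array([[0, 127, 128, 255]], dtype=np.uint8)
print(bit_plane(img, 8))   # [[0 0 1 1]]: thresholds at 128, as described above
print(bit_plane(img, 1))   # [[0 1 0 1]]: the least significant bit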


Spaceborne radar image of mountains in southeast Tibet


Imaging in the Radio Band

Main applications: medicine and astronomy.

MRI = Magnetic Resonance Imaging. This technique places the patient in a powerful magnet and passes short pulses of radio waves through his or her body. Each pulse causes a responding pulse of radio waves to be emitted by the patient's tissues. The location from which these signals originate and their strength are determined by a computer, which produces a 2D picture of a section of the patient.


MRI images of a human knee (left) and spine (right)


Images of the Crab Pulsar covering the electromagnetic spectrum: gamma, X-ray, optical, infrared, radio.

Other Imaging Modalities

Acoustic imaging, electron microscopy, synthetic (computer-generated) imaging.

Imaging using sound: geological exploration, industry, medicine.

Mineral and oil exploration: for image acquisition over land, one of the main approaches is to use a large truck and a large flat steel plate. The plate is pressed on the ground by the truck, and the truck is vibrated through a frequency spectrum up to 100 Hz. The strength and the speed of the returning sound waves are determined by the composition of the Earth below the surface; these are analyzed by a computer, and images are generated from the resulting analysis.

Biometry - iris


Biometry - fingerprint


Face detection and recognition


Gender identification


Image morphing


Fundamental Steps in DIP

methods whose input and output are images

methods whose inputs are images but whose outputs are attributes extracted from those images


Outputs are images:
• image acquisition
• image filtering and enhancement
• image restoration
• color image processing
• wavelets and multiresolution processing
• compression
• morphological processing

Outputs are attributes:
• morphological processing
• segmentation
• representation and description
• object recognition

Image acquisition - may involve preprocessing, such as scaling.

Image enhancement:
• manipulating an image so that the result is more suitable than the original for a specific operation
• enhancement is problem oriented
• there is no general 'theory' of image enhancement
• enhancement uses subjective methods for image improvement
• enhancement is based on human subjective preferences regarding what is a "good" enhancement result

Image restoration:
• improving the appearance of an image
• restoration is objective: the techniques for restoration are based on mathematical or probabilistic models of image degradation

Color image processing:
• fundamental concepts in color models
• basic color processing in a digital domain

Wavelets and multiresolution processing: representing images in various degrees of resolution.

Compression: reducing the storage required to save an image, or the bandwidth required to transmit it.

Morphological processing:
• tools for extracting image components that are useful in the representation and description of shape
• a transition from processes that output images to processes that output image attributes

Segmentation:
• partitioning an image into its constituent parts or objects
• autonomous segmentation is one of the most difficult tasks of DIP
• the more accurate the segmentation, the more likely recognition is to succeed

Representation and description (almost always follows segmentation):
• segmentation produces either the boundary of a region or all the points in the region itself
• converting the data produced by segmentation to a form suitable for computer processing

• boundary representation: the focus is on external shape characteristics, such as corners or inflections
• complete region: the focus is on internal properties, such as texture or skeletal shape
• description, also called feature extraction: extracting attributes that result in some quantitative information of interest, or that are basic for differentiating one class of objects from another

Object recognition: the process of assigning a label (e.g., "vehicle") to an object based on its descriptors.

Knowledge database

Simplified diagram of a cross section of the human eye.

Three membranes enclose the eye: the cornea and sclera outer cover, the choroid, and the retina.

The cornea is a tough, transparent tissue that covers the anterior surface of the eye. Continuous with the cornea, the sclera is an opaque membrane that encloses the remainder of the optic globe.

The choroid lies directly below the sclera. This membrane contains a network of blood vessels (the major source of nutrition of the eye). The choroid is pigmented and helps reduce the amount of light entering the eye. At its anterior extreme, the choroid is divided into the ciliary body and the iris. The iris contracts and expands to control the amount of light.

The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to the ciliary body (60-70% water, about 6% fat, and protein). The lens is colored slightly yellow; it absorbs approximately 8% of the visible light (infrared and ultraviolet light are absorbed by proteins in the lens).

The innermost membrane is the retina. When the eye is properly focused, light from an object outside the eye is imaged on the retina. Vision is possible because of the distribution of discrete light receptors on the surface of the retina: cones and rods (6-7 million cones, 75-150 million rods).

Cones: located in the central part of the retina (the fovea); sensitive to colors; provide vision of detail; each cone is linked to its own nerve. Cone vision = photopic, or bright-light, vision.

Fovea = the place where the image of the object of interest falls.

Rods: distributed over all the retina surface; several rods are connected to a single nerve; not specialized in detail vision; serve to give a general, overall picture of the field of view; not involved in color vision; sensitive to low levels of illumination.

Blind spot: region without receptors.

Image formation in the eye

In an ordinary photographic camera, the lens has a fixed focal length; focusing at various distances is done by modifying the distance between the lens and the image plane (where the film or imaging chip is located).

In the human eye, the distance between the lens and the retina (the imaging region) is fixed; the focal length needed to achieve proper focus is obtained by varying the shape of the lens (the fibers in the ciliary body accomplish this, flattening or thickening the lens for distant or near objects, respectively).

The distance between the lens and the retina along the visual axis is 17 mm; the range of focal lengths is 14 mm to 17 mm.

Illustration of the Mach band effect: perceived intensity is not a simple function of actual intensity.

All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter


Optical illusions

(Figure: color names in Romanian with their English equivalents - ALBASTRU = blue, VERDE = green, GALBEN = yellow, ROSU = red, PORTOCALIU = orange, VISINIU = burgundy, ALB = white.)

Light

• Achromatic or monochromatic light: light that is void of color; the attribute of such light is its intensity, or amount. Gray level is used to describe monochromatic intensity, because it ranges from black, to grays, and finally to white.
• Chromatic light spans the electromagnetic energy spectrum from approximately 0.43 to 0.79 μm.

Quantities that describe the quality of a chromatic light source:
• radiance: the total amount of energy that flows from the light source; it is usually measured in watts (W);
• luminance: measured in lumens (lm), gives a measure of the amount of energy an observer perceives from a light source. For example, light emitted from a source operating in the far infrared region of the spectrum could have significant energy (radiance), but an observer would hardly perceive it; its luminance would be almost zero;
• brightness: a subjective descriptor of light perception that is practically impossible to measure. It embodies the achromatic notion of intensity and is one of the key factors in describing color sensation.

An image is a function f defined on a domain D, with values f(x, y); the physical meaning of f is determined by the source of the image.

For an image generated from a physical process, f(x, y) is proportional to the energy radiated by the physical source. In this case f(x, y) is characterized by two components:
• i(x, y) = the illumination component: the amount of source illumination incident on the scene being viewed;
• r(x, y) = the reflectance component: the amount of illumination reflected by the objects in the scene.

$f(x, y) = i(x, y)\, r(x, y)$, with $0 < i(x, y) < \infty$ and $0 < r(x, y) < 1$.

r(x, y) = 0 means total absorption and r(x, y) = 1 total reflectance; i(x, y) is determined by the illumination source, r(x, y) by the characteristics of the imaged objects.

The intensity values satisfy $L_{\min} \le \ell = f(x, y) \le L_{\max}$, where $L_{\min} = i_{\min} r_{\min}$ and $L_{\max} = i_{\max} r_{\max}$. The interval $[L_{\min}, L_{\max}]$ is called the gray (or intensity) scale. Typical indoor values without additional illumination: $L_{\min} \approx 10$, $L_{\max} \approx 1000$. In practice the interval is shifted to $[0, L-1]$, where $\ell = 0$ is considered black and $\ell = L-1$ is considered white.


Image Sampling and Quantization

The output of most sensors is a continuous voltage waveform related to the sensed scene. Converting a continuous image f to digital form requires:
- digitizing the coordinate values (x, y): this is called sampling;
- digitizing the amplitude values f(x, y): this is called quantization.

(Figure: a continuous image projected onto a sensor array, and the result of image sampling and quantization.)

Representing Digital Images

Let (x, y), with x = 0, 1, ..., M-1 and y = 0, 1, ..., N-1, be the spatial variables (spatial coordinates). The image can be written as

$f(x, y) = \begin{pmatrix} f(0, 0) & f(0, 1) & \cdots & f(0, N-1) \\ f(1, 0) & f(1, 1) & \cdots & f(1, N-1) \\ \vdots & \vdots & & \vdots \\ f(M-1, 0) & f(M-1, 1) & \cdots & f(M-1, N-1) \end{pmatrix}$

Each element of the array is an image element, or pixel. In matrix notation,

$A = [a_{ij}]_{M \times N}, \qquad a_{ij} = f(x = i, y = j) = f(i, j)$

f(0, 0) is the upper left corner of the image.

Here M, N > 0, and the number of intensity levels is typically $L = 2^k$, with $a_{ij} \in [0, L-1]$.

The dynamic range of an image is the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system. The upper limit is determined by saturation, the lower limit by noise.

Number of bits required to store a digitized image:

$b = M \times N \times k$ (for M = N, $b = N^2 k$)

For example, a 1024 x 1024 image with 8 bits per pixel requires $b = 2^{23}$ bits (1 MB). When an image can have $2^k$ intensity levels, the image is referred to as a k-bit image; an image with 256 discrete intensity values is an 8-bit image.


Spatial and Intensity Resolution

Spatial resolution: the smallest discernible detail in an image. Measures: line pairs per unit distance, dots (pixels) per unit distance. Image resolution = the largest number of discernible line pairs per unit distance (e.g., 100 line pairs per mm). Dots per unit distance are commonly used in printing and publishing; in the US, the measure is expressed in dots per inch (dpi). (Newspapers are printed at 75 dpi, glossy brochures at 175 dpi.)

Intensity resolution: the smallest discernible change in intensity level. The number of intensity levels, L, is determined by hardware considerations: $L = 2^k$, most commonly with k = 8. In practice, intensity resolution is given by k, the number of bits used to quantize intensity.

Fig. 1 - Reducing spatial resolution: 1250 dpi (upper left), 300 dpi (upper right), 150 dpi (lower left), 72 dpi (lower right).

Reducing the number of gray levels: 256, 128, 64, 32.

Reducing the number of gray levels: 16, 8, 4, 2.

Image Interpolation - used in zooming, shrinking, rotating, and geometric corrections. Shrinking and zooming are image resizing (image resampling) methods. Interpolation is the process of using known data to estimate values at unknown locations.

Suppose we have an image of size 500 x 500 pixels that has to be enlarged 1.5 times, to 750 x 750 pixels. One way to do this is to create an imaginary 750 x 750 grid with the same pixel spacing as the original, and then shrink it so that it fits exactly over the original image. The pixel spacing in the 750 x 750 grid will be less than in the original image. The problem is the assignment of intensity levels in the new 750 x 750 grid.

Nearest neighbor interpolation: assign to every point in the new grid (750 x 750) the intensity of the closest pixel (nearest neighbor) from the original grid (500 x 500). This technique has the tendency to produce undesirable effects, like severe distortion of straight edges.


Bilinear interpolation: assign to the new (x, y) location the intensity

$v(x, y) = a x + b y + c x y + d$

where the four coefficients are determined from the 4 equations in 4 unknowns that can be written using the 4 nearest neighbors of point (x, y). Bilinear interpolation gives much better results than nearest neighbor interpolation, with a modest increase in computational effort.

Bicubic interpolation: assign to the new (x, y) location an intensity that involves the 16 nearest neighbors of the point:

$v(x, y) = \sum_{i=0}^{3} \sum_{j=0}^{3} c_{ij}\, x^i y^j$

The 16 coefficients $c_{ij}$ are obtained by solving a 16 x 16 linear system written using the intensity levels of the 16 nearest neighbors of (x, y).
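A sketch of bilinear zooming by inverse mapping: each output pixel is placed on the original grid and interpolates among its 4 nearest input neighbors (output size P x Q, with P, Q >= 2 assumed):

import numpy as np

def bilinear_zoom(f, P, Q):
    M, N = f.shape
    g = np.empty((P, Q))
    for x in range(P):
        for y in range(Q):
            # Position of the output pixel on the original grid
            v = x * (M - 1) / (P - 1)
            w = y * (N - 1) / (Q - 1)
            v0, w0 = int(v), int(w)
            v1, w1 = min(v0 + 1, M - 1), min(w0 + 1, N - 1)
            a, b = v - v0, w - w0
            g[x, y] = ((1 - a) * (1 - b) * f[v0, w0] + a * (1 - b) * f[v1, w0]
                       + (1 - a) * b * f[v0, w1] + a * b * f[v1, w1])
    return g

f = np.array([[0.0, 10.0], [20.0, 30.0]])
print(bilinear_zoom(f, 3, 3))   # a 2 x 2 image enlarged to 3 x 3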


Generally, bicubic interpolation does a better job of preserving fine detail than the bilinear technique. Bicubic interpolation is the standard used in commercial image editing programs, such as Adobe Photoshop and Corel Photopaint.

Figure 2(a) is the same as Figure 1(d), which was obtained by reducing the resolution of the 1250 dpi image in Figure 1(a) to 72 dpi (the size shrank from 3692 x 2812 to 213 x 162 pixels) and then zooming the reduced image back to its original size, using nearest neighbor interpolation both for shrinking and for zooming. Figures 2(b) and (c) were generated using the same steps, but using bilinear and bicubic interpolation, respectively. Figures 2(d)-(f) were obtained by reducing the resolution from 1250 dpi to 150 dpi (instead of 72 dpi).

Fig. 2 - Interpolation examples for zooming and shrinking (nearest neighbor, bilinear, bicubic).

Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors:

$(x+1, y), \; (x-1, y), \; (x, y+1), \; (x, y-1)$

This set of pixels, called the 4-neighbors of p, is denoted by $N_4(p)$. The 4 diagonal neighbors of p have coordinates

$(x+1, y+1), \; (x+1, y-1), \; (x-1, y+1), \; (x-1, y-1)$

and are denoted $N_D(p)$. The horizontal, vertical, and diagonal neighbors together are called the 8-neighbors of p, denoted $N_8(p)$. If (x, y) is on the border of the image, some of the neighbor locations in $N_D(p)$ and $N_8(p)$ fall outside the image.
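The three neighbor sets, written out (a sketch; discarding out-of-image coordinates is left to the caller):

def n4(x, y):
    # 4-neighbors: horizontal and vertical
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def nd(x, y):
    # diagonal neighbors
    return [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]

def n8(x, y):
    # 8-neighbors: N4(p) together with ND(p)
    return n4(x, y) + nd(x, y)

print(n8(1, 1))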


Adjacency, Connectivity, Regions, Boundaries

Denote by V the set of intensity levels used to define adjacency:
- in a binary image, V is a subset of {0, 1} (V = {0} or V = {1});
- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, ..., 255}.

We consider 3 types of adjacency:

(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if $q \in N_4(p)$;
(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if $q \in N_8(p)$;
(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if $q \in N_4(p)$, or if $q \in N_D(p)$ and the set $N_4(p) \cap N_4(q)$ has no pixels whose values are from V.

Mixed adjacency is a modification of 8-adjacency, introduced to eliminate the ambiguities that often arise when 8-adjacency is used; the binary-image example discussed earlier (with V = {1}) illustrates such an ambiguity.

Digital Image Processing

Week 1

binary image

0 1 1 0 1 1 0 1 1

1 0 1 0 0 1 0 0 1 0

0 0 1 0 0 1 0 0 1

V

The three pixels at the top (first line) in the above example show multiple (ambiguous)

8-adjacency as indicated by the dashed lines This ambiguity is removed by using

m-adjacency

Digital Image Processing

Week 1

A (digital) path (or curve) from pixel p with coordinates (xy) to q with coordinates (st)

is a sequence of distinct pixels with coordinates

and are adjacent 0 0 1 1

1 1

( ) ( ) ( ) ( ) ( )( ) ( ) 12

n n

i i i i

x y x y x y x y s tx y x y i n

The length of the path is n If 0 0( ) ( )n nx y x y the path is closed

Depending on the type of adjacency considered the paths are 4- 8- or m-paths

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image R is a region of the image if R is a connected set

Two regions R1 and R2 are said to be adjacent if 1 2R R form a connected set Regions

that are not adjacent are said to be disjoint When referring to regions only 4- and

8-adjacency are considered

Digital Image Processing

Week 1

Suppose that an image contains K disjoint regions 1 kR k K none of which

touches the image border

the complement of 1

( )K

cu k u u

k

R R R R

We call all the points in Ru the foreground of the image and the points in ( )cuR the

background of the image

The boundary (border or contour) of a region R is the set of points that are adjacent to

points in the complement of R (R)c The border of an image is the set of pixels in the

region that have at least one background neighbor This definition is referred to as the

inner border to distinguish it from the notion of outer border which is the corresponding

border in the background

Digital Image Processing

Week 1

Distance measures

For pixels p q and z with coordinates (xy) (st) and (vw) respectively D is a distance

function or metric if

(a) D(p q) ge 0 D(p q) = 0 iff p=q

(b) D(p q) = D(q p)

(c) D(p z) le D(p q) + D(q z)

The Euclidean distance between p and q is defined as 1

2 2 2 22( ) ( ) ( ) ( ) ( )eD p q x s y t x s y t

The pixels q for which ( )eD p q r are the points contained in a disk of radius r

centered at (x y)

Digital Image Processing

Week 1

The D4 distance (also called city-block distance) between p and q is defined as

4( ) | | | |D p q x s y t

The pixels q for which 4( )D p q r form a diamond centered at (xy)

4

22 1 2

2 2 1 0 1 22 1 2

2

D

The pixels with D4 = 1 are the 4-neighbors of (x y)

The D8 distance (called the chessboard distance) between p and q is defined as

8( ) max| | | |D p q x s y t

The pixels q for which 8( )D p q r form a square centered at (x y)

Digital Image Processing

Week 1

8

2 2 2 2 22 1 1 1 2

2 2 1 0 1 22 1 1 1 22 2 2 2 2

D

The pixels with D8 = 1 are the 8-neighbors of (x y)

D4 and D8 distances are independent of any paths that might exist between p and q

because these distances involve only the coordinates of the point

Digital Image Processing

Week 1

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

11 12 11 12

21 22 21 22

a a b ba a b b

Array product

11 12 11 12 11 11 12 12

21 22 21 22 21 21 22 21

a a b b a b a ba a b b a b a b

Matrix product

11 12 11 12 11 11 12 21 11 12 12 21

21 22 21 22 21 11 22 21 21 12 22 22

a a b b a b a b a b a ba a b b a b a b a b a b

We assume array operations unless stated otherwise

Digital Image Processing

Week 1

Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether it is

linear or nonlinear

( ) ( )H f x y g x y

H is said to be a linear operator if

images1 2 1 2

1 2

( ) ( ) ( ) ( )

H a f x y b f x y a H f x y b H f x y

a b f f

Example of nonlinear operator

the maximum value of the pixels of image max ( )H f f x y f

1 2

0 2 6 5 1 1

2 3 4 7f f a b

Digital Image Processing

Week 1

1 2

0 2 6 5 6 3max max 1 ( 1) max 2

2 3 4 7 2 4a f b f

0 2 6 51 max ( 1) max 3 ( 1)7 4

2 3 4 7

Arithmetic Operations in Image Processing

Let g(xy) denote a corrupted image formed by the addition of noise ( ) ( ) ( )g x y f x y x y

f(xy) ndash the noiseless image η(xy) the noise uncorrelated and has 0 average value

For a random variable z with mean m E[(z-m)2] is the variance (E( ∙ ) is the expected

value) The covariance of two random variables z1 and z2 is defined as E[(z1-m1) (z2-m2)]

The two random variables are uncorrelated when their covariance is 0

Digital Image Processing

Week 1

Objective reduce noise by adding a set of noisy images ( )ig x y (technique frequently

used in image enhancement)

1

1( ) ( )K

ii

g x y g x yK

If the noise satisfies the properties stated above we have

2 2( ) ( )

1( ) ( ) g x y x yE g x y f x yK

( ( ))E g x y is the expected value of g and and 2 2( ) ( )g x y x y are the variances of

and g respectively The standard deviation (square root of the variance) at any point in

the average image is

( ) ( )1

g x y x yK

Digital Image Processing

Week 1

As K increases the variability (as measured by the variance or the standard deviation) of

the pixel values at each location (x y) decreases Because ( ) ( )E g x y f x y this

means that ( )g x y approaches f(x y) as the number of noisy images used in the

averaging process increases

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis Figure 226(a) shows an 8-bit image in which corruption

was simulated by adding to it Gaussian noise with zero mean and a standard deviation of

64 intensity levels Figures 226(b)-(f) show the result of averaging 5 10 20 50 and 100

images respectively

Digital Image Processing

Week 1

Fig 3 Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner) Results of averaging 5 10 20 50

100 noisy images

a b c d e f

Digital Image Processing

Week 1

A frequent application of image subtraction is in the enhancement of differences between

images

(a) (b) (c)

Fig 4 (a) Infrared image of Washington DC area (b) Image obtained from (a) by setting to zero the least

significant bit of each pixel (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a) The two images seem almost the same Figure 4(c) is the difference between

images (a) and (b) Black (0) values in Figure (c) indicate locations where there is no

difference between images (a) and (b)

Digital Image Processing

Week 1

Mask mode radiography ( ) ( ) ( )g x y f x y h x y

h(x y) the mask is an X-ray image of a region of a patientrsquos body captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source The procedure consists of injecting an X-ray contrast medium into the patientrsquos

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed

Digital Image Processing

Week 1

a b c d Fig 5 ndash Angiography ndash subtraction example (a) ndash mask image (b) ndash live image (c) ndash difference between (a) and (b) (d) - image (c) enhanced

Digital Image Processing

Week 1

An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form ( ) ( ) ( )g x y f x y h x y

f(x y) ndash the ldquoperfect imagerdquo h(x y) ndash the shading function

When the shading function is known

( )( )( )

g x yf x yh x y

h(x y) is unknown but we have access to the imaging system we can obtain an

approximation to the shading function by imaging a target of constant intensity When the

sensor is not available often the shading pattern can be estimated from the image

Digital Image Processing

Week 1

(a) (b) (c)

Fig 6 Shading correction (a) ndash Shaded image of a tungsten filament magnified 130 times (b) - shading pattern (c) corrected image

Digital Image Processing

Week 1

Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1rsquos (white) in the ROI and 0rsquos elsewhere There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

(a) (b) (c)

Fig 7 (a) ndash digital dental X-ray image (b) - ROI mask for teeth with fillings (c) product of (a) and (b)

Digital Image Processing

Week 1

In practice most images are displayed using 8 bits the image values are in the range [0255] TIFF JPEG images ndash conversion to this range is automatic The conversion depends on the system used Difference of two images can produce image with values in the range [-255255] Addition of two images ndash range [0510] Many software packages simply set the negative values to 0 and set to 255 all values greater than 255 A more appropriate procedure compute

min( )mf f f

0 ( 255)max( )

ms

m

ff K K K

f

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Digital Image Processing

Week 1

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

( )

1( ) ( )xyr c S

g x y f r cm n

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation ( ) [( )]x y T v w

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image

Digital Image Processing

Week 1

[( )] ( )2 2v wT v w shrinks the original image half its size in both spatial directions

Affine transform

11 1211 21 31

21 2212 22 33

31 32

0[ 1] [ 1] [ 1] 0

1

t tx t v t w t

x y v w T v w t ty t v t w t

t t

(AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

Digital Image Processing

Week 1

inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image (v w) 1( ) ( )v w T x y

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) – intensity values in the range 0 to 1.5 × 10⁶

Figure 4(b) = log transformation of Figure 4(a), with c = 1 – range 0 to 6.2

a b (a) – Fourier spectrum; (b) – log transformation applied to (a), c = 1. Fig 4
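A minimal sketch of the log transformation (the helper name is mine; here c is chosen to map the output onto the full 8-bit range, a common practical choice rather than the fixed c = 1 of Fig. 4):

```python
import numpy as np

def log_transform(img):
    """s = c * log(1 + r), scaled so the output spans [0, 255]."""
    r = img.astype(np.float64)
    s = np.log1p(r)                 # log(1 + r), defined for r >= 0
    c = 255.0 / s.max()             # assumes the image is not all zeros
    return np.uint8(c * s)
```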

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

$s = T(r) = c\, r^{\gamma}$, with c, γ - positive constants (sometimes written $s = c\,(r + \varepsilon)^{\gamma}$)

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with $\gamma < 1$ map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher values of input. The curves with $\gamma > 1$ have the opposite effect of those generated with $\gamma < 1$.

$c = \gamma = 1$ : identity transformation

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
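A hedged sketch of the power-law transformation on normalized intensities (the helper name and the normalization to [0, 1] are my choices):

```python
import numpy as np

def gamma_transform(img, gamma, c=1.0):
    """Power-law transformation s = c * r**gamma, r normalized to [0, 1]."""
    r = img / 255.0
    s = c * np.power(r, gamma)
    return np.uint8(np.clip(s, 0.0, 1.0) * 255)
```

With gamma < 1 the dark range is expanded (the image brightens); gamma > 1 compresses it, as in the aerial-image example below.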

Digital Image Processing

Week 1

a b c d (a) – aerial image; (b)–(d) – results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0 and 5.0, respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

$$T(r) = \begin{cases} \dfrac{s_1}{r_1}\, r, & r \in [0, r_1] \\[1ex] s_1 + \dfrac{(s_2 - s_1)(r - r_1)}{r_2 - r_1}, & r \in [r_1, r_2] \\[1ex] s_2 + \dfrac{(L - 1 - s_2)(r - r_2)}{L - 1 - r_2}, & r \in [r_2, L - 1] \end{cases}$$

Digital Image Processing

Week 1

$r_1 = s_1$ and $r_2 = s_2$ : identity transformation (no change)

$r_1 = r_2$, $s_1 = 0$, $s_2 = L - 1$ : thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters $r_1 = r_{\min}$, $s_1 = 0$, $r_2 = r_{\max}$, $s_2 = L - 1$, where $r_{\min}$ and $r_{\max}$ denote the minimum and maximum gray levels in the image, respectively. Thus, the transformation function stretched the levels linearly from their original range to the full range [0, L-1].

Figure 5(d) - the thresholding function was used, with $r_1 = r_2 = m$, $s_1 = 0$ and $s_2 = L - 1$, where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times
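Since T(r) above is linear between the control points, it can be sketched with a single interpolation call (names are mine; assumes 0 ≤ r1 ≤ r2 ≤ L-1):

```python
import numpy as np

def contrast_stretch(img, r1, s1, r2, s2, L=256):
    """Piecewise-linear stretching through (r1, s1) and (r2, s2)."""
    s = np.interp(img.astype(np.float64),
                  [0, r1, r2, L - 1],     # breakpoints of T(r)
                  [0, s1, s2, L - 1])
    return np.uint8(s)

# Full-range stretch, as in Fig. 5(c):
# out = contrast_stretch(img, img.min(), 0, img.max(), 255)
```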

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities - this produces a binary image (Fig. 6, middle)

2. brighten (or darken) the desired range of intensities, but leave unchanged all other intensities in the image (Fig. 6, right)
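Both approaches are a few lines each; a minimal sketch (the function names are mine):

```python
import numpy as np

def slice_binary(img, a, b):
    """Approach 1: white inside [a, b], black everywhere else."""
    return np.where((img >= a) & (img <= b), 255, 0).astype(np.uint8)

def slice_preserve(img, a, b, value=255):
    """Approach 2: set [a, b] to `value`, leave other intensities unchanged."""
    out = img.copy()
    out[(img >= a) & (img <= b)] = value
    return out
```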

Digital Image Processing

Week 1

Figure 6 (left) – aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the scale of intensities. This type of enhancement produces a binary image, which is useful for studying the shape of the flow of the contrast substance (to detect blockages…).

In Figure 6 (right), the second technique was used: a band of intensities in the mid-gray range around the mean intensity was set to black; the other intensities remain unchanged.

(middle) Highlights intensity range [A, B] and reduces all other intensities to a lower level. (right) Highlights range [A, B] and preserves all other intensities.

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2.

This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit, plane 8 – the highest-order bit).

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression
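A short sketch of bit-plane extraction (my helper; plane numbering follows the convention above, plane 1 = lowest-order bit):

```python
import numpy as np

def bit_plane(img, k):
    """Binary image of bit plane k (k = 1 lowest, ..., 8 highest)."""
    return (img >> (k - 1)) & 1

# For an 8-bit image, bit_plane(img, 8) equals thresholding at 128,
# i.e. (img >= 128).astype(np.uint8), matching the description above.
```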

Digital Image Processing

Week 1

Imaging in the Radio Band

medicine astronomy

MRI = Magnetic Resonance Imaging

This technique places the patient in a powerful magnet and passes short pulses of radio waves through his or her body

Each pulse causes a responding pulse of radio waves to be emitted by the patient's tissues.

The location from which these signals originate and their strength are determined by a computer which produces a 2D picture of a section of the patient

Digital Image Processing

Week 1

MRI images of a human knee (left) and spine (right)

Digital Image Processing

Week 1

Images of the Crab Pulsar covering the electromagnetic spectrum

Gamma X-ray Optical Infrared Radio

Digital Image Processing

Week 1

Other Imaging Modalities

acoustic imaging, electron microscopy, synthetic (computer-generated) imaging

Imaging using sound: geological exploration, industry, medicine

Mineral and oil exploration

For image acquisition over land, one of the main approaches is to use a large truck and a large flat steel plate. The plate is pressed on the ground by the truck, and the truck is vibrated through a frequency spectrum up to 100 Hz. The strength and the speed of the returning sound waves are determined by the composition of the Earth below the surface. These are analysed by a computer, and images are generated from the resulting analysis.

Digital Image Processing

Week 1

Biometry - iris

Digital Image Processing

Week 1

Biometry - fingerprint

Digital Image Processing

Week 1

Face detection and recognition

Digital Image Processing

Week 1

Gender identification

Digital Image Processing

Week 1

Image morphing

Digital Image Processing

Week 1

Fundamental Steps in DIP

methods whose input and output are images

methods whose inputs are images but whose outputs are attributes extracted from those images

Digital Image Processing

Week 1

Outputs are images

• image acquisition

• image filtering and enhancement

• image restoration

• color image processing

• wavelets and multiresolution processing

• compression

• morphological processing

Digital Image Processing

Week 1

Outputs are attributes

• morphological processing

• segmentation

• representation and description

• object recognition

Digital Image Processing

Week 1

Image acquisition - may involve preprocessing such as scaling

Image enhancement

• manipulating an image so that the result is more suitable than the original for a specific operation

• enhancement is problem oriented

• there is no general 'theory' of image enhancement

• enhancement uses subjective methods for image improvement

• enhancement is based on human subjective preferences regarding what is a "good" enhancement result

Digital Image Processing

Week 1

Image restoration

• improving the appearance of an image

• restoration is objective - the techniques for restoration are based on mathematical or probabilistic models of image degradation

Color image processing

• fundamental concepts in color models

• basic color processing in a digital domain

Wavelets and multiresolution processing

• representing images in various degrees of resolution

Digital Image Processing

Week 1

Compression

reducing the storage required to save an image or the bandwidth required to transmit it

Morphological processing

• tools for extracting image components that are useful in the representation and description of shape

• a transition from processes that output images to processes that output image attributes

Digital Image Processing

Week 1

Segmentation

• partitioning an image into its constituent parts or objects

• autonomous segmentation is one of the most difficult tasks of DIP

• the more accurate the segmentation, the more likely recognition is to succeed

Representation and description (almost always follows segmentation)

• segmentation produces either the boundary of a region or all the points in the region itself

• converting the data produced by segmentation to a form suitable for computer processing

Digital Image Processing

Week 1

• boundary representation: the focus is on external shape characteristics, such as corners or inflections

• complete region: the focus is on internal properties, such as texture or skeletal shape

• description is also called feature extraction – extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another

Object recognition

the process of assigning a label (e.g. "vehicle") to an object based on its descriptors

Knowledge database

Digital Image Processing

Week 1

Simplified diagram of a cross section of the human eye

Digital Image Processing

Week 1

Three membranes enclose the eye: the cornea and sclera (outer cover), the choroid, and the retina.

The cornea is a tough transparent tissue that covers the anterior surface of the eye

Continuous with the cornea, the sclera is an opaque membrane that encloses the remainder of the optic globe.

The choroid lies directly below the sclera. This membrane contains a network of blood vessels (the major source of nutrition of the eye). The choroid is pigmented and helps reduce the amount of light entering the eye. The choroid is divided (at its anterior extreme) into the ciliary body and the iris. The iris contracts and expands to control the amount of light.

Digital Image Processing

Week 1

The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to the ciliary body (60-70% water, 6% fat, and more protein than any other tissue in the eye). The lens is slightly yellow. The lens absorbs approximately 8% of the visible light (infrared and ultraviolet light are absorbed by proteins in the lens).

The innermost membrane is the retina. When the eye is properly focused, light from an object outside the eye is imaged on the retina. Vision is possible because of the distribution of discrete light receptors over the surface of the retina: cones and rods (6-7 million cones, 75-150 million rods).

Cones: located in the central part of the retina (the fovea); sensitive to colors; provide vision of detail; each cone is linked to its own nerve. Cone vision = photopic or bright-light vision.

Fovea = the place where the image of the object of interest falls on

Digital Image Processing

Week 1

Rods: distributed over all the retina surface; several rods are connected to a single nerve; not specialized in detail vision; serve to give a general, overall picture of the field of view; not involved in color vision; sensitive to low levels of illumination.

Blind spot region without receptors

Digital Image Processing

Week 1

Image formation in the eye

Ordinary photographic camera: the lens has a fixed focal length; focusing at various distances is done by modifying the distance between the lens and the image plane (where the film or imaging chip is located).

Human eye: the distance between the lens and the retina (the imaging region) is fixed; the focal length needed to achieve proper focus is obtained by varying the shape of the lens (the fibers in the ciliary body accomplish this, flattening or thickening the lens for distant or near objects, respectively).

distance between lens and retina along the visual axis = 17 mm

range of focal length = 14 mm to 17 mm

Digital Image Processing

Week 1

Illustration of the Mach band effect. Perceived intensity is not a simple function of actual intensity.

Digital Image Processing

Week 1

All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter

Digital Image Processing

Week 1

Optical illusions

Digital Image Processing

Week 1

ALBASTRU (BLUE), VERDE (GREEN), GALBEN (YELLOW), ROSU (RED), PORTOCALIU (ORANGE), GALBEN (YELLOW), VISINIU (BURGUNDY), ALB (WHITE)

Digital Image Processing

Week 1

Light

• achromatic or monochromatic light - light that is void of color; the attribute of such light is its intensity, or amount

gray level is used to describe monochromatic intensity, because it ranges from black to grays and finally to white

• chromatic light spans the electromagnetic energy spectrum from approximately 0.43 to 0.79 μm

Quantities that describe the quality of a chromatic light source:

radiance: the total amount of energy that flows from the light source; it is usually measured in watts (W)

luminance: measured in lumens (lm); gives a measure of the amount of energy an observer perceives from a light source

Digital Image Processing

Week 1

For example, light emitted from a source operating in the far infrared region of the spectrum could have significant energy (radiance), but an observer would hardly perceive it; its luminance would be almost zero.

brightness: a subjective descriptor of light perception that is practically impossible to measure. It embodies the achromatic notion of intensity and is one of the key factors in describing color sensation.

Digital Image Processing

Week 1

the physical meaning of f(x, y) is determined by the source of the image

$f : D \to \mathbb{R}, \; f = f(x, y)$

Image generated from a physical process: f(x, y) is proportional to the energy radiated by the physical source

f(x, y) is characterized by two components:

i(x, y) = illumination component: the amount of source illumination incident on the scene being viewed

r(x, y) = reflectance component: the amount of illumination reflected by the objects in the scene

$f(x, y) = i(x, y)\, r(x, y)$

$0 < i(x, y) < \infty, \qquad 0 < r(x, y) < 1$

Digital Image Processing

Week 1

r(xy)=0 - total absorption r(xy)=1 - total reflectance

i(xy) ndash determined by the illumination source

r(xy) ndash determined by the characteristics of the imaged objects

The interval [L_min, L_max] is called the gray (or intensity) scale:

L_min ≤ l = f(x₀, y₀) ≤ L_max,   L_min = i_min · r_min,   L_max = i_max · r_max

Typical indoor values, without additional illumination: L_min ≈ 10, L_max ≈ 1000.

In practice the interval is shifted to [0, L − 1]: l = 0 is black, l = L − 1 is white, and the intermediate values are shades of gray.
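A minimal numerical sketch of the illumination–reflectance model (the synthetic i and r arrays below are assumptions chosen for illustration, not data from the course):

import numpy as np

# Synthetic illumination: bright on the left, darker on the right
i = np.linspace(900.0, 50.0, num=100).reshape(1, -1).repeat(100, axis=0)

# Synthetic reflectance: a bright square (r near 1) on a dark background (r near 0.1)
r = np.full((100, 100), 0.1)
r[30:70, 30:70] = 0.9

f = i * r                      # f(x, y) = i(x, y) * r(x, y)
print(f.min(), f.max())        # stays inside (0, i_max * r_max)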


Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed scene

Converting a continuous image f to digital form:

- digitizing the coordinate values (x, y) is called sampling

- digitizing the amplitude values f(x, y) is called quantization


Figure: continuous image projected onto a sensor array; result of image sampling and quantization.


Representing Digital Images

f(x, y), x = 0, 1, …, M−1, y = 0, 1, …, N−1 – spatial variables or spatial coordinates. The digital image can be written as an M × N matrix:

f(x, y) =
[ f(0,0)      f(0,1)      …  f(0,N−1)
  f(1,0)      f(1,1)      …  f(1,N−1)
  …
  f(M−1,0)    f(M−1,1)    …  f(M−1,N−1) ]

Each element of the array is called an image element, picture element, or pixel. In matrix notation:

A = [a_{i,j}],   a_{i,j} = f(x = i, y = j) = f(i, j),   0 ≤ i ≤ M−1,  0 ≤ j ≤ N−1

f(0,0) – the upper left corner of the image.


M, N – positive integers; the number of intensity levels is typically a power of two, L = 2^k, with a_{i,j} ∈ [0, L − 1].

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system. Upper limit – determined by saturation; lower limit – noise.


Number of bits required to store a digitized image:

b = M × N × k;   for M = N,   b = N² k

When an image can have 2^k intensity levels, the image is referred to as a k-bit image; 256 discrete intensity values – 8-bit image.
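A small sketch of the storage formula (the 1024 × 1024 example size is an assumption chosen for illustration):

def storage_bits(M: int, N: int, k: int) -> int:
    """Bits needed for an M x N image quantized to 2**k levels."""
    return M * N * k

# A 1024 x 1024, 8-bit image needs 8 Mbit = 1 MB of raw storage.
b = storage_bits(1024, 1024, 8)
print(b, "bits =", b // 8, "bytes")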


Spatial and Intensity Resolution

Spatial resolution – the smallest discernible detail in an image.

Measures: line pairs per unit distance; dots (pixels) per unit distance.

Image resolution = the largest number of discernible line pairs per unit distance (e.g., 100 line pairs per mm).

Dots per unit distance are commonly used in printing and publishing. In the US the measure is expressed in dots per inch (dpi); newspapers are printed at 75 dpi, glossy brochures at 175 dpi.

Intensity resolution – the smallest discernible change in intensity level.

The number of intensity levels (L) is determined by hardware considerations: L = 2^k, most commonly with k = 8. Intensity resolution in practice is given by k, the number of bits used to quantize intensity.


Fig. 1 – Reducing spatial resolution: 1250 dpi (upper left), 300 dpi (upper right), 150 dpi (lower left), 72 dpi (lower right).

Reducing the number of gray levels: 256, 128, 64, 32.

Reducing the number of gray levels: 16, 8, 4, 2.

Image Interpolation – used in zooming, shrinking, rotating, and geometric corrections.

Shrinking, zooming – image resizing – image resampling methods.

Interpolation is the process of using known data to estimate values at unknown locations.

Suppose we have an image of size 500 × 500 pixels that has to be enlarged 1.5 times, to 750 × 750 pixels. One way to do this is to create an imaginary 750 × 750 grid with the same spacing as the original, and then shrink it so that it fits exactly over the original image. The pixel spacing in the 750 × 750 grid will be less than in the original image.

Problem: assignment of intensity levels in the new 750 × 750 grid.

Nearest neighbor interpolation: assign to every point in the new grid (750 × 750) the intensity of the closest pixel (nearest neighbor) from the old/original grid (500 × 500). This technique has the tendency to produce undesirable effects, like severe distortion of straight edges.


Bilinear interpolation – assign to the new (x, y) location the intensity

v(x, y) = a x + b y + c x y + d

where the four coefficients are determined from the 4 equations in 4 unknowns that can be written using the 4 nearest neighbors of point (x, y). Bilinear interpolation gives much better results than nearest neighbor interpolation, with a modest increase in computational effort.
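A compact sketch of both resampling schemes for a grayscale array (a minimal implementation written for this text, not the code used in the course):

import numpy as np

def resize_nearest(img, new_h, new_w):
    h, w = img.shape
    ys = (np.arange(new_h) * h / new_h).astype(int)   # index of nearest source row
    xs = (np.arange(new_w) * w / new_w).astype(int)   # index of nearest source column
    return img[ys[:, None], xs[None, :]]

def resize_bilinear(img, new_h, new_w):
    h, w = img.shape
    ys = np.arange(new_h) * (h - 1) / (new_h - 1)     # fractional source coordinates
    xs = np.arange(new_w) * (w - 1) / (new_w - 1)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    dy, dx = (ys - y0)[:, None], (xs - x0)[None, :]
    tl = img[y0[:, None], x0[None, :]]                # the 4 nearest neighbors
    tr = img[y0[:, None], x1[None, :]]
    bl = img[y1[:, None], x0[None, :]]
    br = img[y1[:, None], x1[None, :]]
    top = tl * (1 - dx) + tr * dx
    bot = bl * (1 - dx) + br * dx
    return top * (1 - dy) + bot * dy

img = np.arange(25, dtype=float).reshape(5, 5)
print(resize_nearest(img, 8, 8).shape, resize_bilinear(img, 8, 8).shape)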

Bicubic interpolation – assign to the new (x, y) location an intensity that involves the 16 nearest neighbors of the point:

v(x, y) = Σ_{i=0}^{3} Σ_{j=0}^{3} c_{ij} x^i y^j

The coefficients c_{ij} are obtained by solving a 16 × 16 linear system written using the intensity levels of the 16 nearest neighbors of (x, y).


Generally, bicubic interpolation does a better job of preserving fine detail than the bilinear technique. Bicubic interpolation is the standard used in commercial image editing programs, such as Adobe Photoshop and Corel Photopaint.

Figure 2(a) is the same as Fig. 1(d), which was obtained by reducing the resolution of the 1250 dpi image in Fig. 1(a) to 72 dpi (the size shrank from 3692 × 2812 to 213 × 162) and then zooming the reduced image back to its original size. To generate Fig. 1(d), nearest neighbor interpolation was used (both for shrinking and zooming). Figures 2(b) and (c) were generated using the same steps, but using bilinear and bicubic interpolation, respectively. Figures 2(d)–(f) were obtained by reducing the resolution from 1250 dpi to 150 dpi (instead of 72 dpi).


Fig. 2 – Interpolation examples for zooming and shrinking (nearest neighbor, bilinear, bicubic).


Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors:

(x+1, y), (x−1, y), (x, y+1), (x, y−1)

This set of pixels, called the 4-neighbors of p, is denoted by N₄(p).

The 4 diagonal neighbors of p have coordinates

(x+1, y+1), (x+1, y−1), (x−1, y+1), (x−1, y−1)

and are denoted N_D(p).

The horizontal, vertical, and diagonal neighbors together are called the 8-neighbors of p, denoted N₈(p).

If (x, y) is on the border of the image, some of the neighbor locations in N_D(p) and N₈(p) fall outside the image.
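A direct transcription of these definitions (a helper written for this text; it simply drops neighbors that fall outside an M × N image):

def neighbors(x, y, M, N, kind="8"):
    """Return the in-bounds 4-, diagonal, or 8-neighbors of pixel (x, y)."""
    n4 = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    nd = [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]
    pts = {"4": n4, "d": nd, "8": n4 + nd}[kind]
    return [(i, j) for (i, j) in pts if 0 <= i < M and 0 <= j < N]

print(neighbors(0, 0, 5, 5, "4"))   # border pixel: only (1, 0) and (0, 1) survive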


Adjacency, Connectivity, Regions, Boundaries

Denote by V the set of intensity levels used to define adjacency:

- in a binary image, V ⊆ {0, 1} (V = {0} or V = {1})

- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, 1, …, 255}

We consider 3 types of adjacency:

(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if q ∈ N₄(p)

(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if q ∈ N₈(p)

(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if

q ∈ N₄(p), or

q ∈ N_D(p) and the set N₄(p) ∩ N₄(q) has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency. It is introduced to eliminate the ambiguities that often arise when 8-adjacency is used. Consider the example:


Binary image, V = {1}:

0 1 1
0 1 0
0 0 1

The three pixels at the top (first line) of the above image show multiple (ambiguous) 8-adjacency paths, as indicated by the dashed lines in the figure. This ambiguity is removed by using m-adjacency.


A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

(x₀, y₀), (x₁, y₁), …, (x_n, y_n)

where (x₀, y₀) = (x, y), (x_n, y_n) = (s, t), and the pixels (x_i, y_i) and (x_{i−1}, y_{i−1}) are adjacent for i = 1, 2, …, n.

The length of the path is n. If (x₀, y₀) = (x_n, y_n), the path is closed.

Depending on the type of adjacency considered, the paths are 4-, 8-, or m-paths.

Let S denote a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting only of pixels from S. S is a connected set if there is a path in S between any 2 pixels in S.

Let R be a subset of pixels in an image. R is a region of the image if R is a connected set. Two regions R₁ and R₂ are said to be adjacent if R₁ ∪ R₂ forms a connected set. Regions that are not adjacent are said to be disjoint. When referring to regions, only 4- and 8-adjacency are considered.
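A minimal sketch of testing connectivity by breadth-first search over 4-neighbors (illustrative code written for this text, using V = {1} and the example image above):

from collections import deque

def connected(img, p, q):
    """True if pixels p and q (both of value 1) are 4-connected in img."""
    M, N = len(img), len(img[0])
    if img[p[0]][p[1]] != 1 or img[q[0]][q[1]] != 1:
        return False
    seen, todo = {p}, deque([p])
    while todo:
        x, y = todo.popleft()
        if (x, y) == q:
            return True
        for i, j in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):  # N4
            if 0 <= i < M and 0 <= j < N and img[i][j] == 1 and (i, j) not in seen:
                seen.add((i, j))
                todo.append((i, j))
    return False

img = [[0, 1, 1],
       [0, 1, 0],
       [0, 0, 1]]
print(connected(img, (0, 1), (1, 1)))   # True
print(connected(img, (0, 1), (2, 2)))   # False under 4-adjacency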


Suppose that an image contains K disjoint regions R_k, k = 1, …, K, none of which touches the image border. Let R_u = ∪_{k=1}^{K} R_k, and let (R_u)^c denote its complement. We call all the points in R_u the foreground of the image, and the points in (R_u)^c the background of the image.

The boundary (border, or contour) of a region R is the set of points that are adjacent to points in the complement of R, (R)^c; that is, the border of a region is the set of pixels in the region that have at least one background neighbor. This definition is referred to as the inner border, to distinguish it from the notion of outer border, which is the corresponding border in the background.


Distance measures

For pixels p, q, and z, with coordinates (x, y), (s, t), and (v, w) respectively, D is a distance function, or metric, if:

(a) D(p, q) ≥ 0, and D(p, q) = 0 iff p = q

(b) D(p, q) = D(q, p)

(c) D(p, z) ≤ D(p, q) + D(q, z)

The Euclidean distance between p and q is defined as

D_e(p, q) = [(x − s)² + (y − t)²]^{1/2}

The pixels q for which D_e(p, q) ≤ r are the points contained in a disk of radius r centered at (x, y).


The D₄ distance (also called city-block distance) between p and q is defined as

D₄(p, q) = |x − s| + |y − t|

The pixels q for which D₄(p, q) ≤ r form a diamond centered at (x, y). For example, the pixels with D₄ ≤ 2:

        2
    2   1   2
2   1   0   1   2
    2   1   2
        2

The pixels with D₄ = 1 are the 4-neighbors of (x, y).

The D₈ distance (called the chessboard distance) between p and q is defined as

D₈(p, q) = max(|x − s|, |y − t|)

The pixels q for which D₈(p, q) ≤ r form a square centered at (x, y).


For example, the pixels with D₈ ≤ 2:

2 2 2 2 2
2 1 1 1 2
2 1 0 1 2
2 1 1 1 2
2 2 2 2 2

The pixels with D₈ = 1 are the 8-neighbors of (x, y).

The D₄ and D₈ distances are independent of any paths that might exist between p and q, because these distances involve only the coordinates of the points.
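The three metrics, written out directly (small helper functions for this text):

import math

def d_euclid(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d4(p, q):   # city-block
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):   # chessboard
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (3, 4)
print(d_euclid(p, q), d4(p, q), d8(p, q))   # 5.0 7 4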


Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis. Consider the 2 × 2 images

A = [a₁₁ a₁₂; a₂₁ a₂₂],   B = [b₁₁ b₁₂; b₂₁ b₂₂]

Array product:

[a₁₁ b₁₁   a₁₂ b₁₂
 a₂₁ b₂₁   a₂₂ b₂₂]

Matrix product:

[a₁₁ b₁₁ + a₁₂ b₂₁   a₁₁ b₁₂ + a₁₂ b₂₂
 a₂₁ b₁₁ + a₂₂ b₂₁   a₂₁ b₁₂ + a₂₂ b₂₂]

We assume array operations throughout, unless stated otherwise.
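In NumPy the distinction is the elementwise * operator versus the matrix @ operator:

import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A * B)   # array (pixel-by-pixel) product: [[ 5 12] [21 32]]
print(A @ B)   # matrix product:                 [[19 22] [43 50]]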


Linear versus Nonlinear Operations

One of the most important classifications of an image-processing method is whether it is linear or nonlinear. Consider an operator H that produces an output image g(x, y) from an input image f(x, y):

H[f(x, y)] = g(x, y)

H is said to be a linear operator if

H[a f₁(x, y) + b f₂(x, y)] = a H[f₁(x, y)] + b H[f₂(x, y)]

for any scalars a, b and any images f₁, f₂.

Example of a nonlinear operator: the maximum value of the pixels of image f,

H[f] = max f(x, y)

Take

f₁ = [0 2; 2 3],   f₂ = [6 5; 4 7],   a = 1, b = −1


max{ 1 · [0 2; 2 3] + (−1) · [6 5; 4 7] } = max{ [−6 −3; −2 −4] } = −2

but

1 · max{ [0 2; 2 3] } + (−1) · max{ [6 5; 4 7] } = 3 − 7 = −4

The two results differ, so the max operator is nonlinear.
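The same counterexample, checked numerically (a sketch written for this text):

import numpy as np

f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 5], [4, 7]])
a, b = 1, -1

lhs = np.max(a * f1 + b * f2)            # H[a f1 + b f2] = -2
rhs = a * np.max(f1) + b * np.max(f2)    # a H[f1] + b H[f2] = -4
print(lhs, rhs, lhs == rhs)              # -2 -4 False -> max is nonlinear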

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise:

g(x, y) = f(x, y) + η(x, y)

where f(x, y) is the noiseless image, and η(x, y) is noise that is uncorrelated and has 0 average value.

For a random variable z with mean m, E[(z − m)²] is the variance (E(·) is the expected value). The covariance of two random variables z₁ and z₂ is defined as E[(z₁ − m₁)(z₂ − m₂)]. The two random variables are uncorrelated when their covariance is 0.


Objective: reduce the noise by averaging a set of K noisy images g_i(x, y) (a technique frequently used in image enhancement):

ḡ(x, y) = (1/K) Σ_{i=1}^{K} g_i(x, y)

If the noise satisfies the properties stated above, we have

E{ḡ(x, y)} = f(x, y),   σ²_ḡ(x, y) = (1/K) σ²_η(x, y)

where E{ḡ(x, y)} is the expected value of ḡ, and σ²_ḡ(x, y) and σ²_η(x, y) are the variances of ḡ and η, respectively. The standard deviation (square root of the variance) at any point in the average image is

σ_ḡ(x, y) = (1/√K) σ_η(x, y)


As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because E{ḡ(x, y)} = f(x, y), this means that ḡ(x, y) approaches f(x, y) as the number of noisy images used in the averaging process increases.

An important application of image averaging is in the field of astronomy, where imaging under very low light levels frequently causes sensor noise to render single images virtually useless for analysis. Figure 3(a) shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels. Figures 3(b)–(f) show the result of averaging 5, 10, 20, 50, and 100 images, respectively.


Fig. 3 – (a) Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise; (b)–(f) results of averaging 5, 10, 20, 50, and 100 noisy images.
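A quick simulation of the 1/√K law (synthetic data; the flat test image and the noise level are assumptions chosen for illustration):

import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 100.0)            # noiseless test image

for K in (1, 5, 20, 100):
    noisy = f + rng.normal(0.0, 64.0, size=(K, 64, 64))   # K noisy copies
    g_bar = noisy.mean(axis=0)
    # empirical standard deviation shrinks like 64 / sqrt(K)
    print(K, round(g_bar.std(), 1), round(64 / np.sqrt(K), 1))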


A frequent application of image subtraction is in the enhancement of differences between images.

Fig. 4 – (a) Infrared image of the Washington, D.C. area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images.

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between images (a) and (b); black (0) values in Figure 4(c) indicate locations where there is no difference between images (a) and (b).
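The LSB experiment in a few lines of NumPy (illustrative; any 8-bit array works):

import numpy as np

a = np.arange(256, dtype=np.uint8).reshape(16, 16)   # stand-in 8-bit image
b = a & 0xFE                                         # zero the least significant bit
diff = a - b                                         # 0 or 1 at every pixel
print(np.unique(diff))                               # [0 1]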


Mask mode radiography:

g(x, y) = f(x, y) − h(x, y)

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images, called live images (denoted f(x, y)), of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium.

In g(x, y) we find the differences between h and f as enhanced detail. With images captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries in the area being observed.

Fig. 5 – Angiography, subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c), enhanced.

An important application of image multiplication (and division) is shading correction. Suppose that an imaging sensor produces images in the form

g(x, y) = f(x, y) h(x, y)

where f(x, y) is the "perfect image" and h(x, y) is the shading function. When the shading function is known,

f(x, y) = g(x, y) / h(x, y)

If h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern often can be estimated from the image itself.
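A synthetic shading-correction sketch (the smooth multiplicative shading field below is an assumption made up for the demo):

import numpy as np

x = np.linspace(0.4, 1.0, 128)
h = np.outer(x, x)                     # smooth multiplicative shading field
f = np.full((128, 128), 200.0)         # "perfect" flat image
g = f * h                              # what the sensor delivers

f_hat = g / h                          # correction with the known/estimated shading
print(np.allclose(f_hat, f))           # True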

Fig. 6 – Shading correction: (a) shaded image of a tungsten filament, magnified approximately 130 times; (b) shading pattern; (c) corrected image.

Another use of image multiplication is in masking, also called region-of-interest (ROI), operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of the ROI can be arbitrary, although usually it is rectangular.

Fig. 7 – (a) Digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b).
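ROI masking as an array product (toy 0/1 mask, made up for illustration):

import numpy as np

img = np.random.default_rng(1).integers(0, 256, size=(8, 8), dtype=np.uint16)
mask = np.zeros((8, 8), dtype=np.uint16)
mask[2:5, 3:7] = 1                       # rectangular ROI of 1's

roi = img * mask                         # keeps the ROI, zeroes everything else
print((roi[mask == 0] == 0).all())       # True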


In practice, most images are displayed using 8 bits, so image values are expected in the range [0, 255]; for TIFF or JPEG images, conversion to this range is automatic, although the conversion depends on the system used. The difference of two images can produce an image with values in the range [−255, 255], and the addition of two images an image in the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure: compute

f_m = f − min(f)

f_s = K · f_m / max(f_m)

which yields 0 ≤ f_s ≤ K (K = 255 for 8-bit images).
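The full-range rescaling as a function (a sketch; K defaults to 255):

import numpy as np

def rescale(f, K=255):
    """Shift to zero, then scale so the values span [0, K]."""
    fm = f - f.min()
    return K * fm / fm.max()

d = np.array([[-255.0, 0.0], [100.0, 255.0]])   # e.g., a difference image
print(rescale(d))                               # values now span [0, 255]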


Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations:

- single-pixel operations

- neighborhood operations

- geometric spatial transformations

Single-pixel operations change the values of intensity for the individual pixels:

s = T(z)

where z is the intensity of a pixel in the original image and s is the intensity of the corresponding pixel in the processed image.


Neighborhood operations

Let S_xy denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y) based on the values of the intensities of the points in S_xy. For example, if S_xy is a rectangular neighborhood of size m × n centered at (x, y), we can assign the new value of intensity by computing the average value of the pixels in S_xy:

g(x, y) = (1 / (m n)) Σ_{(r, c) ∈ S_xy} f(r, c)

The net effect is to perform local blurring in the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
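A straightforward sketch of the neighborhood average (pure-NumPy sliding window; clipping the window at the borders is an assumption about border policy):

import numpy as np

def local_mean(f, m=3, n=3):
    """Average over an m x n neighborhood centered at each pixel."""
    M, N = f.shape
    g = np.empty_like(f, dtype=float)
    for x in range(M):
        for y in range(N):
            r0, r1 = max(0, x - m // 2), min(M, x + m // 2 + 1)
            c0, c1 = max(0, y - n // 2), min(N, y + n // 2 + 1)
            g[x, y] = f[r0:r1, c0:c1].mean()
    return g

f = np.zeros((7, 7)); f[3, 3] = 9.0
print(local_mean(f)[3, 3])   # 1.0: the impulse is blurred over the 3 x 3 window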


Geometric spatial transformations and image registration

Geometric transformations modify the spatial relationship between pixels in an image. These transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules).

A geometric transformation consists of 2 basic operations:

1. a spatial transformation of coordinates

2. intensity interpolation, which assigns intensity values to the spatially transformed pixels

The coordinate system transformation:

(x, y) = T[(v, w)]

(v, w) – pixel coordinates in the original image

(x, y) – pixel coordinates in the transformed image


For example, T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform:

[x y 1] = [v w 1] T = [v w 1] [t₁₁ t₁₂ 0; t₂₁ t₂₂ 0; t₃₁ t₃₂ 1]

that is,

x = t₁₁ v + t₂₁ w + t₃₁,   y = t₁₂ v + t₂₂ w + t₃₂   (AT)

This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3 × 3 matrix equal to the matrix product of the scaling, rotation, and translation matrices from Table 1.
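Composing the 3 × 3 matrices with the row-vector convention above (the scale, angle, and shift values are arbitrary illustrative choices):

import numpy as np

def scale(cx, cy):
    return np.array([[cx, 0, 0], [0, cy, 0], [0, 0, 1.0]])

def rotate(theta):   # rotation about the origin
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1.0]])

def translate(tx, ty):
    return np.array([[1.0, 0, 0], [0, 1.0, 0], [tx, ty, 1.0]])

T = scale(2, 2) @ rotate(np.pi / 2) @ translate(10, 0)   # resize, rotate, move
x, y, _ = np.array([1.0, 0.0, 1.0]) @ T                  # [v w 1] T
print(x, y)                                              # approximately (10, 2)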

Table 1 – Affine transformations.

The preceding transformations relocate pixels on an image to new locations. To complete the process, we have to assign intensity values to those locations. This task is done using intensity interpolation (nearest neighbor, bilinear, or bicubic interpolation).

In practice, we can use equation (AT) in two basic ways.

Forward mapping: scan the pixels of the input image at (v, w) and compute the new spatial location (x, y) of the corresponding pixel in the new image using (AT) directly.

Problems:

- intensity assignment when 2 or more pixels in the original image are transformed to the same location in the output image

- some output locations have no correspondent in the original image (no intensity assignment)


Inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image:

(v, w) = T⁻¹[(x, y)]

Then interpolate among the nearest input pixels to determine the intensity of the output pixel value. Inverse mappings are more efficient to implement than forward mappings, and they are used in numerous commercial implementations of spatial transformations (MATLAB, for example).
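A minimal inverse-mapping warp with nearest-neighbor interpolation (written for this text; it scans the output grid, inverts the matrix, and drops out-of-range source coordinates):

import numpy as np

def warp_inverse(img, T):
    """Apply [x y 1] = [v w 1] T by scanning the OUTPUT grid."""
    Tinv = np.linalg.inv(T)
    M, N = img.shape
    out = np.zeros_like(img)
    for x in range(M):
        for y in range(N):
            v, w, _ = np.array([x, y, 1.0]) @ Tinv    # back to input coordinates
            vi, wi = int(round(v)), int(round(w))     # nearest-neighbor interpolation
            if 0 <= vi < M and 0 <= wi < N:
                out[x, y] = img[vi, wi]
    return out

img = np.eye(8)
T = np.array([[1.0, 0, 0], [0, 1.0, 0], [2.0, 1.0, 1.0]])   # translate by (2, 1)
print(warp_inverse(img, T))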


Image registration – aligning two or more images of the same scene.

In image registration, we have available the input and output images, but the specific transformation that produced the output image from the input is generally unknown. The problem is to estimate the transformation function and then use it to register the two images:

- it may be of interest to align (register) two or more images taken at approximately the same time, but using different imaging systems (an MRI scanner and a PET scanner, for example)

- or to align images of a given location, taken by the same instrument at different moments in time (satellite images)

One approach: use tie points (also called control points), which are corresponding points whose locations are known precisely in the input and reference images.


How to select tie points:

- interactively

- with algorithms that try to detect these points automatically

- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors; these objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points, both in the input image and in the reference image. A simple model, based on a bilinear approximation, is

x = c₁ v + c₂ w + c₃ v w + c₄
y = c₅ v + c₆ w + c₇ v w + c₈

where (v, w) and (x, y) are the coordinates of the tie points (we get an 8 × 8 linear system for the coefficients c_i).
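Setting up and solving that 8 × 8 system (the tie-point coordinates below are made-up illustrative values):

import numpy as np

# 4 tie points: (v, w) in the input image, (x, y) in the reference image
vw = np.array([[0, 0], [0, 10], [10, 0], [10, 10]], dtype=float)
xy = np.array([[1, 2], [1, 13], [12, 2], [12, 14]], dtype=float)

# x = c1 v + c2 w + c3 v w + c4 ;  y = c5 v + c6 w + c7 v w + c8
A = np.zeros((8, 8))
b = np.zeros(8)
for k, ((v, w), (x, y)) in enumerate(zip(vw, xy)):
    A[2 * k, 0:4] = [v, w, v * w, 1.0]       # row for the x equation
    A[2 * k + 1, 4:8] = [v, w, v * w, 1.0]   # row for the y equation
    b[2 * k], b[2 * k + 1] = x, y

c = np.linalg.solve(A, b)
print(c.round(3))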


When 4 tie points are insufficient to obtain satisfactory registration, an approach used frequently is to select a larger number of tie points and, using this new set, subdivide the image into rectangular regions marked by groups of 4 tie points. On each subregion marked by 4 tie points, we apply the transformation model described above. The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometric distortion.

(a) Reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c).

Probabilistic Methods

Let z_k, k = 0, 1, …, L−1, denote the values of all possible intensities in an M × N digital image, and let p(z_k) be the probability that the intensity level z_k occurs in the given image:

p(z_k) = n_k / (M N)

where n_k is the number of times that intensity z_k occurs in the image (M N is the total number of pixels in the image). Clearly,

Σ_{k=0}^{L−1} p(z_k) = 1

The mean (average) intensity of an image is given by

m = Σ_{k=0}^{L−1} z_k p(z_k)


The variance of the intensities is

σ² = Σ_{k=0}^{L−1} (z_k − m)² p(z_k)

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

μ_n(z) = Σ_{k=0}^{L−1} (z_k − m)ⁿ p(z_k)

(so μ₀(z) = 1, μ₁(z) = 0, μ₂(z) = σ²)

μ₃(z) > 0: the intensities are biased to values higher than the mean

μ₃(z) < 0: the intensities are biased to values lower than the mean


μ₃(z) = 0: the intensities are distributed approximately equally on both sides of the mean

Fig. 1 – (a) Low contrast; (b) medium contrast; (c) high contrast.

Figure 1(a) – standard deviation 14.3 (variance = 204.5); Figure 1(b) – standard deviation 31.6 (variance = 998.6); Figure 1(c) – standard deviation 49.2 (variance = 2420.6).
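Computing these statistics from an image's normalized histogram (a sketch written for this text):

import numpy as np

def moments(img, L=256):
    z = np.arange(L)
    n = np.bincount(img.ravel(), minlength=L)
    p = n / img.size                       # p(z_k) = n_k / (M N)
    m = (z * p).sum()                      # mean
    mu2 = ((z - m) ** 2 * p).sum()         # variance
    mu3 = ((z - m) ** 3 * p).sum()         # third moment (skew direction)
    return m, mu2, mu3

img = np.random.default_rng(2).integers(0, 256, size=(64, 64))
m, var, mu3 = moments(img)
print(round(m, 1), round(var, 1), round(np.sqrt(var), 1))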


Intensity Transformations and Spatial Filtering

g(x, y) = T[f(x, y)]

f(x, y) – input image; g(x, y) – output image; T – an operator on f, defined over a neighborhood of (x, y).

The neighborhood of the point (x, y), S_xy, usually is rectangular, centered on (x, y), and much smaller in size than the image.


Spatial filtering: the operator T (the neighborhood and the operation applied on it) is called a spatial filter (also: spatial mask, kernel, template, or window).

When S_xy reduces to the single point (x, y), T becomes an intensity (gray-level, or mapping) transformation function:

s = T(r)

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2, left: T produces an output image of higher contrast than the original, by darkening the intensity levels below k and brightening the levels above k – this technique is called contrast stretching.

Fig. 2 – Intensity transformation functions: left – contrast stretching; right – thresholding function.


Figure 2, right: T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image negatives

The negative of an image with intensity levels in [0, L−1] is obtained using the function

s = T(r) = L − 1 − r

- the equivalent of a photographic negative

- a technique suited for enhancing white or gray detail embedded in dark regions of an image

Original image; negative image.

Log transformations

s = T(r) = c log(1 + r),   c – constant, r ≥ 0

Figure: some basic intensity transformation functions.


This transformation maps a narrow range of low intensity values in the input into a wider range of output values. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values; the opposite is true of the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a) – intensity values in the range 0 to 1.5 × 10⁶. Figure 4(b) – the log transformation of Figure 4(a), with c = 1: the range becomes 0 to 6.2.

Fig. 4 – (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.


Power-Law (Gamma) Transformations

s = T(r) = c r^γ,   c, γ – positive constants

(sometimes written s = c (r + ε)^γ, to account for an offset when the input is zero)

Figure: plots of the gamma transformation for different values of γ (c = 1).


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. The curves with γ > 1 have the opposite effect of those generated with γ < 1.

c = γ = 1: identity transformation.

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
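The three point transformations side by side, on an 8-bit scale (a sketch; c = 1 and the γ value are illustrative choices):

import numpy as np

L = 256
r = np.arange(L, dtype=float)

negative = (L - 1) - r                         # s = L - 1 - r
log_t    = (L - 1) * np.log1p(r) / np.log(L)   # s = c log(1 + r), scaled to [0, L-1]
gamma_t  = (L - 1) * (r / (L - 1)) ** 0.4      # s = c r^gamma, here gamma = 0.4

for s in (negative, log_t, gamma_t):
    print(s[0], round(s[-1], 1))               # endpoints of each mapping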

(a) Aerial image; (b)–(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively.

Piecewise-Linear Transformation Functions

Contrast stretching – a process that expands the range of intensity levels in an image, so that it spans the full intensity range of the recording tool or display device.

Fig. 5 – (a) Piecewise-linear transformation function; (b) low-contrast image; (c) result of contrast stretching; (d) result of thresholding.


With (r₁, s₁) and (r₂, s₂) the control points, the transformation is

T(r) = (s₁ / r₁) r,                                        for r ∈ [0, r₁]
T(r) = s₁ + ((s₂ − s₁) / (r₂ − r₁)) (r − r₁),              for r ∈ [r₁, r₂]
T(r) = s₂ + ((L − 1 − s₂) / (L − 1 − r₂)) (r − r₂),        for r ∈ [r₂, L − 1]
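The same piecewise-linear map as a vectorized function (np.interp handles the three segments; the control points below are example values):

import numpy as np

def contrast_stretch(img, r1, s1, r2, s2, L=256):
    """Piecewise-linear map through (0,0), (r1,s1), (r2,s2), (L-1,L-1)."""
    return np.interp(img, [0, r1, r2, L - 1], [0, s1, s2, L - 1])

img = np.array([[40, 90], [140, 200]], dtype=float)
print(contrast_stretch(img, r1=90, s1=30, r2=140, s2=220))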


(r₁, s₁) = (r₂, s₂): identity transformation (no change)

r₁ = r₂, s₁ = 0, s₂ = L − 1: thresholding function

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c) – contrast stretching, obtained by setting the parameters (r₁, s₁) = (r_min, 0) and (r₂, s₂) = (r_max, L − 1), where r_min and r_max denote the minimum and maximum gray levels in the image, respectively. Thus, the transformation function stretched the levels linearly from their original range to the full range [0, L − 1].

Figure 5(d) – the thresholding function was used, with (r₁, s₁) = (m, 0) and (r₂, s₂) = (m, L − 1), where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.


Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches to intensity-level slicing:

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities – the first transformation function shown below

2. brighten (or darken) the desired range of intensities, but leave all other intensities in the image unchanged – the second transformation function shown below


Figure 6 (left) – an aortic angiogram near the kidney area. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the scale of intensities. This type of enhancement produces a binary image, which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example).

Transformation functions: one highlights intensity range [A, B] and reduces all other intensities to a lower level; the other highlights range [A, B] and preserves all other intensities.

In Figure 6 (right), the second technique was used: a band of intensities around the mean intensity of the mid-gray image was set to black; the other intensities remain unchanged.

Fig. 6 – Aortic angiogram and intensity-sliced versions.


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2.

This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit; plane 8 – the highest-order bit).

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all the intensities between 0 and 127 to 0, and all levels between 128 and 255 to 1. The bit-slicing technique is useful for analyzing the relative importance of each bit in the image, which helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
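Extracting the planes with shifts and masks (a sketch; works on any uint8 array):

import numpy as np

img = np.arange(256, dtype=np.uint8).reshape(16, 16)   # stand-in 8-bit image

planes = [(img >> k) & 1 for k in range(8)]            # plane k holds bit k
print(np.array_equal(planes[7], (img >= 128).astype(np.uint8)))   # True: 8th plane = threshold at 128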



Rods distributed over al the retina surface several rods are contected toa single nerve not specialized in detail vision serve to give a general overall picture of the filed of viewnot involved in color visionsensitive to low level of illumination

Blind spot region without receptors

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image formation in the eye

Ordinary photographic camera the lens has fixed focal length focusing atvarious distances is done by modifying the distance between the lens and the image plane (were the film or imaging chip are located)

Human eye the distance between the lens and the retina (the imaging region)is fixed the focal length needed to achieve proper focus is obtained by varyingthe shape of the lens (the fibers in the ciliary body accomplish this flattening or thickening the lens for distant or near objects respectively

distance between lens and retina along visual axix = 17 mm

range of focal length = 14 mm to 17 mm

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Illustration of Mach band effectPercieved intensityis not a simple function of actual intensity

Digital Image ProcessingDigital Image Processing

Week 1Week 1

All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Optical illusions

Digital Image ProcessingDigital Image Processing

Week 1Week 1

ALBASTRU BLUEVERDE GREENGALBEN YELLOW ROSU REDPORTOCALIU ORANGEGALBEN YELLOWVISINIU BURGUNDYALB WHITE

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Lightbullachromatic or monochromatic light - light that is void of color the attribute of such light is its intensity or amount

gray level is used to describe monochromatic intensitybecause it ranges from black to grays and to white bull chromatic light spans the electromagnetic energy spectrumfrom approximately 043 to 079 μm

quantities that describe the quality of a chromatic light source radiance

the total amount of energy that flows from the light source and it isusually measured in watts (W) luminance

measured in lumens (lm) gives a measure of the amount of energy an observer perceives from a light source

Digital Image ProcessingDigital Image Processing

Week 1Week 1

For example light emitted from a source operating in the far infraredregion of the spectrum could have significant energy (radiance) butan observer would hardly perceive it its luminance would be almostzero

brightnessa subjective descriptor of light perception that is practicallyimpossible to measure It embodies the achromatic notion ofintensity and is one of the key factors in describing color sensation

Digital Image ProcessingDigital Image Processing

Week 1Week 1

the physical meaning is determined by the source of the image

( )f D f x y

Image generated from a physical process f(xy) ndash proportional to the energy radiated by the physical source

f(xy) ndash characterized by two components

i(xy) = illumination component the amount of source illumination incident on the scene being viewed

r(xy) = reflectance component the amount of illumination reflected by the objects in the scene

( ) ( ) ( )

0 ( ) 0 ( ) 1

f x y i x y r x y

i x y r x y

Digital Image ProcessingDigital Image Processing

Week 1Week 1

r(xy)=0 - total absorption r(xy)=1 - total reflectance

i(xy) ndash determined by the illumination source

r(xy) ndash determined by the characteristics of the imaged objects

is called gray (or intensity) scale

In practice

min 0 0 max min min min max max max( ) L l f x y L L i r L i r

indoor values without additional illuminationmin max10 1000L L

black whitemin max0 1 0 1 0 1L L L L l l L

min maxL L

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image Processing

Week 1

Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed

scene

converting a continuous image f to digital form

- digitizing (x y) is called sampling

- digitizing f(x y) is called quantization

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Continuous image projected onto a sensor array Result of image sampling and quantization

Digital Image Processing

Week 1

Representing Digital Images (xy) x = 01hellipM-1 y = 01hellipN-1 ndash spatial variables or spatial coordinates

(00) (01) (0 1)(10) (11) (1 1)

( )

( 10) ( 11) ( 1 1)

f f f Nf f f N

f x y

f M f M f M N

image element pixel

00 01 0 1

10 11 1 1

10 11 1 1

( ) ( )

N

i jN M N

i j

M M M N

a a aa f x i y j f i ja a a

Aa

a a a

f(00) ndash the upper left corner of the image

Digital Image Processing

Week 1

M N ge 0 L=2k

[0 1]i j i ja a L

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system Upper limit ndash determined by saturation lower limit - noise

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Number of bits required to store a digitized image

for 2 b M N k M N b N k

When an image can have 2k intensity levels the image is referred as a k-bit image 256 discrete intensity values ndash 8-bit image

Digital Image Processing

Week 1

Spatial and Intensity Resolution Spatial resolution ndash the smallest discernible detail in an image

Measures line pairs per unit distance dots (pixels) per unit distance

Image resolution = the largest number of discernible line pairs per unit distance

(eg 100 line pairs per mm)

Dots per unit distance are commonly used in printing and publishing

In US the measure is expressed in dots per inch (dpi)

(newspapers are printed with 75 dpi glossy brochures at 175 dpi)

Intensity resolution ndash the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations

L=2k ndash most common k = 8

Intensity resolution in practice is given by k (number of bits used to quantize intensity)

Digital Image Processing

Week 1

Fig1 Reducing spatial resolution 1250 dpi(upper left) 300 dpi (upper right)

150 dpi (lower left) 72 dpi (lower right)

Digital Image Processing

Week 1

Reducing the number of gray levels 256 128 64 32

Digital Image Processing

Week 1

Reducing the number of gray levels 16 8 4 2

Digital Image Processing

Week 1

Image Interpolation - used in zooming shrinking rotating and geometric corrections

Shrinking zooming ndash image resizing ndash image resampling methods

Interpolation is the process of using known data to estimate values at unknown locations

Suppose we have an image of size 500 times 500 pixels that has to be enlarged 15 times to

750times750 pixels One way to do this is to create an imaginary 750 times 750 grid with the

same spacing as the original and then shrink it so that it fits exactly over the original

image The pixel spacing in the 750 times 750 grid will be less than in the original image

Problem assignment of intensity-level in the new 750 times 750 grid

Nearest neighbor interpolation assign for every point in the new grid (750 times 750) the

intensity of the closest pixel (nearest neighbor) from the oldoriginal grid (500 times 500)

This technique has the tendency to produce undesirable effects like severe distortion of

straight edges

Digital Image Processing

Week 1

Bilinear interpolation ndash assign for the new (x y) location the following intensity ( )v x y a x b y c x y d

where the four coefficients are determined from the 4 equations in 4 unknowns that can

be written using the 4 nearest neighbors of point (x y)

Bilinear interpolation gives much better results than nearest neighbor interpolation with a

modest increase in computational effort

Bicubic interpolation ndash assign for the new (x y) location an intensity that involves the 16

nearest neighbors of the point 3 3

0 0

( ) i ji j

i jv x y c x y

The coefficients cij are obtained solving a 16x16 linear system

intensity levels of the 16 nearest neighbors of 3 3

0 0

( )i ji j

i jc x y x y

Digital Image Processing

Week 1

Generally bicubic interpolation does a better job of preserving fine detail than the

bilinear technique Bicubic interpolation is the standard used in commercial image editing

programs such as Adobe Photoshop and Corel Photopaint

Figure 2 (a) is the same as Fig 1 (d) which was obtained by reducing the resolution of

the 1250 dpi in Fig 1(a) to 72 dpi (the size shrank from 3692 times 2812 to 213 times 162) and

then zooming the reduced image back to its original size To generate Fig 1(d) nearest

neighbor interpolation was used (both for shrinking and zooming)

Figures 2(b) and (c) were generated using the same steps but using bilinear and bicubic

interpolation respectively Figures 2(d)+(e)+(f) were obtained by reducing the resolution

from 1250 dpi to 150 dpi (instead of 72 dpi)

Digital Image Processing

Week 1

Fig 2 ndash Interpolation examples for zooming and shrinking (nearest neighbor linear bicubic)

Digital Image Processing

Week 1

Neighbors of a Pixel

A pixel p at coordinates (x y) has 4 horizontal and vertical neighbors horizontal vertical( 1 ) ( 1 ) ( 1) ( 1)x y x y x y x y

This set of pixels called the 4-neighbors of p denoted by N4 (p)

The 4 diagonal neighbors of p have coordinates ( 1 1) ( 1 1) ( 1 1) ( 1 1)x y x y x y x y

and are denoted ND(p)

The horizontal vertical and diagonal neighbors are called the 8-neighbors of p denoted

N8 (p)

If (xy) is on the border of the image some of the neighbor locations in ND(p) and N8(p)

fall outside the image

Digital Image Processing

Week 1

Adjacency Connectivity Regions Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image V 01 (V=0 V=1)

- in a gray-scale image with 256 possible gray-levels V can be any subset of 0255

We consider 3 types of adjacency

(a) 4-adjacency two pixels p and q with values from V are 4-adjacent if 4( )q N p

(b) 8-adjacency two pixels p and q with values from V are 8-adjacent if 8( )q N p

(c) m-adjacency (mixed adjacency) two pixels p and q with values from V are

m-adjacent if

4( )q N p or

( )Dq N p and the set 4 4( ) ( )N p N q has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency It is introduced to eliminate the

ambiguities that often arise when 8-adjacency is used Consider the example

Digital Image Processing

Week 1

binary image

0 1 1 0 1 1 0 1 1

1 0 1 0 0 1 0 0 1 0

0 0 1 0 0 1 0 0 1

V

The three pixels at the top (first line) in the above example show multiple (ambiguous)

8-adjacency as indicated by the dashed lines This ambiguity is removed by using

m-adjacency

Digital Image Processing

Week 1

A (digital) path (or curve) from pixel p with coordinates (xy) to q with coordinates (st)

is a sequence of distinct pixels with coordinates

and are adjacent 0 0 1 1

1 1

( ) ( ) ( ) ( ) ( )( ) ( ) 12

n n

i i i i

x y x y x y x y s tx y x y i n

The length of the path is n If 0 0( ) ( )n nx y x y the path is closed

Depending on the type of adjacency considered the paths are 4- 8- or m-paths

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image R is a region of the image if R is a connected set

Two regions R1 and R2 are said to be adjacent if 1 2R R form a connected set Regions

that are not adjacent are said to be disjoint When referring to regions only 4- and

8-adjacency are considered

Digital Image Processing

Week 1

Suppose that an image contains K disjoint regions 1 kR k K none of which

touches the image border

the complement of 1

( )K

cu k u u

k

R R R R

We call all the points in Ru the foreground of the image and the points in ( )cuR the

background of the image

The boundary (border or contour) of a region R is the set of points that are adjacent to

points in the complement of R (R)c The border of an image is the set of pixels in the

region that have at least one background neighbor This definition is referred to as the

inner border to distinguish it from the notion of outer border which is the corresponding

border in the background

Digital Image Processing

Week 1

Distance measures

For pixels p q and z with coordinates (xy) (st) and (vw) respectively D is a distance

function or metric if

(a) D(p q) ge 0 D(p q) = 0 iff p=q

(b) D(p q) = D(q p)

(c) D(p z) le D(p q) + D(q z)

The Euclidean distance between p and q is defined as 1

2 2 2 22( ) ( ) ( ) ( ) ( )eD p q x s y t x s y t

The pixels q for which ( )eD p q r are the points contained in a disk of radius r

centered at (x y)

Digital Image Processing

Week 1

The D4 distance (also called city-block distance) between p and q is defined as

4( ) | | | |D p q x s y t

The pixels q for which 4( )D p q r form a diamond centered at (xy)

4

22 1 2

2 2 1 0 1 22 1 2

2

D

The pixels with D4 = 1 are the 4-neighbors of (x y)

The D8 distance (called the chessboard distance) between p and q is defined as

8( ) max| | | |D p q x s y t

The pixels q for which 8( )D p q r form a square centered at (x y)

Digital Image Processing

Week 1

8

2 2 2 2 22 1 1 1 2

2 2 1 0 1 22 1 1 1 22 2 2 2 2

D

The pixels with D8 = 1 are the 8-neighbors of (x y)

D4 and D8 distances are independent of any paths that might exist between p and q

because these distances involve only the coordinates of the point

Digital Image Processing

Week 1

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

11 12 11 12

21 22 21 22

a a b ba a b b

Array product

11 12 11 12 11 11 12 12

21 22 21 22 21 21 22 21

a a b b a b a ba a b b a b a b

Matrix product

11 12 11 12 11 11 12 21 11 12 12 21

21 22 21 22 21 11 22 21 21 12 22 22

a a b b a b a b a b a ba a b b a b a b a b a b

We assume array operations unless stated otherwise

Digital Image Processing

Week 1

Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether it is

linear or nonlinear

( ) ( )H f x y g x y

H is said to be a linear operator if

images1 2 1 2

1 2

( ) ( ) ( ) ( )

H a f x y b f x y a H f x y b H f x y

a b f f

Example of nonlinear operator

the maximum value of the pixels of image max ( )H f f x y f

1 2

0 2 6 5 1 1

2 3 4 7f f a b

Digital Image Processing

Week 1

1 2

0 2 6 5 6 3max max 1 ( 1) max 2

2 3 4 7 2 4a f b f

0 2 6 51 max ( 1) max 3 ( 1)7 4

2 3 4 7

Arithmetic Operations in Image Processing

Let g(xy) denote a corrupted image formed by the addition of noise ( ) ( ) ( )g x y f x y x y

f(xy) ndash the noiseless image η(xy) the noise uncorrelated and has 0 average value

For a random variable z with mean m E[(z-m)2] is the variance (E( ∙ ) is the expected

value) The covariance of two random variables z1 and z2 is defined as E[(z1-m1) (z2-m2)]

The two random variables are uncorrelated when their covariance is 0

Digital Image Processing

Week 1

Objective reduce noise by adding a set of noisy images ( )ig x y (technique frequently

used in image enhancement)

1

1( ) ( )K

ii

g x y g x yK

If the noise satisfies the properties stated above we have

2 2( ) ( )

1( ) ( ) g x y x yE g x y f x yK

( ( ))E g x y is the expected value of g and and 2 2( ) ( )g x y x y are the variances of

and g respectively The standard deviation (square root of the variance) at any point in

the average image is

( ) ( )1

g x y x yK

Digital Image Processing

Week 1

As K increases the variability (as measured by the variance or the standard deviation) of

the pixel values at each location (x y) decreases Because ( ) ( )E g x y f x y this

means that ( )g x y approaches f(x y) as the number of noisy images used in the

averaging process increases

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis Figure 226(a) shows an 8-bit image in which corruption

was simulated by adding to it Gaussian noise with zero mean and a standard deviation of

64 intensity levels Figures 226(b)-(f) show the result of averaging 5 10 20 50 and 100

images respectively

Digital Image Processing

Week 1

Fig 3 Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner) Results of averaging 5 10 20 50

100 noisy images

a b c d e f

Digital Image Processing

Week 1

A frequent application of image subtraction is in the enhancement of differences between

images

(a) (b) (c)

Fig 4 (a) Infrared image of Washington DC area (b) Image obtained from (a) by setting to zero the least

significant bit of each pixel (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a) The two images seem almost the same Figure 4(c) is the difference between

images (a) and (b) Black (0) values in Figure (c) indicate locations where there is no

difference between images (a) and (b)

Digital Image Processing

Week 1

Mask mode radiography ( ) ( ) ( )g x y f x y h x y

h(x y) the mask is an X-ray image of a region of a patientrsquos body captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source The procedure consists of injecting an X-ray contrast medium into the patientrsquos

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed

Digital Image Processing

Week 1

a b c d Fig 5 ndash Angiography ndash subtraction example (a) ndash mask image (b) ndash live image (c) ndash difference between (a) and (b) (d) - image (c) enhanced

Digital Image Processing

Week 1

An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form ( ) ( ) ( )g x y f x y h x y

f(x y) ndash the ldquoperfect imagerdquo h(x y) ndash the shading function

When the shading function is known

( )( )( )

g x yf x yh x y

h(x y) is unknown but we have access to the imaging system we can obtain an

approximation to the shading function by imaging a target of constant intensity When the

sensor is not available often the shading pattern can be estimated from the image

Digital Image Processing

Week 1

(a) (b) (c)

Fig 6 Shading correction (a) ndash Shaded image of a tungsten filament magnified 130 times (b) - shading pattern (c) corrected image

Digital Image Processing

Week 1

Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1rsquos (white) in the ROI and 0rsquos elsewhere There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

(a) (b) (c)

Fig 7 (a) ndash digital dental X-ray image (b) - ROI mask for teeth with fillings (c) product of (a) and (b)

Digital Image Processing

Week 1

In practice, most images are displayed using 8 bits, so image values are expected to lie in the range [0, 255] (for TIFF and JPEG images the conversion to this range is automatic, but it depends on the system used). The difference of two images can produce values in the range [−255, 255], and the addition of two images values in the range [0, 510]. Many software packages simply set negative values to 0 and values greater than 255 to 255. A more appropriate procedure is to compute

f_m = f − min(f)   (values now start at 0)
f_s = K · f_m / max(f_m)   (values now span [0, K]; K = 255 for 8-bit displays)
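A small sketch of this rescaling (the example difference image is made up):

import numpy as np

def scale_to_range(f, K=255):
    # f_m = f - min(f); f_s = K * f_m / max(f_m), so the output spans [0, K]
    fm = f - f.min()
    return K * fm / fm.max()

diff = np.array([[-255.0, 0.0], [128.0, 255.0]])   # e.g. a difference of two images
print(scale_to_range(diff))                         # values now lie in [0, 255]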

Spatial Operations

Spatial operations are performed directly on the pixels of a given image. There are three categories of spatial operations:
• single-pixel operations
• neighborhood operations
• geometric spatial transformations

Single-pixel operations change the intensity values of individual pixels: s = T(z), where z is the intensity of a pixel in the original image and s is the intensity of the corresponding pixel in the processed image.

Neighborhood operations

Let Sxy denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y) based on the intensities of the points in Sxy. For example, if Sxy is a rectangular neighborhood of size m × n centered at (x, y), we can assign the new intensity by computing the average value of the pixels in Sxy:

g(x, y) = (1 / mn) · Σ_{(r, c) ∈ Sxy} f(r, c)

The net effect is local blurring of the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
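A direct (unoptimized) sketch of this local averaging; the function name, the edge-replication padding, and the assumption of odd m and n are my choices for illustration:

import numpy as np

def local_average(f, m, n):
    # mean of the m x n neighborhood Sxy around every pixel (m, n odd)
    padded = np.pad(f.astype(float), ((m // 2,), (n // 2,)), mode="edge")
    out = np.zeros(f.shape, dtype=float)
    M, N = f.shape
    for x in range(M):
        for y in range(N):
            out[x, y] = padded[x:x + m, y:y + n].mean()
    return out

img = np.random.randint(0, 256, (32, 32))
blurred = local_average(img, 3, 3)   # 3 x 3 box blur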

Geometric spatial transformations and image registration

Geometric transformations modify the spatial relationship between pixels in an image. These transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules).

A geometric transformation consists of 2 basic operations:
1. a spatial transformation of coordinates;
2. intensity interpolation, which assigns intensity values to the spatially transformed pixels.

The coordinate system transformation: (x, y) = T[(v, w)], where
(v, w) – pixel coordinates in the original image;
(x, y) – pixel coordinates in the transformed image.

For example, T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform:

[x y 1] = [v w 1] · T,  where  T = | t11  t12  0 |
                                   | t21  t22  0 |
                                   | t31  t32  1 |

that is, x = t11·v + t21·w + t31 and y = t12·v + t22·w + t32.   (AT)

This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3×3 matrix equal to the product of the scaling, rotation, and translation matrices from Table 1.

Table 1 – Affine transformations.
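A sketch of composing such matrices with NumPy, under the row-vector convention of equation (AT); the rotation sign convention and the sample numbers below are illustrative assumptions:

import numpy as np

def scale(cx, cy):
    return np.array([[cx, 0, 0], [0, cy, 0], [0, 0, 1]], dtype=float)

def rotate(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]], dtype=float)

def translate(tx, ty):
    return np.array([[1, 0, 0], [0, 1, 0], [tx, ty, 1]], dtype=float)

# resize, then rotate, then move: one combined 3x3 matrix
T = scale(1.5, 1.5) @ rotate(np.pi / 6) @ translate(20, 10)
x, y, _ = np.array([5.0, 7.0, 1.0]) @ T   # transform the point (v, w) = (5, 7)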

The preceding transformations relocate pixels to new locations. To complete the process, we have to assign intensity values to those locations; this is done by intensity interpolation (nearest neighbor, bilinear, or bicubic interpolation).

In practice, equation (AT) can be used in two basic ways.

Forward mapping: scan the pixels of the input image and, for each (v, w), compute the spatial location (x, y) of the corresponding pixel in the output image using (AT) directly. Problems:
- intensity assignment when 2 or more pixels of the input image are transformed to the same location in the output image;
- some output locations may have no correspondent in the input image (no intensity assignment).

Inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image as (v, w) = T⁻¹[(x, y)]; then interpolate among the nearest input pixels to determine the intensity of the output pixel. Inverse mappings are more efficient to implement than forward mappings and are used in numerous commercial implementations of spatial transformations (e.g., MATLAB).
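A minimal sketch of inverse mapping with nearest-neighbor interpolation, reusing the row-vector convention above (illustrative, not an optimized implementation):

import numpy as np

def warp_inverse_nn(src, T):
    # for each output pixel (x, y), find (v, w) = [x y 1] @ inv(T) in the input
    Tinv = np.linalg.inv(T)
    M, N = src.shape
    out = np.zeros_like(src)
    for x in range(M):
        for y in range(N):
            v, w, _ = np.array([x, y, 1.0]) @ Tinv
            vi, wi = int(round(v)), int(round(w))
            if 0 <= vi < M and 0 <= wi < N:   # locations with no correspondent stay 0
                out[x, y] = src[vi, wi]
    return out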

Image registration – aligning two or more images of the same scene.

In image registration the input and output images are available, but the specific transformation that produced the output image from the input is generally unknown. The problem is to estimate the transformation function and then use it to register the two images. For example:
- it may be of interest to align (register) two or more images taken at approximately the same time but with different imaging systems (e.g., an MRI scanner and a PET scanner);
- or to align images of a given location taken by the same instrument at different times (satellite images).

One approach solves the problem using tie points (also called control points): corresponding points whose locations are known precisely in the input and reference images.

How to select tie points?
- interactively;
- with algorithms that attempt to detect these points automatically;
- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors; these produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points.

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points both on the input image and on the reference image. A simple model based on a bilinear approximation is given by

x = c1·v + c2·w + c3·v·w + c4
y = c5·v + c6·w + c7·v·w + c8

where (v, w) and (x, y) are the coordinates of corresponding tie points (the 4 pairs give an 8×8 linear system for the ci).
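A sketch of fitting this bilinear model with NumPy; the tie-point coordinates are invented, and note that the 8×8 system decouples into two 4×4 systems, one per output coordinate:

import numpy as np

# corresponding tie points: (v, w) in the input image, (x, y) in the reference image
vw = np.array([[10, 12], [200, 15], [190, 210], [8, 205]], dtype=float)
xy = np.array([[13, 10], [205, 18], [196, 215], [11, 208]], dtype=float)

A = np.column_stack([vw[:, 0], vw[:, 1], vw[:, 0] * vw[:, 1], np.ones(len(vw))])
cx = np.linalg.solve(A, xy[:, 0])   # c1..c4
cy = np.linalg.solve(A, xy[:, 1])   # c5..c8

def map_point(v, w):
    basis = np.array([v, w, v * w, 1.0])
    return basis @ cx, basis @ cy   # estimated (x, y) for any (v, w)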

When 4 tie points are insufficient to obtain a satisfactory registration, a frequently used approach is to select a larger number of tie points and to subdivide the image into rectangular regions, each marked by a group of 4 tie points; on each such subregion, the transformation model described above is applied. The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometric distortion.

(a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c).

Probabilistic Methods

Let zi, i = 0, 1, …, L−1, denote the values of all possible intensities in an M×N digital image, and let p(zk) be the probability that intensity level zk occurs in the given image:

p(zk) = nk / (M·N)

where nk is the number of times intensity zk occurs in the image (M·N is the total number of pixels). Consequently,

Σ_{k=0}^{L−1} p(zk) = 1

The mean (average) intensity of an image is given by

m = Σ_{k=0}^{L−1} zk · p(zk)

The variance of the intensities is

σ² = Σ_{k=0}^{L−1} (zk − m)² · p(zk)

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

μn(z) = Σ_{k=0}^{L−1} (zk − m)ⁿ · p(zk)

(note that μ0(z) = 1, μ1(z) = 0, μ2(z) = σ²)

μ3(z) > 0: the intensities are biased toward values higher than the mean;
μ3(z) < 0: the intensities are biased toward values lower than the mean;

μ3(z) = 0: the intensities are distributed approximately equally on both sides of the mean.

Fig. 1 – (a) low contrast; (b) medium contrast; (c) high contrast.

Figure 1(a) – standard deviation 14.3 (variance = 204.5)
Figure 1(b) – standard deviation 31.6 (variance = 998.6)
Figure 1(c) – standard deviation 49.2 (variance = 2420.6)
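These statistics follow directly from the normalized histogram; a short NumPy sketch (the image is a placeholder):

import numpy as np

img = np.random.randint(0, 256, (128, 128))   # placeholder 8-bit image
L = 256
nk = np.bincount(img.ravel(), minlength=L)
p = nk / img.size                  # p(z_k) = n_k / (M*N); p.sum() == 1

z = np.arange(L)
m = np.sum(z * p)                  # mean intensity
var = np.sum((z - m) ** 2 * p)     # variance = 2nd central moment
mu3 = np.sum((z - m) ** 3 * p)     # 3rd moment: its sign shows the intensity bias
print(m, np.sqrt(var), mu3)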

Intensity Transformations and Spatial Filtering

g(x, y) = T[f(x, y)]

where f(x, y) is the input image, g(x, y) the output image, and T an operator on f defined over a neighborhood of (x, y). The neighborhood Sxy of the point (x, y) usually is rectangular, centered on (x, y), and much smaller in size than the image.

Spatial filtering: the operator T (the neighborhood together with the operation applied on it) is called a spatial filter (also: spatial mask, kernel, template, or window).

When Sxy reduces to the single point (x, y), T becomes an intensity (gray-level, or mapping) transformation function,

s = T(r)

where s and r denote, respectively, the intensities of g and f at (x, y).

Figure 2, left: T produces an output image of higher contrast than the original, by darkening the intensity levels below k and brightening the levels above k; this technique is called contrast stretching.

Fig. 2 – Intensity transformation functions: left, contrast stretching; right, a thresholding function.

Figure 2, right: T produces a binary output image; a mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L−1] is obtained using the function

s = T(r) = L − 1 − r

- the equivalent of a photographic negative;
- a technique suited for enhancing white or gray detail embedded in dark regions of an image.

Original image (left); negative image (right).

Log Transformations:

s = T(r) = c · log(1 + r),  c a constant, r ≥ 0

(Figure: some basic intensity transformation functions.)

This transformation maps a narrow range of low intensity values in the input into a wider range of output levels. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values; the opposite is true of the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a) – intensity values in the range 0 to 1.5 × 10⁶.
Figure 4(b) – log transformation of Figure 4(a) with c = 1 – range 0 to 6.2.

Fig. 4 – (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.

Power-Law (Gamma) Transformations

s = T(r) = c · r^γ,  with c and γ positive constants
(sometimes written s = c · (r + ε)^γ, to account for an offset when the input is zero)

(Figure: plots of the gamma transformation for different values of γ, with c = 1.)

Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, the opposite being true for higher input values. Curves with γ > 1 have the opposite effect of those generated with γ < 1; c = γ = 1 gives the identity transformation.

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
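A small sketch of the transformation (gamma correction applies it with γ chosen to cancel a device's response); the normalization of intensities to [0, 1] is my choice here:

import numpy as np

def gamma_transform(img, gamma, c=1.0):
    # s = c * r**gamma on intensities normalized to [0, 1]
    r = img.astype(float) / 255.0
    s = c * r ** gamma
    return np.clip(255.0 * s, 0, 255).astype(np.uint8)

ramp = np.linspace(0, 255, 256, dtype=np.uint8).reshape(16, 16)
brighter = gamma_transform(ramp, 0.4)   # gamma < 1: expands dark values
darker = gamma_transform(ramp, 3.0)     # gamma > 1: compresses dark values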

(a) aerial image; (b)–(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively.

Piecewise-Linear Transformation Functions

Contrast stretching: a process that expands the range of intensity levels in an image so that it spans the full intensity range of the recording tool or display device.

Fig. 5 – (a) piecewise-linear transformation function; (b) low-contrast 8-bit image; (c) result of contrast stretching; (d) result of thresholding.

T(r) =
  (s1 / r1) · r                                        for r ∈ [0, r1]
  s1 + ((s2 − s1) / (r2 − r1)) · (r − r1)              for r ∈ [r1, r2]
  s2 + ((L − 1 − s2) / (L − 1 − r2)) · (r − r2)        for r ∈ [r2, L − 1]

r1 = s1 and r2 = s2: identity transformation (no change);
r1 = r2, s1 = 0, s2 = L − 1: thresholding function.

Figure 5(b) shows an 8-bit image with low contrast. Figure 5(c) shows the result of contrast stretching, obtained by setting (r1, s1) = (rmin, 0) and (r2, s2) = (rmax, L − 1), where rmin and rmax denote the minimum and maximum gray levels in the image, respectively; the transformation function thus stretched the levels linearly from their original range to the full range [0, L − 1]. Figure 5(d) shows the result of using the thresholding function, with (r1, s1) = (m, 0) and (r2, s2) = (m, L − 1), where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.
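The full-range linear stretch of Fig. 5(c) takes a few lines of NumPy (a sketch, assuming rmax > rmin; the narrow-range test image is synthetic):

import numpy as np

def contrast_stretch(img, L=256):
    # (r1, s1) = (rmin, 0), (r2, s2) = (rmax, L-1): linear stretch to [0, L-1]
    r = img.astype(float)
    rmin, rmax = r.min(), r.max()
    s = (r - rmin) * (L - 1) / (rmax - rmin)
    return s.astype(np.uint8)

low_contrast = np.random.randint(90, 140, (64, 64))   # synthetic narrow-range image
stretched = contrast_stretch(low_contrast)            # now spans 0..255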

Intensity-level slicing: highlighting a specific range of intensities in an image.

There are two approaches to intensity-level slicing:
1. display in one value (white, for example) all the intensities in the range of interest, and in another (say, black) all other intensities – this produces a binary image;
2. brighten (or darken) the desired range of intensities, but leave all other intensities in the image unchanged.

Figure 6 (left): aortic angiogram near the kidney. The purpose of intensity slicing here is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique to a band near the top of the intensity scale. This type of enhancement produces a binary image, which is useful for studying the shape of the flow of the contrast substance (e.g., to detect blockages). In Figure 6 (right), the second technique was used: a band of intensities around the mean intensity was set to black, while the other intensities remain unchanged.

(The two corresponding transformation functions: one highlights the intensity range [A, B] and reduces all other intensities to a lower level; the other highlights the range [A, B] and preserves all other intensities.)

Fig. 6 – Aortic angiogram and intensity-sliced versions.

Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2. This technique highlights the contribution made to the whole image appearance by each of the bits: an 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit, …, plane 8 – the highest-order bit).

The binary image for the 8th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation that maps all intensities between 0 and 127 to 0 and all intensities between 128 and 255 to 1. Bit-plane slicing is useful for analyzing the relative importance of each bit in the image, which helps in determining the proper number of bits to use when quantizing it; the technique is also useful for image compression.
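A compact sketch of bit-plane extraction with NumPy (placeholder image; the assertion checks the thresholding equivalence just described):

import numpy as np

img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)   # placeholder 8-bit image
planes = [(img >> k) & 1 for k in range(8)]               # planes[0] = lowest-order bit

# plane 8 (highest-order bit) equals thresholding at 128:
assert np.array_equal(planes[7], (img >= 128).astype(np.uint8))

# coarse reconstruction from the top 4 planes only:
approx = sum(planes[k].astype(np.uint16) << k for k in range(4, 8)).astype(np.uint8)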

Other Imaging Modalities

Acoustic imaging, electron microscopy, synthetic (computer-generated) imaging.

Imaging using sound: geological exploration, industry, medicine.

Mineral and oil exploration: for image acquisition over land, one of the main approaches is to use a large truck and a large flat steel plate. The plate is pressed on the ground by the truck, and the truck is vibrated through a frequency spectrum of up to 100 Hz. The strength and speed of the returning sound waves are determined by the composition of the Earth below the surface; these are analyzed by computer, and images are generated from the resulting analysis.

Biometry - iris


Biometry - fingerprint


Face detection and recognition


Gender identification


Image morphing


Fundamental Steps in DIP

methods whose input and output are images

methods whose inputs are images but whose outputs are attributes extracted from those images


Outputs are images:
• image acquisition
• image filtering and enhancement
• image restoration
• color image processing
• wavelets and multiresolution processing
• compression
• morphological processing

Outputs are attributes:
• morphological processing
• segmentation
• representation and description
• object recognition

Image acquisition – may involve preprocessing such as scaling.

Image enhancement:
• manipulating an image so that the result is more suitable than the original for a specific operation;
• enhancement is problem oriented;
• there is no general 'theory' of image enhancement;
• enhancement uses subjective methods for image improvement;
• enhancement is based on human subjective preferences regarding what constitutes a "good" enhancement result.

Image restoration:
• improving the appearance of an image;
• restoration is objective – the techniques for restoration are based on mathematical or probabilistic models of image degradation.

Color image processing:
• fundamental concepts in color models;
• basic color processing in a digital domain.

Wavelets and multiresolution processing: representing images in various degrees of resolution.

Compression: reducing the storage required to save an image, or the bandwidth required to transmit it.

Morphological processing:
• tools for extracting image components that are useful in the representation and description of shape;
• a transition from processes that output images to processes that output image attributes.

Segmentation:
• partitioning an image into its constituent parts or objects;
• autonomous segmentation is one of the most difficult tasks of DIP;
• the more accurate the segmentation, the more likely recognition is to succeed.

Representation and description (almost always follows segmentation):
• segmentation produces either the boundary of a region or all the points in the region itself;
• representation converts the data produced by segmentation to a form suitable for computer processing.

• boundary representation: the focus is on external shape characteristics, such as corners and inflections;
• complete region representation: the focus is on internal properties, such as texture or skeletal shape;
• description, also called feature extraction: extracting attributes that yield some quantitative information of interest, or that are basic for differentiating one class of objects from another.

Object recognition: the process of assigning a label (e.g., "vehicle") to an object based on its descriptors.

Knowledge database.

Simplified diagram of a cross section of the human eye.

Three membranes enclose the eye: the cornea and sclera outer cover, the choroid, and the retina.

The cornea is a tough, transparent tissue that covers the anterior surface of the eye. Continuous with the cornea, the sclera is an opaque membrane that encloses the remainder of the optic globe.

The choroid lies directly below the sclera. This membrane contains a network of blood vessels (the major source of nutrition of the eye). The choroid is pigmented and helps reduce the amount of light entering the eye. At its anterior extreme, the choroid is divided into the ciliary body and the iris; the iris contracts and expands to control the amount of light.

The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to the ciliary body (it consists of 60–70% water and about 6% fat, plus protein). The lens is slightly yellow and absorbs approximately 8% of the visible light (infrared and ultraviolet light are absorbed by proteins in the lens).

The innermost membrane is the retina. When the eye is properly focused, light from an object outside the eye is imaged on the retina. Vision is possible because of the distribution of discrete light receptors over the surface of the retina: cones and rods (6–7 million cones, 75–150 million rods).

Cones: located in the central part of the retina (the fovea); sensitive to colors; provide vision of detail; each cone is linked to its own nerve. Cone vision is called photopic, or bright-light, vision.

Fovea = the place where the image of the object of interest falls.

Rods: distributed over all the retina surface; several rods are connected to a single nerve; not specialized in detail vision; serve to give a general, overall picture of the field of view; not involved in color vision; sensitive to low levels of illumination.

Blind spot: a region of the retina without receptors.

Image formation in the eye

In an ordinary photographic camera, the lens has a fixed focal length; focusing at various distances is done by modifying the distance between the lens and the image plane (where the film or imaging chip is located).

In the human eye, the distance between the lens and the retina (the imaging region) is fixed; the focal length needed to achieve proper focus is obtained by varying the shape of the lens (the fibers in the ciliary body accomplish this, flattening or thickening the lens for distant or near objects, respectively).

The distance between the lens and the retina along the visual axis is 17 mm; the range of focal lengths is 14 mm to 17 mm.

Illustration of the Mach band effect: perceived intensity is not a simple function of actual intensity.

All the inner squares have the same intensity, but they appear progressively darker as the background becomes lighter (simultaneous contrast).

Optical illusions

(Color-word illusion slide; Romanian words with English equivalents: ALBASTRU = BLUE, VERDE = GREEN, GALBEN = YELLOW, ROSU = RED, PORTOCALIU = ORANGE, VISINIU = BURGUNDY, ALB = WHITE.)

Light
• Achromatic or monochromatic light: light that is void of color; its only attribute is its intensity, or amount. The term gray level is used to describe monochromatic intensity, because it ranges from black, to grays, and finally to white.
• Chromatic light spans the electromagnetic energy spectrum from approximately 0.43 to 0.79 μm.

Quantities that describe the quality of a chromatic light source:
• radiance: the total amount of energy that flows from the light source, usually measured in watts (W);
• luminance: measured in lumens (lm), gives a measure of the amount of energy an observer perceives from a light source. For example, light emitted from a source operating in the far-infrared region of the spectrum could have significant energy (radiance), but an observer would hardly perceive it; its luminance would be almost zero;
• brightness: a subjective descriptor of light perception that is practically impossible to measure; it embodies the achromatic notion of intensity and is one of the key factors in describing color sensation.

The physical meaning of an image is determined by the source of the image. For an image generated from a physical process, f(x, y) is proportional to the energy radiated by the physical source, so f(x, y) is nonzero and finite:

0 < f(x, y) < ∞

f(x, y) is characterized by two components:
• i(x, y), the illumination component: the amount of source illumination incident on the scene being viewed;
• r(x, y), the reflectance component: the amount of illumination reflected by the objects in the scene;

f(x, y) = i(x, y) · r(x, y),  where 0 < i(x, y) < ∞ and 0 < r(x, y) < 1.

r(x, y) = 0 means total absorption and r(x, y) = 1 total reflectance; i(x, y) is determined by the illumination source, r(x, y) by the characteristics of the imaged objects.

In practice,

Lmin ≤ l = f(x, y) ≤ Lmax,  with Lmin = imin · rmin and Lmax = imax · rmax

(typical indoor values without additional illumination: Lmin ≈ 10, Lmax ≈ 1000).

The interval [Lmin, Lmax] is called the gray (or intensity) scale. Common practice is to shift this interval to [0, L − 1], where l = 0 is considered black and l = L − 1 white.

Image Sampling and Quantization

The output of most sensors is a continuous voltage waveform related to the sensed scene. Converting a continuous image f to digital form involves two operations:
- digitizing the coordinate values (x, y), called sampling;
- digitizing the amplitude values f(x, y), called quantization.

(Figure: continuous image projected onto a sensor array; result of image sampling and quantization.)

Representing Digital Images

(x, y), with x = 0, 1, …, M−1 and y = 0, 1, …, N−1, are the spatial variables or spatial coordinates:

           | f(0, 0)      f(0, 1)      …   f(0, N−1)    |
f(x, y) =  | f(1, 0)      f(1, 1)      …   f(1, N−1)    |
           | …                                          |
           | f(M−1, 0)    f(M−1, 1)    …   f(M−1, N−1)  |

Each element of this array is an image element, or pixel. In matrix form,

A = [aij],  where aij = f(x = i, y = j) = f(i, j),  0 ≤ i ≤ M−1, 0 ≤ j ≤ N−1.

f(0, 0) is the upper left corner of the image.

M and N are positive integers, and the number of intensity levels is typically a power of 2, L = 2^k, with aij ∈ [0, L−1].

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system; the upper limit is determined by saturation, the lower limit by noise.

The number of bits required to store a digitized image is b = M · N · k (for M = N, b = N²·k). When an image can have 2^k intensity levels, it is referred to as a k-bit image; an image with 256 discrete intensity values is an 8-bit image. For example, a 1024 × 1024, 8-bit image requires b = 1024 · 1024 · 8 = 8,388,608 bits.

Spatial and Intensity Resolution

Spatial resolution – the smallest discernible detail in an image. Measures: line pairs per unit distance; dots (pixels) per unit distance. Image resolution = the largest number of discernible line pairs per unit distance (e.g., 100 line pairs per mm). Dots per unit distance is the measure commonly used in printing and publishing; in the US it is expressed in dots per inch (dpi). (Newspapers are printed at 75 dpi, glossy brochures at 175 dpi.)

Intensity resolution – the smallest discernible change in intensity level. The number of intensity levels, L, is usually determined by hardware considerations: L = 2^k, with k = 8 the most common choice. In practice, intensity resolution is given by k, the number of bits used to quantize intensity.

Fig. 1 – Reducing spatial resolution: 1250 dpi (upper left), 300 dpi (upper right), 150 dpi (lower left), 72 dpi (lower right).

Reducing the number of gray levels: 256, 128, 64, 32.

Reducing the number of gray levels: 16, 8, 4, 2.

Image Interpolation – used in zooming, shrinking, rotating, and geometric corrections. Shrinking and zooming (image resizing) are image resampling methods.

Interpolation is the process of using known data to estimate values at unknown locations. Suppose that an image of size 500 × 500 pixels has to be enlarged 1.5 times, to 750 × 750 pixels. One way to do this is to create an imaginary 750 × 750 grid with the same spacing as the original, and then shrink it so that it fits exactly over the original image; the pixel spacing in the 750 × 750 grid will then be less than in the original image. Problem: assignment of intensity levels to the points of the new 750 × 750 grid.

Nearest-neighbor interpolation: assign to every point in the new grid (750 × 750) the intensity of the closest pixel (nearest neighbor) in the old/original grid (500 × 500). This technique has a tendency to produce undesirable artifacts, such as severe distortion of straight edges.

Bilinear interpolation: assign to the new location (x, y) the intensity

v(x, y) = a·x + b·y + c·x·y + d

where the four coefficients are determined from the 4 equations in 4 unknowns that can be written using the 4 nearest neighbors of point (x, y). Bilinear interpolation gives much better results than nearest-neighbor interpolation, with a modest increase in computational effort.

Bicubic interpolation: assign to the new location (x, y) an intensity that involves the 16 nearest neighbors of the point:

v(x, y) = Σ_{i=0}^{3} Σ_{j=0}^{3} cij · x^i · y^j

where the 16 coefficients cij are obtained by solving a 16×16 linear system written using the intensity levels of the 16 nearest neighbors of (x, y).
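For the bilinear case, solving the 4×4 system on the unit cell is equivalent to the familiar weighted average of the 4 neighbors; a sketch (clamping at the image border is my choice):

import numpy as np

def bilinear(img, x, y):
    # intensity at fractional location (x, y) from the 4 nearest neighbors
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, img.shape[0] - 1), min(y0 + 1, img.shape[1] - 1)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[x0, y0] + dx * (1 - dy) * img[x1, y0]
            + (1 - dx) * dy * img[x0, y1] + dx * dy * img[x1, y1])

img = np.arange(16.0).reshape(4, 4)
print(bilinear(img, 1.5, 2.5))   # 8.5, the mean of the 4 surrounding pixels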

Generally, bicubic interpolation does a better job of preserving fine detail than the bilinear technique; it is the standard used in commercial image-editing programs such as Adobe Photoshop and Corel Photo-Paint.

Figure 2(a) is the same as Fig. 1(d), which was obtained by reducing the resolution of the 1250 dpi image in Fig. 1(a) to 72 dpi (the size shrank from 3692 × 2812 to 213 × 162 pixels) and then zooming the reduced image back to its original size; nearest-neighbor interpolation was used both for shrinking and for zooming. Figures 2(b) and (c) were generated using the same steps, but with bilinear and bicubic interpolation, respectively. Figures 2(d), (e), (f) were obtained by reducing the resolution from 1250 dpi to 150 dpi (instead of 72 dpi).

Fig. 2 – Interpolation examples for zooming and shrinking (nearest neighbor, bilinear, bicubic).

Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors:

(x+1, y), (x−1, y) (horizontal);  (x, y+1), (x, y−1) (vertical).

This set of pixels, called the 4-neighbors of p, is denoted N4(p). The 4 diagonal neighbors of p have coordinates

(x+1, y+1), (x+1, y−1), (x−1, y+1), (x−1, y−1)

and are denoted ND(p). The horizontal, vertical, and diagonal neighbors together are called the 8-neighbors of p, denoted N8(p). If (x, y) is on the border of the image, some of the neighbor locations in ND(p) and N8(p) fall outside the image.

Adjacency, Connectivity, Regions, Boundaries

Denote by V the set of intensity values used to define adjacency:
- in a binary image, V ⊆ {0, 1} (V = {0} or V = {1});
- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, 1, …, 255}.

We consider 3 types of adjacency:
(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if q ∈ N4(p);
(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if q ∈ N8(p);
(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if either q ∈ N4(p), or q ∈ ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.

Mixed adjacency is a modification of 8-adjacency, introduced to eliminate the ambiguities that often arise when 8-adjacency is used. Consider the example:

Binary image, V = {1}:

0 1 1
0 1 0
0 0 1

The three pixels at the top (first line) show multiple (ambiguous) 8-adjacency, indicated by dashed lines in the original figure; this ambiguity is removed by using m-adjacency.

A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

(x0, y0), (x1, y1), …, (xn, yn),

where (x0, y0) = (x, y), (xn, yn) = (s, t), and the pixels (xi, yi) and (xi−1, yi−1) are adjacent for i = 1, 2, …, n. The length of the path is n; if (x0, y0) = (xn, yn), the path is closed. Depending on the type of adjacency considered, paths are 4-, 8-, or m-paths.

Let S denote a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting only of pixels from S. S is a connected set if there is a path in S between any 2 pixels of S.

Let R be a subset of pixels in an image. R is a region of the image if R is a connected set. Two regions R1 and R2 are said to be adjacent if R1 ∪ R2 forms a connected set; regions that are not adjacent are said to be disjoint. When referring to regions, only 4- and 8-adjacency are considered.

Suppose that an image contains K disjoint regions Rk, k = 1, …, K, none of which touches the image border. Let Ru denote the union of all the regions, Ru = ∪_{k=1}^{K} Rk, and let (Ru)^c denote its complement. We call all the points of Ru the foreground of the image, and all the points of (Ru)^c the background.

The boundary (border, or contour) of a region R is the set of points of R that are adjacent to points in the complement (R)^c, i.e., the set of pixels in the region that have at least one background neighbor. This definition is referred to as the inner border, to distinguish it from the notion of outer border, which is the corresponding border in the background.

Distance measures

For pixels p, q, and z, with coordinates (x, y), (s, t), and (v, w), respectively, D is a distance function, or metric, if
(a) D(p, q) ≥ 0, with D(p, q) = 0 iff p = q;
(b) D(p, q) = D(q, p);
(c) D(p, z) ≤ D(p, q) + D(q, z).

The Euclidean distance between p and q is defined as

De(p, q) = [(x − s)² + (y − t)²]^(1/2)

The pixels q for which De(p, q) ≤ r are the points contained in a disk of radius r centered at (x, y).

The D4 distance (also called city-block distance) between p and q is defined as

D4(p, q) = |x − s| + |y − t|

The pixels q for which D4(p, q) ≤ r form a diamond centered at (x, y); for example, the pixels with D4 ≤ 2:

        2
    2   1   2
2   1   0   1   2
    2   1   2
        2

The pixels with D4 = 1 are the 4-neighbors of (x, y).

The D8 distance (also called chessboard distance) between p and q is defined as

D8(p, q) = max(|x − s|, |y − t|)

The pixels q for which D8(p, q) ≤ r form a square centered at (x, y); for example, the pixels with D8 ≤ 2:

2   2   2   2   2
2   1   1   1   2
2   1   0   1   2
2   1   1   1   2
2   2   2   2   2

The pixels with D8 = 1 are the 8-neighbors of (x, y). The D4 and D8 distances are independent of any paths that might exist between p and q, because these distances involve only the coordinates of the points.
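The three metrics in executable form (coordinates given as simple tuples):

def d_euclid(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def d4(p, q):   # city-block
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):   # chessboard
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

print(d_euclid((0, 0), (2, 2)), d4((0, 0), (2, 2)), d8((0, 0), (2, 2)))  # 2.83 4 2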

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis. For 2×2 images

A = | a11  a12 |        B = | b11  b12 |
    | a21  a22 |            | b21  b22 |

the array product is

| a11·b11   a12·b12 |
| a21·b21   a22·b22 |

whereas the matrix product is

| a11·b11 + a12·b21    a11·b12 + a12·b22 |
| a21·b11 + a22·b21    a21·b12 + a22·b22 |

We assume array operations unless stated otherwise.
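In NumPy the two products are written differently, which makes the distinction concrete:

import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
print(A * B)   # array (element-wise) product: [[ 5 12] [21 32]]
print(A @ B)   # matrix product:               [[19 22] [43 50]]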

Linear versus Nonlinear Operations

One of the most important classifications of an image-processing method is whether it is linear or nonlinear. Consider an operator H that produces an output image g(x, y) from an input image f(x, y):

H[f(x, y)] = g(x, y)

H is said to be a linear operator if

H[a·f1(x, y) + b·f2(x, y)] = a·H[f1(x, y)] + b·H[f2(x, y)]

for any scalars a, b and any images f1, f2.

Example of a nonlinear operator: the maximum value of the pixels of an image, H[f] = max{f(x, y)}. Take

f1 = | 0  2 |     f2 = | 6  5 |     a = 1,  b = −1
     | 2  3 |          | 4  7 |

Then

max{1·f1 + (−1)·f2} = max{ | −6  −3 | } = −2
                           | −2  −4 |

whereas

1·max{f1} + (−1)·max{f2} = 3 − 7 = −4 ≠ −2,

so the max operator is nonlinear.
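The same check in NumPy:

import numpy as np

f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 5], [4, 7]])
a, b = 1, -1

lhs = (a * f1 + b * f2).max()       # max of the combination: -2
rhs = a * f1.max() + b * f2.max()   # combination of the maxima: -4
print(lhs, rhs, lhs == rhs)         # -2 -4 False, so H[f] = max(f) is nonlinear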

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise η(x, y) to a noiseless image f(x, y):

g(x, y) = f(x, y) + η(x, y)

where at every pair of coordinates the noise is uncorrelated and has zero average value. For a random variable z with mean m, the variance is E[(z − m)²], where E(·) denotes the expected value; the covariance of two random variables z1 and z2 is defined as E[(z1 − m1)(z2 − m2)], and the two random variables are uncorrelated when their covariance is 0.

Objective: reduce the noise by averaging a set of K noisy images gi(x, y) (a technique frequently used in image enhancement):

ḡ(x, y) = (1/K) · Σ_{i=1}^{K} gi(x, y)

If the noise satisfies the properties stated above, then

E{ḡ(x, y)} = f(x, y)   and   σ²_ḡ(x, y) = (1/K) · σ²_η(x, y)

where E{ḡ(x, y)} is the expected value of ḡ, and σ²_ḡ(x, y) and σ²_η(x, y) are the variances of ḡ and η at (x, y). The standard deviation (square root of the variance) at any point of the average image is

σ_ḡ(x, y) = (1/√K) · σ_η(x, y)

As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because E{ḡ(x, y)} = f(x, y), this means that ḡ(x, y) approaches f(x, y) as the number of noisy images used in the averaging process increases.

An important application of image averaging is in the field of astronomy, where imaging under very low light levels frequently causes sensor noise to render single images virtually useless for analysis. Fig. 3(a) shows an 8-bit image in which corruption was simulated by adding Gaussian noise with zero mean and a standard deviation of 64 intensity levels; Figures 3(b)–(f) show the results of averaging 5, 10, 20, 50, and 100 such noisy images, respectively.
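A synthetic check of the σ/√K behavior (the flat test image and the noise parameters are invented for the demonstration):

import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 100.0)   # noiseless stand-in image

def averaged(K, sigma=64.0):
    # average K noisy copies g_i = f + eta, with eta ~ N(0, sigma^2)
    g = [f + rng.normal(0.0, sigma, f.shape) for _ in range(K)]
    return np.mean(g, axis=0)

for K in (1, 5, 10, 20, 50, 100):
    print(K, round(float(averaged(K).std()), 2))   # residual noise ~ sigma / sqrt(K)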


Fovea = the place where the image of the object of interest falls on

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Rods distributed over al the retina surface several rods are contected toa single nerve not specialized in detail vision serve to give a general overall picture of the filed of viewnot involved in color visionsensitive to low level of illumination

Blind spot region without receptors

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image formation in the eye

Ordinary photographic camera the lens has fixed focal length focusing atvarious distances is done by modifying the distance between the lens and the image plane (were the film or imaging chip are located)

Human eye the distance between the lens and the retina (the imaging region)is fixed the focal length needed to achieve proper focus is obtained by varyingthe shape of the lens (the fibers in the ciliary body accomplish this flattening or thickening the lens for distant or near objects respectively

distance between lens and retina along visual axix = 17 mm

range of focal length = 14 mm to 17 mm

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Illustration of Mach band effectPercieved intensityis not a simple function of actual intensity

Digital Image ProcessingDigital Image Processing

Week 1Week 1

All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Optical illusions

Digital Image ProcessingDigital Image Processing

Week 1Week 1

ALBASTRU BLUEVERDE GREENGALBEN YELLOW ROSU REDPORTOCALIU ORANGEGALBEN YELLOWVISINIU BURGUNDYALB WHITE

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Lightbullachromatic or monochromatic light - light that is void of color the attribute of such light is its intensity or amount

gray level is used to describe monochromatic intensitybecause it ranges from black to grays and to white bull chromatic light spans the electromagnetic energy spectrumfrom approximately 043 to 079 μm

quantities that describe the quality of a chromatic light source radiance

the total amount of energy that flows from the light source and it isusually measured in watts (W) luminance

measured in lumens (lm) gives a measure of the amount of energy an observer perceives from a light source

Digital Image ProcessingDigital Image Processing

Week 1Week 1

For example light emitted from a source operating in the far infraredregion of the spectrum could have significant energy (radiance) butan observer would hardly perceive it its luminance would be almostzero

brightnessa subjective descriptor of light perception that is practicallyimpossible to measure It embodies the achromatic notion ofintensity and is one of the key factors in describing color sensation

Digital Image ProcessingDigital Image Processing

Week 1Week 1

the physical meaning is determined by the source of the image

( )f D f x y

Image generated from a physical process f(xy) ndash proportional to the energy radiated by the physical source

f(xy) ndash characterized by two components

i(xy) = illumination component the amount of source illumination incident on the scene being viewed

r(xy) = reflectance component the amount of illumination reflected by the objects in the scene

( ) ( ) ( )

0 ( ) 0 ( ) 1

f x y i x y r x y

i x y r x y

Digital Image ProcessingDigital Image Processing

Week 1Week 1

r(xy)=0 - total absorption r(xy)=1 - total reflectance

i(xy) ndash determined by the illumination source

r(xy) ndash determined by the characteristics of the imaged objects

is called gray (or intensity) scale

In practice

min 0 0 max min min min max max max( ) L l f x y L L i r L i r

indoor values without additional illuminationmin max10 1000L L

black whitemin max0 1 0 1 0 1L L L L l l L

min maxL L

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image Processing

Week 1

Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed

scene

converting a continuous image f to digital form

- digitizing (x y) is called sampling

- digitizing f(x y) is called quantization

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Continuous image projected onto a sensor array Result of image sampling and quantization

Digital Image Processing

Week 1

Representing Digital Images (xy) x = 01hellipM-1 y = 01hellipN-1 ndash spatial variables or spatial coordinates

(00) (01) (0 1)(10) (11) (1 1)

( )

( 10) ( 11) ( 1 1)

f f f Nf f f N

f x y

f M f M f M N

image element pixel

00 01 0 1

10 11 1 1

10 11 1 1

( ) ( )

N

i jN M N

i j

M M M N

a a aa f x i y j f i ja a a

Aa

a a a

f(00) ndash the upper left corner of the image

Digital Image Processing

Week 1

M N ge 0 L=2k

[0 1]i j i ja a L

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system Upper limit ndash determined by saturation lower limit - noise

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Number of bits required to store a digitized image

for 2 b M N k M N b N k

When an image can have 2k intensity levels the image is referred as a k-bit image 256 discrete intensity values ndash 8-bit image

Digital Image Processing

Week 1

Spatial and Intensity Resolution Spatial resolution ndash the smallest discernible detail in an image

Measures line pairs per unit distance dots (pixels) per unit distance

Image resolution = the largest number of discernible line pairs per unit distance

(eg 100 line pairs per mm)

Dots per unit distance are commonly used in printing and publishing

In US the measure is expressed in dots per inch (dpi)

(newspapers are printed with 75 dpi glossy brochures at 175 dpi)

Intensity resolution ndash the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations

L=2k ndash most common k = 8

Intensity resolution in practice is given by k (number of bits used to quantize intensity)

Digital Image Processing

Week 1

Fig1 Reducing spatial resolution 1250 dpi(upper left) 300 dpi (upper right)

150 dpi (lower left) 72 dpi (lower right)

Digital Image Processing

Week 1

Reducing the number of gray levels 256 128 64 32

Digital Image Processing

Week 1

Reducing the number of gray levels 16 8 4 2

Digital Image Processing

Week 1

Image Interpolation - used in zooming shrinking rotating and geometric corrections

Shrinking zooming ndash image resizing ndash image resampling methods

Interpolation is the process of using known data to estimate values at unknown locations

Suppose we have an image of size 500 times 500 pixels that has to be enlarged 15 times to

750times750 pixels One way to do this is to create an imaginary 750 times 750 grid with the

same spacing as the original and then shrink it so that it fits exactly over the original

image The pixel spacing in the 750 times 750 grid will be less than in the original image

Problem assignment of intensity-level in the new 750 times 750 grid

Nearest neighbor interpolation assign for every point in the new grid (750 times 750) the

intensity of the closest pixel (nearest neighbor) from the oldoriginal grid (500 times 500)

This technique has the tendency to produce undesirable effects like severe distortion of

straight edges

Digital Image Processing

Week 1

Bilinear interpolation ndash assign for the new (x y) location the following intensity ( )v x y a x b y c x y d

where the four coefficients are determined from the 4 equations in 4 unknowns that can

be written using the 4 nearest neighbors of point (x y)

Bilinear interpolation gives much better results than nearest neighbor interpolation with a

modest increase in computational effort

Bicubic interpolation ndash assign for the new (x y) location an intensity that involves the 16

nearest neighbors of the point 3 3

0 0

( ) i ji j

i jv x y c x y

The coefficients cij are obtained solving a 16x16 linear system

intensity levels of the 16 nearest neighbors of 3 3

0 0

( )i ji j

i jc x y x y

Digital Image Processing

Week 1

Generally bicubic interpolation does a better job of preserving fine detail than the

bilinear technique Bicubic interpolation is the standard used in commercial image editing

programs such as Adobe Photoshop and Corel Photopaint

Figure 2 (a) is the same as Fig 1 (d) which was obtained by reducing the resolution of

the 1250 dpi in Fig 1(a) to 72 dpi (the size shrank from 3692 times 2812 to 213 times 162) and

then zooming the reduced image back to its original size To generate Fig 1(d) nearest

neighbor interpolation was used (both for shrinking and zooming)

Figures 2(b) and (c) were generated using the same steps but using bilinear and bicubic

interpolation respectively Figures 2(d)+(e)+(f) were obtained by reducing the resolution

from 1250 dpi to 150 dpi (instead of 72 dpi)

Digital Image Processing

Week 1

Fig 2 ndash Interpolation examples for zooming and shrinking (nearest neighbor linear bicubic)

Digital Image Processing

Week 1

Neighbors of a Pixel

A pixel p at coordinates (x y) has 4 horizontal and vertical neighbors horizontal vertical( 1 ) ( 1 ) ( 1) ( 1)x y x y x y x y

This set of pixels called the 4-neighbors of p denoted by N4 (p)

The 4 diagonal neighbors of p have coordinates ( 1 1) ( 1 1) ( 1 1) ( 1 1)x y x y x y x y

and are denoted ND(p)

The horizontal vertical and diagonal neighbors are called the 8-neighbors of p denoted

N8 (p)

If (xy) is on the border of the image some of the neighbor locations in ND(p) and N8(p)

fall outside the image

Digital Image Processing

Week 1

Adjacency Connectivity Regions Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image V 01 (V=0 V=1)

- in a gray-scale image with 256 possible gray-levels V can be any subset of 0255

We consider 3 types of adjacency

(a) 4-adjacency two pixels p and q with values from V are 4-adjacent if 4( )q N p

(b) 8-adjacency two pixels p and q with values from V are 8-adjacent if 8( )q N p

(c) m-adjacency (mixed adjacency) two pixels p and q with values from V are

m-adjacent if

4( )q N p or

( )Dq N p and the set 4 4( ) ( )N p N q has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency It is introduced to eliminate the

ambiguities that often arise when 8-adjacency is used Consider the example

Digital Image Processing

Week 1

binary image

0 1 1 0 1 1 0 1 1

1 0 1 0 0 1 0 0 1 0

0 0 1 0 0 1 0 0 1

V

The three pixels at the top (first line) in the above example show multiple (ambiguous)

8-adjacency as indicated by the dashed lines This ambiguity is removed by using

m-adjacency

Digital Image Processing

Week 1

A (digital) path (or curve) from pixel p with coordinates (xy) to q with coordinates (st)

is a sequence of distinct pixels with coordinates

and are adjacent 0 0 1 1

1 1

( ) ( ) ( ) ( ) ( )( ) ( ) 12

n n

i i i i

x y x y x y x y s tx y x y i n

The length of the path is n If 0 0( ) ( )n nx y x y the path is closed

Depending on the type of adjacency considered the paths are 4- 8- or m-paths

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image R is a region of the image if R is a connected set

Two regions R1 and R2 are said to be adjacent if 1 2R R form a connected set Regions

that are not adjacent are said to be disjoint When referring to regions only 4- and

8-adjacency are considered

Digital Image Processing

Week 1

Suppose that an image contains K disjoint regions 1 kR k K none of which

touches the image border

the complement of 1

( )K

cu k u u

k

R R R R

We call all the points in Ru the foreground of the image and the points in ( )cuR the

background of the image

The boundary (border or contour) of a region R is the set of points that are adjacent to

points in the complement of R (R)c The border of an image is the set of pixels in the

region that have at least one background neighbor This definition is referred to as the

inner border to distinguish it from the notion of outer border which is the corresponding

border in the background

Digital Image Processing

Week 1

Distance measures

For pixels p q and z with coordinates (xy) (st) and (vw) respectively D is a distance

function or metric if

(a) D(p q) ge 0 D(p q) = 0 iff p=q

(b) D(p q) = D(q p)

(c) D(p z) le D(p q) + D(q z)

The Euclidean distance between p and q is defined as 1

2 2 2 22( ) ( ) ( ) ( ) ( )eD p q x s y t x s y t

The pixels q for which ( )eD p q r are the points contained in a disk of radius r

centered at (x y)

Digital Image Processing

Week 1

The D4 distance (also called city-block distance) between p and q is defined as

4( ) | | | |D p q x s y t

The pixels q for which 4( )D p q r form a diamond centered at (xy)

4

22 1 2

2 2 1 0 1 22 1 2

2

D

The pixels with D4 = 1 are the 4-neighbors of (x y)

The D8 distance (called the chessboard distance) between p and q is defined as

8( ) max| | | |D p q x s y t

The pixels q for which 8( )D p q r form a square centered at (x y)

Digital Image Processing

Week 1

8

2 2 2 2 22 1 1 1 2

2 2 1 0 1 22 1 1 1 22 2 2 2 2

D

The pixels with D8 = 1 are the 8-neighbors of (x y)

D4 and D8 distances are independent of any paths that might exist between p and q

because these distances involve only the coordinates of the point

Digital Image Processing

Week 1

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

11 12 11 12

21 22 21 22

a a b ba a b b

Array product

11 12 11 12 11 11 12 12

21 22 21 22 21 21 22 21

a a b b a b a ba a b b a b a b

Matrix product

11 12 11 12 11 11 12 21 11 12 12 21

21 22 21 22 21 11 22 21 21 12 22 22

a a b b a b a b a b a ba a b b a b a b a b a b

We assume array operations unless stated otherwise

Digital Image Processing

Week 1

Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether it is

linear or nonlinear

( ) ( )H f x y g x y

H is said to be a linear operator if

images1 2 1 2

1 2

( ) ( ) ( ) ( )

H a f x y b f x y a H f x y b H f x y

a b f f

Example of nonlinear operator

the maximum value of the pixels of image max ( )H f f x y f

1 2

0 2 6 5 1 1

2 3 4 7f f a b

Digital Image Processing

Week 1

1 2

0 2 6 5 6 3max max 1 ( 1) max 2

2 3 4 7 2 4a f b f

0 2 6 51 max ( 1) max 3 ( 1)7 4

2 3 4 7

Arithmetic Operations in Image Processing

Let g(xy) denote a corrupted image formed by the addition of noise ( ) ( ) ( )g x y f x y x y

f(xy) ndash the noiseless image η(xy) the noise uncorrelated and has 0 average value

For a random variable z with mean m E[(z-m)2] is the variance (E( ∙ ) is the expected

value) The covariance of two random variables z1 and z2 is defined as E[(z1-m1) (z2-m2)]

The two random variables are uncorrelated when their covariance is 0

Digital Image Processing

Week 1

Objective reduce noise by adding a set of noisy images ( )ig x y (technique frequently

used in image enhancement)

1

1( ) ( )K

ii

g x y g x yK

If the noise satisfies the properties stated above we have

2 2( ) ( )

1( ) ( ) g x y x yE g x y f x yK

( ( ))E g x y is the expected value of g and and 2 2( ) ( )g x y x y are the variances of

and g respectively The standard deviation (square root of the variance) at any point in

the average image is

( ) ( )1

g x y x yK

Digital Image Processing

Week 1

As K increases the variability (as measured by the variance or the standard deviation) of

the pixel values at each location (x y) decreases Because ( ) ( )E g x y f x y this

means that ( )g x y approaches f(x y) as the number of noisy images used in the

averaging process increases

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis Figure 226(a) shows an 8-bit image in which corruption

was simulated by adding to it Gaussian noise with zero mean and a standard deviation of

64 intensity levels Figures 226(b)-(f) show the result of averaging 5 10 20 50 and 100

images respectively

Digital Image Processing

Week 1

Fig 3 Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner) Results of averaging 5 10 20 50

100 noisy images

a b c d e f

Digital Image Processing

Week 1

A frequent application of image subtraction is in the enhancement of differences between

images

(a) (b) (c)

Fig 4 (a) Infrared image of Washington DC area (b) Image obtained from (a) by setting to zero the least

significant bit of each pixel (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a) The two images seem almost the same Figure 4(c) is the difference between

images (a) and (b) Black (0) values in Figure (c) indicate locations where there is no

difference between images (a) and (b)

Digital Image Processing

Week 1

Mask mode radiography ( ) ( ) ( )g x y f x y h x y

h(x y) the mask is an X-ray image of a region of a patientrsquos body captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source The procedure consists of injecting an X-ray contrast medium into the patientrsquos

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed

Digital Image Processing

Week 1

a b c d Fig 5 ndash Angiography ndash subtraction example (a) ndash mask image (b) ndash live image (c) ndash difference between (a) and (b) (d) - image (c) enhanced

Digital Image Processing

Week 1

An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form ( ) ( ) ( )g x y f x y h x y

f(x y) ndash the ldquoperfect imagerdquo h(x y) ndash the shading function

When the shading function is known

( )( )( )

g x yf x yh x y

h(x y) is unknown but we have access to the imaging system we can obtain an

approximation to the shading function by imaging a target of constant intensity When the

sensor is not available often the shading pattern can be estimated from the image

Digital Image Processing

Week 1

(a) (b) (c)

Fig 6 Shading correction (a) ndash Shaded image of a tungsten filament magnified 130 times (b) - shading pattern (c) corrected image

Digital Image Processing

Week 1

Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1rsquos (white) in the ROI and 0rsquos elsewhere There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

(a) (b) (c)

Fig 7 (a) ndash digital dental X-ray image (b) - ROI mask for teeth with fillings (c) product of (a) and (b)

Digital Image Processing

Week 1

In practice most images are displayed using 8 bits the image values are in the range [0255] TIFF JPEG images ndash conversion to this range is automatic The conversion depends on the system used Difference of two images can produce image with values in the range [-255255] Addition of two images ndash range [0510] Many software packages simply set the negative values to 0 and set to 255 all values greater than 255 A more appropriate procedure compute

min( )mf f f

0 ( 255)max( )

ms

m

ff K K K

f

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Digital Image Processing

Week 1

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

( )

1( ) ( )xyr c S

g x y f r cm n

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation ( ) [( )]x y T v w

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image

Digital Image Processing

Week 1

[( )] ( )2 2v wT v w shrinks the original image half its size in both spatial directions

Affine transform

11 1211 21 31

21 2212 22 33

31 32

0[ 1] [ 1] [ 1] 0

1

t tx t v t w t

x y v w T v w t ty t v t w t

t t

(AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

Digital Image Processing

Week 1

inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image (v w) 1( ) ( )v w T x y

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 39: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Biometry - iris

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Biometry - fingerprint

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Face detection and recognition

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Gender identification

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image morphing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Fundamental Steps in DIP

methods whose input and output are images

methods whose inputs are images but whose outputs are attributes extracted from those images

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Outputs are images

bull image acquisition

bull image filtering and enhancement

bull image restoration

bull color image processing

bull wavelets and multiresolution processing

bull compression

bull morphological processing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Outputs are attributes

bull morphological processing

bull segmentation

bull representation and description

bull object recognition

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image acquisition - may involve preprocessing such as scaling

Image enhancement

bull manipulating an image so that the result is more suitable than the original for a specific operation

bull enhancement is problem oriented

bull there is no general sbquotheoryrsquo of image enhancement

bull enhancement use subjective methods for image emprovement

bull enhancement is based on human subjective preferences regarding what is a bdquogoodrdquo enhancement result

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image restoration

bull improving the appearance of an image

bull restoration is objective - the techniques for restoration are based on mathematical or probabilistic models of image degradation

Color image processing

bull fundamental concept in color models

bull basic color processing in a digital domain

Wavelets and multiresolution processing

representing images in various degree of resolution

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Compression

reducing the storage required to save an image or the bandwidth required to transmit it

Morphological processing

bull tools for extracting image components that are useful in the representation and description of shape

bull a transition from processes that output images to processes that outputimage attributes

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Segmentation

bull partitioning an image into its constituents parts or objects

bull autonomous segmentation is one of the most difficult tasks of DIP

bull the more accurate the segmentation the more likley recognition is to succeed

Representation and description (almost always follows segmentation)

bull segmentation produces either the boundary of a region or all the poits in the region itself

bull converting the data produced by segmentation to a form suitable for computer processing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

bull boundary representation the focus is on external shape characteristics such as corners or inflections

bull complete region the focus is on internal properties such as texture or skeletal shape

bull description is also called feature extraction ndash extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another

Object recognition

the process of assigning a label (eg bdquovehiclerdquo) to an object based on itsdescriptors

Knowledge database

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Simplified diagramof a cross sectionof the human eye

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Three membranes enclose the eye the cornea and sclera outer cover the choroid the retina

The cornea is a tough transparent tissue that covers the anterior surface of the eye

Continuous with the cornea the sclera is an opaque membrane that enclosesthe remainder of the optic globe

The choroid lies directly below the sclera This membrane contains a network of blood vessels (major nutrition of the eye) The choroid is pigmented and help reduce the amount of light entering the eyeThe choroid is divided (at its anterior extreme) into the ciliary body and the irisThe iris contracts and expands to control the amount of light

Digital Image ProcessingDigital Image Processing

Week 1Week 1

The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to ciliary body (60-70 water 6 fat protein) The lens is colored in slightly yellow The lens absorbs approximatively 8 of the visiblelight (infrared and ultraviolet light are absorbed by proteins in lens)

The innermost membrane is the retina When the eye is proper focusedlight from an object outside the eye is imaged on the retinaVision is possible because of the distribution of discrete light receptors on the surface of the retina cones and rods (6-7 milion cones 75-150 milion rods)

Cones located in the central part of the retina (fovea) they are sensitive to colors vision of detail each cone is link to its own nervecone vision = photopic or bright-light vision

Fovea = the place where the image of the object of interest falls on

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Rods distributed over al the retina surface several rods are contected toa single nerve not specialized in detail vision serve to give a general overall picture of the filed of viewnot involved in color visionsensitive to low level of illumination

Blind spot region without receptors

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image formation in the eye

Ordinary photographic camera the lens has fixed focal length focusing atvarious distances is done by modifying the distance between the lens and the image plane (were the film or imaging chip are located)

Human eye the distance between the lens and the retina (the imaging region)is fixed the focal length needed to achieve proper focus is obtained by varyingthe shape of the lens (the fibers in the ciliary body accomplish this flattening or thickening the lens for distant or near objects respectively

distance between lens and retina along visual axix = 17 mm

range of focal length = 14 mm to 17 mm

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Illustration of Mach band effectPercieved intensityis not a simple function of actual intensity

Digital Image ProcessingDigital Image Processing

Week 1Week 1

All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Optical illusions

Digital Image ProcessingDigital Image Processing

Week 1Week 1

ALBASTRU BLUEVERDE GREENGALBEN YELLOW ROSU REDPORTOCALIU ORANGEGALBEN YELLOWVISINIU BURGUNDYALB WHITE

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Lightbullachromatic or monochromatic light - light that is void of color the attribute of such light is its intensity or amount

gray level is used to describe monochromatic intensitybecause it ranges from black to grays and to white bull chromatic light spans the electromagnetic energy spectrumfrom approximately 043 to 079 μm

quantities that describe the quality of a chromatic light source radiance

the total amount of energy that flows from the light source and it isusually measured in watts (W) luminance

measured in lumens (lm) gives a measure of the amount of energy an observer perceives from a light source

Digital Image ProcessingDigital Image Processing

Week 1Week 1

For example light emitted from a source operating in the far infraredregion of the spectrum could have significant energy (radiance) butan observer would hardly perceive it its luminance would be almostzero

brightnessa subjective descriptor of light perception that is practicallyimpossible to measure It embodies the achromatic notion ofintensity and is one of the key factors in describing color sensation

Digital Image ProcessingDigital Image Processing

Week 1Week 1

the physical meaning is determined by the source of the image

( )f D f x y

Image generated from a physical process f(xy) ndash proportional to the energy radiated by the physical source

f(xy) ndash characterized by two components

i(xy) = illumination component the amount of source illumination incident on the scene being viewed

r(xy) = reflectance component the amount of illumination reflected by the objects in the scene

( ) ( ) ( )

0 ( ) 0 ( ) 1

f x y i x y r x y

i x y r x y

Digital Image ProcessingDigital Image Processing

Week 1Week 1

r(xy)=0 - total absorption r(xy)=1 - total reflectance

i(xy) ndash determined by the illumination source

r(xy) ndash determined by the characteristics of the imaged objects

is called gray (or intensity) scale

In practice

min 0 0 max min min min max max max( ) L l f x y L L i r L i r

indoor values without additional illuminationmin max10 1000L L

black whitemin max0 1 0 1 0 1L L L L l l L

min maxL L

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image Processing

Week 1

Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed

scene

converting a continuous image f to digital form

- digitizing (x y) is called sampling

- digitizing f(x y) is called quantization

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Continuous image projected onto a sensor array Result of image sampling and quantization

Digital Image Processing

Week 1

Representing Digital Images (xy) x = 01hellipM-1 y = 01hellipN-1 ndash spatial variables or spatial coordinates

(00) (01) (0 1)(10) (11) (1 1)

( )

( 10) ( 11) ( 1 1)

f f f Nf f f N

f x y

f M f M f M N

image element pixel

00 01 0 1

10 11 1 1

10 11 1 1

( ) ( )

N

i jN M N

i j

M M M N

a a aa f x i y j f i ja a a

Aa

a a a

f(00) ndash the upper left corner of the image

Digital Image Processing

Week 1

M N ge 0 L=2k

[0 1]i j i ja a L

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system Upper limit ndash determined by saturation lower limit - noise

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Number of bits required to store a digitized image

for 2 b M N k M N b N k

When an image can have 2k intensity levels the image is referred as a k-bit image 256 discrete intensity values ndash 8-bit image

Digital Image Processing

Week 1

Spatial and Intensity Resolution Spatial resolution ndash the smallest discernible detail in an image

Measures line pairs per unit distance dots (pixels) per unit distance

Image resolution = the largest number of discernible line pairs per unit distance

(eg 100 line pairs per mm)

Dots per unit distance are commonly used in printing and publishing

In US the measure is expressed in dots per inch (dpi)

(newspapers are printed with 75 dpi glossy brochures at 175 dpi)

Intensity resolution ndash the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations

L=2k ndash most common k = 8

Intensity resolution in practice is given by k (number of bits used to quantize intensity)

Digital Image Processing

Week 1

Fig1 Reducing spatial resolution 1250 dpi(upper left) 300 dpi (upper right)

150 dpi (lower left) 72 dpi (lower right)

Digital Image Processing

Week 1

Reducing the number of gray levels 256 128 64 32

Digital Image Processing

Week 1

Reducing the number of gray levels 16 8 4 2

Digital Image Processing

Week 1

Image Interpolation - used in zooming shrinking rotating and geometric corrections

Shrinking zooming ndash image resizing ndash image resampling methods

Interpolation is the process of using known data to estimate values at unknown locations

Suppose we have an image of size 500 times 500 pixels that has to be enlarged 15 times to

750times750 pixels One way to do this is to create an imaginary 750 times 750 grid with the

same spacing as the original and then shrink it so that it fits exactly over the original

image The pixel spacing in the 750 times 750 grid will be less than in the original image

Problem assignment of intensity-level in the new 750 times 750 grid

Nearest neighbor interpolation assign for every point in the new grid (750 times 750) the

intensity of the closest pixel (nearest neighbor) from the oldoriginal grid (500 times 500)

This technique has the tendency to produce undesirable effects like severe distortion of

straight edges

Digital Image Processing

Week 1

Bilinear interpolation ndash assign for the new (x y) location the following intensity ( )v x y a x b y c x y d

where the four coefficients are determined from the 4 equations in 4 unknowns that can

be written using the 4 nearest neighbors of point (x y)

Bilinear interpolation gives much better results than nearest neighbor interpolation with a

modest increase in computational effort

Bicubic interpolation ndash assign for the new (x y) location an intensity that involves the 16

nearest neighbors of the point 3 3

0 0

( ) i ji j

i jv x y c x y

The coefficients cij are obtained solving a 16x16 linear system

intensity levels of the 16 nearest neighbors of 3 3

0 0

( )i ji j

i jc x y x y

Digital Image Processing

Week 1

Generally bicubic interpolation does a better job of preserving fine detail than the

bilinear technique Bicubic interpolation is the standard used in commercial image editing

programs such as Adobe Photoshop and Corel Photopaint

Figure 2 (a) is the same as Fig 1 (d) which was obtained by reducing the resolution of

the 1250 dpi in Fig 1(a) to 72 dpi (the size shrank from 3692 times 2812 to 213 times 162) and

then zooming the reduced image back to its original size To generate Fig 1(d) nearest

neighbor interpolation was used (both for shrinking and zooming)

Figures 2(b) and (c) were generated using the same steps but using bilinear and bicubic

interpolation respectively Figures 2(d)+(e)+(f) were obtained by reducing the resolution

from 1250 dpi to 150 dpi (instead of 72 dpi)

Digital Image Processing

Week 1

Fig 2 ndash Interpolation examples for zooming and shrinking (nearest neighbor linear bicubic)

Digital Image Processing

Week 1

Neighbors of a Pixel

A pixel p at coordinates (x y) has 4 horizontal and vertical neighbors horizontal vertical( 1 ) ( 1 ) ( 1) ( 1)x y x y x y x y

This set of pixels called the 4-neighbors of p denoted by N4 (p)

The 4 diagonal neighbors of p have coordinates ( 1 1) ( 1 1) ( 1 1) ( 1 1)x y x y x y x y

and are denoted ND(p)

The horizontal vertical and diagonal neighbors are called the 8-neighbors of p denoted

N8 (p)

If (xy) is on the border of the image some of the neighbor locations in ND(p) and N8(p)

fall outside the image

Digital Image Processing

Week 1

Adjacency Connectivity Regions Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image V 01 (V=0 V=1)

- in a gray-scale image with 256 possible gray-levels V can be any subset of 0255

We consider 3 types of adjacency

(a) 4-adjacency two pixels p and q with values from V are 4-adjacent if 4( )q N p

(b) 8-adjacency two pixels p and q with values from V are 8-adjacent if 8( )q N p

(c) m-adjacency (mixed adjacency) two pixels p and q with values from V are

m-adjacent if

4( )q N p or

( )Dq N p and the set 4 4( ) ( )N p N q has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency It is introduced to eliminate the

ambiguities that often arise when 8-adjacency is used Consider the example

Digital Image Processing

Week 1

binary image

0 1 1 0 1 1 0 1 1

1 0 1 0 0 1 0 0 1 0

0 0 1 0 0 1 0 0 1

V

The three pixels at the top (first line) in the above example show multiple (ambiguous)

8-adjacency as indicated by the dashed lines This ambiguity is removed by using

m-adjacency

Digital Image Processing

Week 1

A (digital) path (or curve) from pixel p with coordinates (xy) to q with coordinates (st)

is a sequence of distinct pixels with coordinates

and are adjacent 0 0 1 1

1 1

( ) ( ) ( ) ( ) ( )( ) ( ) 12

n n

i i i i

x y x y x y x y s tx y x y i n

The length of the path is n If 0 0( ) ( )n nx y x y the path is closed

Depending on the type of adjacency considered the paths are 4- 8- or m-paths

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image R is a region of the image if R is a connected set

Two regions R1 and R2 are said to be adjacent if 1 2R R form a connected set Regions

that are not adjacent are said to be disjoint When referring to regions only 4- and

8-adjacency are considered

Digital Image Processing

Week 1

Suppose that an image contains K disjoint regions 1 kR k K none of which

touches the image border

the complement of 1

( )K

cu k u u

k

R R R R

We call all the points in Ru the foreground of the image and the points in ( )cuR the

background of the image

The boundary (border or contour) of a region R is the set of points that are adjacent to

points in the complement of R (R)c The border of an image is the set of pixels in the

region that have at least one background neighbor This definition is referred to as the

inner border to distinguish it from the notion of outer border which is the corresponding

border in the background

Digital Image Processing

Week 1

Distance measures

For pixels p q and z with coordinates (xy) (st) and (vw) respectively D is a distance

function or metric if

(a) D(p q) ge 0 D(p q) = 0 iff p=q

(b) D(p q) = D(q p)

(c) D(p z) le D(p q) + D(q z)

The Euclidean distance between p and q is defined as 1

2 2 2 22( ) ( ) ( ) ( ) ( )eD p q x s y t x s y t

The pixels q for which ( )eD p q r are the points contained in a disk of radius r

centered at (x y)

Digital Image Processing

Week 1

The D4 distance (also called the city-block distance) between p and q is defined as

D4(p, q) = |x − s| + |y − t|

The pixels q for which D4(p, q) ≤ r form a diamond centered at (x, y); for example, the pixels with D4 ≤ 2:

        2
    2   1   2
2   1   0   1   2
    2   1   2
        2

The pixels with D4 = 1 are the 4-neighbors of (x, y).

The D8 distance (called the chessboard distance) between p and q is defined as

D8(p, q) = max(|x − s|, |y − t|)

The pixels q for which D8(p, q) ≤ r form a square centered at (x, y); for example, the pixels with D8 ≤ 2:

2   2   2   2   2
2   1   1   1   2
2   1   0   1   2
2   1   1   1   2
2   2   2   2   2

The pixels with D8 = 1 are the 8-neighbors of (x, y).

The D4 and D8 distances are independent of any paths that might exist between p and q, because these distances involve only the coordinates of the points.
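The diamond- and square-shaped level sets can be reproduced with a short numpy sketch (our own example, not from the text):

import numpy as np

x, y = 2, 2                                    # center pixel of a 5 x 5 grid
i, j = np.indices((5, 5))
d4 = np.abs(i - x) + np.abs(j - y)             # city-block distance
d8 = np.maximum(np.abs(i - x), np.abs(j - y))  # chessboard distance
print(d4)   # level sets are diamonds
print(d8)   # level sets are squares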


Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

Consider the 2 × 2 images (arrays)

| a11  a12 |        | b11  b12 |
| a21  a22 |        | b21  b22 |

Array product:

| a11  a12 |   | b11  b12 |   | a11·b11  a12·b12 |
| a21  a22 | * | b21  b22 | = | a21·b21  a22·b22 |

Matrix product:

| a11  a12 |   | b11  b12 |   | a11·b11 + a12·b21   a11·b12 + a12·b22 |
| a21  a22 | · | b21  b22 | = | a21·b11 + a22·b21   a21·b12 + a22·b22 |

We assume array operations unless stated otherwise
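In numpy this is the difference between the * and @ operators; a minimal illustration (our own example):

import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])
print(a * b)   # array (element-wise) product: [[5, 12], [21, 32]]
print(a @ b)   # matrix product:               [[19, 22], [43, 50]]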


Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether a method is linear or nonlinear. Consider an operator H that produces an output image g from an input image f:

H[f(x, y)] = g(x, y)

H is said to be a linear operator if

H[a·f1(x, y) + b·f2(x, y)] = a·H[f1(x, y)] + b·H[f2(x, y)]

for any scalars a, b and any images f1, f2.

Example of a nonlinear operator: the maximum value of the pixels of an image,

H[f] = max{ f(x, y) }

Take

f1 = | 0  2 |    f2 = | 6  5 |    a = 1,  b = −1
     | 2  3 |         | 4  7 |

Then

max(a·f1 + b·f2) = max | 0−6  2−5 | = max | −6  −3 | = −2
                       | 2−4  3−7 |       | −2  −4 |

whereas

a·max(f1) + b·max(f2) = 1·3 + (−1)·7 = 3 − 7 = −4 ≠ −2

so H is not linear.

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise:

g(x, y) = f(x, y) + η(x, y)

f(x, y) – the noiseless image; η(x, y) – the noise, assumed uncorrelated and with zero average value.

For a random variable z with mean m, E[(z − m)²] is the variance (E(·) denotes the expected value). The covariance of two random variables z1 and z2 is defined as E[(z1 − m1)(z2 − m2)]; the two random variables are uncorrelated when their covariance is 0.


Objective: reduce noise by averaging a set of noisy images gi(x, y) (a technique frequently used in image enhancement):

ḡ(x, y) = (1/K) Σ_{i=1}^{K} gi(x, y)

If the noise satisfies the properties stated above, we have

E{ḡ(x, y)} = f(x, y)    and    σ²_ḡ(x, y) = (1/K) · σ²_η(x, y)

where E{ḡ(x, y)} is the expected value of ḡ, and σ²_ḡ(x, y) and σ²_η(x, y) are the variances of ḡ and η, respectively. The standard deviation (square root of the variance) at any point in the average image is

σ_ḡ(x, y) = (1/√K) · σ_η(x, y)


As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because E{ḡ(x, y)} = f(x, y), this means that ḡ(x, y) approaches f(x, y) as the number of noisy images used in the averaging process increases.
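A minimal simulation of this effect (the stand-in image and noise parameters are our own choices, matching the standard deviation of 64 used below):

import numpy as np

rng = np.random.default_rng(0)
f = rng.uniform(0, 255, size=(64, 64))        # stand-in for the noiseless image
noisy = [f + rng.normal(0, 64, size=f.shape) for _ in range(100)]

for K in (5, 10, 20, 50, 100):
    g_bar = np.mean(noisy[:K], axis=0)        # average of the first K noisy images
    rms = np.sqrt(np.mean((g_bar - f) ** 2))  # residual noise, roughly 64 / sqrt(K)
    print(K, rms)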

An important application of image averaging is in the field of astronomy, where imaging under very low light levels frequently causes sensor noise to render single images virtually useless for analysis. Fig. 3(a) shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels. Figs. 3(b)–(f) show the results of averaging 5, 10, 20, 50 and 100 such images, respectively.


Fig. 3 – (a) Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise; (b)–(f) results of averaging 5, 10, 20, 50 and 100 noisy images.


A frequent application of image subtraction is in the enhancement of differences between

images

Fig. 4 – (a) Infrared image of the Washington, D.C. area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images.

Figure 4(b) was obtained by setting to zero the least significant bit of every pixel in Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between images (a) and (b): black (0) values indicate locations where there is no difference between images (a) and (b).


Mask mode radiography:

g(x, y) = f(x, y) − h(x, y)

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images, called live images (denoted f(x, y)), of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium.

In g(x, y) the differences between h and f appear as enhanced detail. Since the images are captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries in the area being observed.


Fig. 5 – Angiography subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c), enhanced.


An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images of the form

g(x, y) = f(x, y) · h(x, y)

f(x, y) – the "perfect image"; h(x, y) – the shading function.

When the shading function is known,

f(x, y) = g(x, y) / h(x, y)

If h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.
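A hedged sketch of that last situation: one common heuristic is to take a heavily blurred version of g as the shading estimate (the function name and the value of sigma are our assumptions, not a prescription from the text):

import numpy as np
from scipy.ndimage import gaussian_filter

def shading_correct(g, sigma=32, eps=1e-6):
    # estimate the slowly varying shading pattern with a heavy blur,
    # then divide it out: f ~ g / h
    h = gaussian_filter(g.astype(float), sigma=sigma)
    f = g / (h + eps)
    return 255 * f / f.max()   # rescale to [0, 255] for display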


Fig. 6 – Shading correction: (a) shaded image of a tungsten filament, magnified approximately 130 times; (b) shading pattern; (c) corrected image.


Another use of image multiplication is in masking, also called region of interest (ROI), operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of an ROI can be arbitrary, although it usually has a rectangular shape.

Fig. 7 – (a) Digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b).


In practice, most images are displayed using 8 bits, so image values are expected to lie in the range [0, 255]. For TIFF and JPEG images, conversion to this range is automatic, but the conversion depends on the system used. The difference of two images can produce an image with values in the range [−255, 255], and the addition of two images can produce values in the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure is to compute

f_m = f − min(f)

f_s = K · f_m / max(f_m)

which yields an image f_s whose values lie in [0, K] (K = 255 for 8-bit display).
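As a numpy sketch (our own helper name; assumes a non-constant image, so max(f_m) > 0):

import numpy as np

def rescale(f, K=255):
    # map an arbitrary-range image into [0, K]
    fm = f - f.min()
    return K * fm / fm.max()

diff = np.random.randint(0, 256, (4, 4)) - np.random.randint(0, 256, (4, 4))
print(rescale(diff.astype(float)))   # values now lie in [0, 255]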


Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity of the individual pixels:

s = T(z)

where z is the intensity of a pixel in the original image and s is the intensity of the corresponding pixel in the processed image.


Neighborhood operations

Let Sxy denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y) based on the values of the intensities of the points in Sxy. For example, if Sxy is a rectangular neighborhood of size m × n centered at (x, y), we can assign the new value of intensity by computing the average value of the pixels in Sxy:

g(x, y) = (1 / mn) Σ_{(r,c) ∈ Sxy} f(r, c)

The net effect is to perform local blurring of the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
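This local-average operation is what scipy's uniform_filter computes; a minimal sketch (the image is a random stand-in):

import numpy as np
from scipy.ndimage import uniform_filter

f = np.random.randint(0, 256, (128, 128)).astype(float)
g = uniform_filter(f, size=5)   # mean over a 5 x 5 neighborhood Sxy at every (x, y)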


Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules).

A geometric transformation consists of 2 basic operations

1. a spatial transformation of coordinates;

2. intensity interpolation, which assigns intensity values to the spatially transformed pixels.

The coordinate system transformation: (x, y) = T[(v, w)]

(v, w) – pixel coordinates in the original image

(x, y) – pixel coordinates in the transformed image


For example, T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform:

[x  y  1] = [v  w  1] · T,  where  T = | t11  t12  0 |
                                       | t21  t22  0 |        (AT)
                                       | t31  t32  1 |

that is,

x = t11·v + t21·w + t31

y = t12·v + t22·w + t32

This transform can scale, rotate, translate or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it and move the result to some location, we simply form a 3 × 3 matrix equal to the matrix product of the scaling, rotation and translation matrices from Table 1.


[Table 1 – Affine transformations: the matrices for identity, scaling, rotation, translation and shear]


The preceding transformations relocate pixels of an image to new locations. To complete the process, we have to assign intensity values to those locations. This task is done using intensity interpolation (nearest neighbor, bilinear or bicubic interpolation).

In practice we can use equation (AT) in two basic ways

forward mapping: scan the pixels of the input image (v, w) and compute the new spatial location (x, y) of the corresponding pixel in the new image using (AT) directly.

Problems:

- intensity assignment when 2 or more pixels of the original image are transformed to the same location in the output image;

- some output locations have no correspondent in the original image (no intensity assignment).


inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image as (v, w) = T^(−1)[(x, y)], then interpolate among the nearest input pixels to determine the intensity of the output pixel value.

Inverse mappings are more efficient to implement than forward mappings and are used in numerous commercial implementations of spatial transformations (MATLAB, for example).
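A compact sketch of inverse mapping with bilinear interpolation (our own function; a production implementation would vectorize the loops). Here T is a 1.5× scaling, so its inverse divides the output coordinates by 1.5:

import numpy as np

def warp_inverse(f, t_inv, out_shape):
    # geometric transform by inverse mapping: for every output pixel (x, y),
    # find (v, w) = T^{-1}(x, y) in the input and interpolate bilinearly
    out = np.zeros(out_shape)
    for x in range(out_shape[0]):
        for y in range(out_shape[1]):
            v, w = t_inv(x, y)
            v0, w0 = int(np.floor(v)), int(np.floor(w))
            if 0 <= v0 < f.shape[0] - 1 and 0 <= w0 < f.shape[1] - 1:
                a, b = v - v0, w - w0
                out[x, y] = ((1 - a) * (1 - b) * f[v0, w0]
                             + a * (1 - b) * f[v0 + 1, w0]
                             + (1 - a) * b * f[v0, w0 + 1]
                             + a * b * f[v0 + 1, w0 + 1])
    return out

f = np.random.randint(0, 256, (100, 100)).astype(float)
g = warp_inverse(f, lambda x, y: (x / 1.5, y / 1.5), (150, 150))   # 1.5x zoom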


Image registration – aligning two or more images of the same scene.

In image registration we have available the input and output images, but the specific transformation that produced the output image from the input is generally unknown. The problem is to estimate the transformation function and then use it to register the two images.

- it may be of interest to align (register) two or more images taken at approximately the same time but with different imaging systems (an MRI scanner and a PET scanner, for example);

- or to align images of a given location taken by the same instrument at different moments in time (satellite images).

The problem can be solved using tie points (also called control points): corresponding points whose locations are known precisely in the input and reference images.


How to select tie points?

- by selecting them interactively;

- by using algorithms that try to detect these points automatically;

- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors; these objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points.

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points both in the input image and in the reference image. A simple model based on a bilinear approximation is given by

x = c1·v + c2·w + c3·v·w + c4

y = c5·v + c6·w + c7·v·w + c8

where (v, w) and (x, y) are the coordinates of corresponding tie points (the 4 pairs of tie points yield an 8×8 linear system for the ci).
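Because the x and y equations decouple, the 8×8 system can equivalently be solved as two 4×4 systems; a sketch with hypothetical tie-point coordinates (invented for illustration):

import numpy as np

# (v, w): tie points in the input image; (x, y): the same points in the reference
vw = np.array([[10, 10], [10, 90], [90, 10], [90, 90]], dtype=float)
xy = np.array([[12, 14], [11, 95], [93, 13], [95, 98]], dtype=float)

# one row [v, w, v*w, 1] per tie point
A = np.column_stack([vw[:, 0], vw[:, 1], vw[:, 0] * vw[:, 1], np.ones(4)])
c1_c4 = np.linalg.solve(A, xy[:, 0])   # coefficients c1..c4 (for x)
c5_c8 = np.linalg.solve(A, xy[:, 1])   # coefficients c5..c8 (for y)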


When 4 tie points are insufficient to obtain satisfactory registration, an approach used frequently is to select a larger number of tie points and, using this new set, subdivide the image into rectangular regions marked by groups of 4 tie points. To each subregion marked by 4 tie points we apply the transformation model described above.

The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometric distortion.


(a) Reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c).


Probabilistic Methods

Let zi, i = 0, 1, ..., L−1, be the values of all possible intensities in an M × N digital image, and let p(zk) be the probability that intensity level zk occurs in the given image:

p(zk) = nk / (M·N)

where nk is the number of times that intensity zk occurs in the image (M·N is the total number of pixels in the image). Note that

Σ_{k=0}^{L−1} p(zk) = 1

The mean (average) intensity of an image is given by

m = Σ_{k=0}^{L−1} zk · p(zk)


The variance of the intensities is

σ² = Σ_{k=0}^{L−1} (zk − m)² · p(zk)

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

μn(z) = Σ_{k=0}^{L−1} (zk − m)^n · p(zk)

(so μ0(z) = 1, μ1(z) = 0 and μ2(z) = σ²)

μ3(z) > 0: the intensities are biased to values higher than the mean;

μ3(z) < 0: the intensities are biased to values lower than the mean;

μ3(z) ≈ 0: the intensities are distributed approximately equally on both sides of the mean.

Fig. 1 – (a) Low contrast; (b) medium contrast; (c) high contrast.

Figure 1(a) – standard deviation 14.3 (variance = 204.5)

Figure 1(b) – standard deviation 31.6 (variance = 998.6)

Figure 1(c) – standard deviation 49.2 (variance = 2420.6)
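The histogram-based quantities above are straightforward to compute; a small numpy sketch (our own function name):

import numpy as np

def intensity_stats(img, L=256):
    # p(z_k) from the histogram, then mean, variance and third central moment
    nk = np.bincount(img.ravel(), minlength=L)
    p = nk / img.size
    z = np.arange(L)
    m = np.sum(z * p)
    var = np.sum((z - m) ** 2 * p)
    mu3 = np.sum((z - m) ** 3 * p)
    return m, var, mu3

img = np.random.randint(0, 256, (64, 64))
print(intensity_stats(img))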


Intensity Transformations and Spatial Filtering

g(x, y) = T[f(x, y)]

f(x, y) – input image; g(x, y) – output image; T – an operator on f, defined over a neighborhood of (x, y).

The neighborhood Sxy of the point (x, y) usually is rectangular, centered on (x, y), and much smaller in size than the image.


Spatial filtering: the operator T (the neighborhood together with the operation applied on it) is called a spatial filter (also: spatial mask, kernel, template or window).

When Sxy = {(x, y)}, i.e., the neighborhood has size 1 × 1, T becomes an intensity (gray-level or mapping) transformation function

s = T(r)

where s and r denote, respectively, the intensities of g and f at (x, y).

Figure 2 (left): T produces an output image of higher contrast than the original by darkening the intensity levels below k and brightening the levels above k – this technique is called contrast stretching.

Fig. 2 – Intensity transformation functions: left – contrast stretching; right – thresholding function.


Figure 2 (right): T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L − 1] is obtained using the function

s = T(r) = L − 1 − r

- the equivalent of a photographic negative;

- a technique suited for enhancing white or gray detail embedded in dark regions of an image.


[Figure: an original image and its negative]


Log Transformations: s = T(r) = c · log(1 + r), where c is a constant and r ≥ 0.

Some basic intensity transformation functions


This transformation maps a narrow range of low intensity values in the input into a wider range of output values. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values; the opposite is true of the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a) – intensity values in the range 0 to 1.5 × 10^6

Figure 4(b) – log transformation of Figure 4(a) with c = 1 – range 0 to 6.2

Fig. 4 – (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.


Power-Law (Gamma) Transformations

s = T(r) = c · r^γ,  with c, γ positive constants (sometimes written as s = c · (r + ε)^γ, to account for an offset when the input is zero)

Plots of the gamma transformation for different values of γ (c = 1).


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. The curves with γ > 1 have the opposite effect of those generated with γ < 1. When c = γ = 1, we have the identity transformation.

A variety of devices used for image capture, printing and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
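The point transformations discussed so far (negative, log, power law) can be implemented as lookup tables over the 256 input levels; a sketch (the scaling constants and γ = 0.4 are our choices for illustration):

import numpy as np

r = np.arange(256, dtype=float)              # input gray levels, L = 256
neg = 255 - r                                # negative: s = L - 1 - r
log_t = 255 / np.log(256) * np.log(1 + r)    # log transform, c chosen so that s(255) = 255
gam = 255 * (r / 255) ** 0.4                 # power law, gamma = 0.4 (brightens dark values)

img = np.random.randint(0, 256, (64, 64))    # stand-in 8-bit image
out = gam[img].astype(np.uint8)              # apply a transform via table lookup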


(a) Aerial image; (b)–(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0 and 5.0, respectively.


Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so that it spans the full intensity range of the recording medium or display device.

Fig. 5 – (a) Piecewise-linear transformation function; (b) low-contrast image; (c) result of contrast stretching; (d) result of thresholding.


T(r) =

    (s1 / r1) · r,                                      r ∈ [0, r1]

    s1 + ((s2 − s1) / (r2 − r1)) · (r − r1),            r ∈ (r1, r2]

    s2 + ((L − 1 − s2) / (L − 1 − r2)) · (r − r2),      r ∈ (r2, L − 1]


If r1 = s1 and r2 = s2, T is the identity transformation (no change).

If r1 = r2, s1 = 0 and s2 = L − 1, T is a thresholding function.

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c) – contrast stretching, obtained by setting (r1, s1) = (rmin, 0) and (r2, s2) = (rmax, L − 1), where rmin and rmax denote the minimum and maximum gray levels in the image, respectively. The transformation function thus stretched the levels linearly from their original range to the full range [0, L − 1].

Figure 5(d) – the thresholding function was used, with (r1, s1) = (m, 0) and (r2, s2) = (m, L − 1), where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.
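A sketch of the piecewise-linear stretch using np.interp (our own helper; it assumes 0 < r1 < r2 < L − 1, so the thresholding limit case r1 = r2 needs separate handling):

import numpy as np

def contrast_stretch(img, r1, s1, r2, s2, L=256):
    # piecewise-linear mapping through (0, 0), (r1, s1), (r2, s2), (L-1, L-1)
    return np.interp(img.astype(float), [0, r1, r2, L - 1], [0, s1, s2, L - 1])

img = np.random.randint(80, 150, (64, 64))                  # a low-contrast image
out = contrast_stretch(img, img.min(), 0, img.max(), 255)   # full-range stretch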


Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches to intensity-level slicing:

1. display in one value (white, for example) all the intensities in the range of interest and in another (say, black) all other intensities – a transformation that highlights range [A, B] and reduces all other intensities to a lower level;

2. brighten (or darken) the desired range of intensities but leave all other intensities in the image unchanged – a transformation that highlights range [A, B] and preserves all other intensities.


Figure 6 (left) – aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the intensity scale. This type of enhancement produces a binary image, which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example).

In Figure 6 (right) the second technique was used: a band of intensities in the mid-gray range around the mean intensity was set to black, while the other intensities remain unchanged.

Fig. 6 – Aortic angiogram and intensity-sliced versions.
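Both slicing approaches are one-liners on top of numpy's where (the function names are ours):

import numpy as np

def slice_binary(img, A, B):
    # first approach: white inside [A, B], black elsewhere
    return np.where((img >= A) & (img <= B), 255, 0)

def slice_preserve(img, A, B, value=255):
    # second approach: highlight [A, B], leave all other intensities unchanged
    return np.where((img >= A) & (img <= B), value, img)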


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2. This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit, plane 8 – the highest-order bit).


The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all the intensities between 0 and 127 to 0, and all levels between 128 and 255 to 1.

The bit-slicing technique is useful for analyzing the relative importance of each bit in the image, which helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
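A sketch of bit-plane extraction for an 8-bit image (our own example); note that the 8-th plane coincides with the thresholded image described above:

import numpy as np

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
planes = [(img >> k) & 1 for k in range(8)]   # planes[0] = plane 1, ..., planes[7] = plane 8
assert np.array_equal(planes[7], (img >= 128).astype(np.uint8))

# coarse reconstruction from the four highest-order planes only:
approx = sum(planes[k].astype(np.uint16) << k for k in range(4, 8)).astype(np.uint8)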




(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 41: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Example applications: face detection and recognition, gender identification, image morphing.

Fundamental Steps in DIP

Two broad categories:
• methods whose input and output are images
• methods whose inputs are images but whose outputs are attributes extracted from those images

Outputs are images:
• image acquisition
• image filtering and enhancement
• image restoration
• color image processing
• wavelets and multiresolution processing
• compression
• morphological processing

Outputs are attributes:
• morphological processing
• segmentation
• representation and description
• object recognition

Image acquisition – may involve preprocessing such as scaling.

Image enhancement
• manipulating an image so that the result is more suitable than the original for a specific operation
• enhancement is problem oriented
• there is no general 'theory' of image enhancement
• enhancement uses subjective methods for image improvement
• enhancement is based on human subjective preferences regarding what is a "good" enhancement result

Image restoration
• improving the appearance of an image
• restoration is objective – the techniques for restoration are based on mathematical or probabilistic models of image degradation

Color image processing
• fundamental concepts in color models
• basic color processing in a digital domain

Wavelets and multiresolution processing
• representing images in various degrees of resolution

Compression
• reducing the storage required to save an image, or the bandwidth required to transmit it

Morphological processing
• tools for extracting image components that are useful in the representation and description of shape
• a transition from processes that output images to processes that output image attributes

Segmentation
• partitioning an image into its constituent parts or objects
• autonomous segmentation is one of the most difficult tasks of DIP
• the more accurate the segmentation, the more likely recognition is to succeed

Representation and description (almost always follows segmentation)
• segmentation produces either the boundary of a region or all the points in the region itself
• converting the data produced by segmentation to a form suitable for computer processing

• boundary representation: the focus is on external shape characteristics, such as corners or inflections
• complete region: the focus is on internal properties, such as texture or skeletal shape
• description is also called feature extraction – extracting attributes that result in some quantitative information of interest, or that are basic for differentiating one class of objects from another

Object recognition
• the process of assigning a label (e.g. "vehicle") to an object based on its descriptors

Knowledge database
• encodes prior knowledge about the problem domain and interacts with all the processing modules

(Figure: simplified diagram of a cross section of the human eye.)

Three membranes enclose the eye: the cornea and sclera (outer cover), the choroid, and the retina.

The cornea is a tough, transparent tissue that covers the anterior surface of the eye. Continuous with the cornea, the sclera is an opaque membrane that encloses the remainder of the optic globe.

The choroid lies directly below the sclera. This membrane contains a network of blood vessels (the major source of nutrition of the eye). The choroid is pigmented and helps reduce the amount of light entering the eye. At its anterior extreme, the choroid is divided into the ciliary body and the iris. The iris contracts and expands to control the amount of light admitted.

The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to the ciliary body (60-70% water, about 6% fat, the rest mostly protein). The lens is slightly yellow and absorbs approximately 8% of the visible light (infrared and ultraviolet light are absorbed by proteins in the lens).

The innermost membrane is the retina. When the eye is properly focused, light from an object outside the eye is imaged on the retina. Vision is possible because of the distribution of discrete light receptors over the surface of the retina: cones and rods (6-7 million cones, 75-150 million rods).

Cones: located in the central part of the retina (the fovea); sensitive to color; provide vision of detail; each cone is linked to its own nerve. Cone vision = photopic or bright-light vision.

Fovea = the place where the image of the object of interest falls.

Rods: distributed over the entire retinal surface; several rods are connected to a single nerve; not specialized in detail vision; serve to give a general, overall picture of the field of view; not involved in color vision; sensitive to low levels of illumination.

Blind spot: region of the retina without receptors.

Image formation in the eye

Ordinary photographic camera: the lens has a fixed focal length; focusing at various distances is done by modifying the distance between the lens and the image plane (where the film or imaging chip is located).

Human eye: the distance between the lens and the retina (the imaging region) is fixed; the focal length needed to achieve proper focus is obtained by varying the shape of the lens (the fibers in the ciliary body accomplish this, flattening or thickening the lens for distant or near objects, respectively).

Distance between lens and retina along the visual axis = 17 mm; range of focal lengths = 14 mm to 17 mm.

(Figure: illustration of the Mach band effect. Perceived intensity is not a simple function of actual intensity.)

(Figure: simultaneous contrast – all the inner squares have the same intensity, but they appear progressively darker as the background becomes lighter.)

(Figure: optical illusions.)

(Figure: color-naming illusion with Romanian color words – albastru = blue, verde = green, galben = yellow, roșu = red, portocaliu = orange, vișiniu = burgundy, alb = white.)

Light
• achromatic or monochromatic light – light that is void of color; the only attribute of such light is its intensity, or amount. The term gray level is used to describe monochromatic intensity, because it ranges from black, to grays, and finally to white.
• chromatic light spans the electromagnetic energy spectrum from approximately 0.43 to 0.79 μm.

Quantities that describe the quality of a chromatic light source:
• radiance – the total amount of energy that flows from the light source; usually measured in watts (W)
• luminance – measured in lumens (lm); gives a measure of the amount of energy an observer perceives from a light source

For example, light emitted from a source operating in the far infrared region of the spectrum could have significant energy (radiance), but an observer would hardly perceive it; its luminance would be almost zero.

• brightness – a subjective descriptor of light perception that is practically impossible to measure. It embodies the achromatic notion of intensity and is one of the key factors in describing color sensation.

An image is a function $f(x,y)$ defined on a domain $D$; the physical meaning of its values is determined by the source of the image.

For an image generated from a physical process, $f(x,y)$ is proportional to the energy radiated by the physical source, and is characterized by two components:

i(x, y) = illumination component: the amount of source illumination incident on the scene being viewed;
r(x, y) = reflectance component: the amount of illumination reflected by the objects in the scene.

$$f(x,y) = i(x,y)\, r(x,y), \qquad 0 < i(x,y) < \infty, \quad 0 < r(x,y) < 1$$

r(x, y) = 0 – total absorption; r(x, y) = 1 – total reflectance.
i(x, y) – determined by the illumination source.
r(x, y) – determined by the characteristics of the imaged objects.

In practice, $L_{\min} \le f(x,y) \le L_{\max}$, where $L_{\min} = i_{\min} r_{\min}$ and $L_{\max} = i_{\max} r_{\max}$; typical indoor values without additional illumination are $L_{\min} \approx 10$ and $L_{\max} \approx 1000$. The interval $[L_{\min}, L_{\max}]$ is called the gray (or intensity) scale. Common practice is to shift this interval to $[0, L-1]$, where $l = 0$ is black and $l = L-1$ is white; all intermediate values are shades of gray.

Image Sampling and Quantization

The output of most sensors is a continuous voltage waveform related to the sensed scene. To create a digital image, a continuous image f must be converted to digital form:
- digitizing the coordinate values (x, y) is called sampling;
- digitizing the amplitude values f(x, y) is called quantization.

(Figure: continuous image projected onto a sensor array; result of image sampling and quantization.)

Representing Digital Images

(x, y), x = 0, 1, …, M-1, y = 0, 1, …, N-1 – spatial variables or spatial coordinates. The M×N digital image can be written as

$$f(x,y)=\begin{bmatrix} f(0,0) & f(0,1) & \cdots & f(0,N-1)\\ f(1,0) & f(1,1) & \cdots & f(1,N-1)\\ \vdots & \vdots & & \vdots\\ f(M-1,0) & f(M-1,1) & \cdots & f(M-1,N-1)\end{bmatrix}$$

Each element of this matrix is called an image element, or pixel. In equivalent matrix notation,

$$A=[a_{i,j}]_{M\times N}, \qquad a_{i,j} = f(x=i,\, y=j) = f(i,j)$$

f(0,0) is the upper left corner of the image.

M, N > 0; the number of intensity levels is typically an integer power of 2, L = 2^k, and $a_{i,j} \in [0, L-1]$.

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system. The upper limit is determined by saturation, the lower limit by noise.

Number of bits required to store a digitized image:

$$b = M \times N \times k \qquad (\text{for } M = N,\; b = N^2 k)$$

When an image can have 2^k intensity levels, the image is referred to as a k-bit image; an image with 256 discrete intensity values is an 8-bit image. For example, a 1024 × 1024, 8-bit image requires 1024 · 1024 · 8 = 8,388,608 bits (1 MB) of storage.

Spatial and Intensity Resolution

Spatial resolution – the smallest discernible detail in an image.
Measures: line pairs per unit distance; dots (pixels) per unit distance.
Image resolution = the largest number of discernible line pairs per unit distance (e.g. 100 line pairs per mm).
Dots per unit distance is the measure commonly used in printing and publishing; in the US it is expressed in dots per inch (dpi). (Newspapers are printed at 75 dpi, glossy brochures at 175 dpi.)

Intensity resolution – the smallest discernible change in intensity level.
The number of intensity levels (L) is usually determined by hardware considerations: L = 2^k, most commonly with k = 8. In practice, intensity resolution is given by k, the number of bits used to quantize intensity.

Fig. 1 Reducing spatial resolution: 1250 dpi (upper left), 300 dpi (upper right), 150 dpi (lower left), 72 dpi (lower right).

(Figure: reducing the number of gray levels – 256, 128, 64, 32.)

(Figure: reducing the number of gray levels – 16, 8, 4, 2.)

Image Interpolation – used in zooming, shrinking, rotating, and geometric corrections.

Shrinking and zooming (image resizing) are image resampling methods.

Interpolation is the process of using known data to estimate values at unknown locations

Suppose we have an image of size 500 × 500 pixels that has to be enlarged 1.5 times, to 750 × 750 pixels. One way to do this is to create an imaginary 750 × 750 grid with the same pixel spacing as the original, and then shrink it so that it fits exactly over the original image. The pixel spacing in the shrunken 750 × 750 grid will be less than in the original image.

Problem: assignment of intensity levels in the new 750 × 750 grid.

Nearest neighbor interpolation: assign to every point in the new grid (750 × 750) the intensity of the closest pixel (nearest neighbor) from the old/original grid (500 × 500).

This technique has the tendency to produce undesirable effects like severe distortion of

straight edges


Bilinear interpolation – assign to the new (x, y) location the intensity

$$v(x,y) = a\,x + b\,y + c\,x y + d$$

where the four coefficients are determined from the 4 equations in 4 unknowns that can be written using the 4 nearest neighbors of point (x, y).

Bilinear interpolation gives much better results than nearest neighbor interpolation with a

modest increase in computational effort

Bicubic interpolation – assign to the new (x, y) location an intensity that involves the 16 nearest neighbors of the point:

$$v(x,y) = \sum_{i=0}^{3}\sum_{j=0}^{3} c_{ij}\, x^i y^j$$

The 16 coefficients $c_{ij}$ are obtained by solving a 16×16 linear system written using the intensity levels of the 16 nearest neighbors of (x, y).
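As a concrete illustration of the first two schemes, here is a minimal NumPy sketch (not from the course materials; the function names and the grid-mapping details are illustrative choices) that zooms a grayscale array by nearest-neighbor and by bilinear interpolation:

```python
import numpy as np

def zoom_nearest(f, new_h, new_w):
    """Nearest-neighbor interpolation: each output pixel copies the closest input pixel."""
    h, w = f.shape
    rows = np.clip(np.round(np.arange(new_h) * h / new_h).astype(int), 0, h - 1)
    cols = np.clip(np.round(np.arange(new_w) * w / new_w).astype(int), 0, w - 1)
    return f[rows[:, None], cols[None, :]]

def zoom_bilinear(f, new_h, new_w):
    """Bilinear interpolation: weighted average of the 4 nearest input neighbors."""
    h, w = f.shape
    y = np.arange(new_h) * (h - 1) / (new_h - 1)   # map output rows onto the input grid
    x = np.arange(new_w) * (w - 1) / (new_w - 1)
    y0 = np.floor(y).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(x).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (y - y0)[:, None]; wx = (x - x0)[None, :]
    f = f.astype(float)
    top = f[y0[:, None], x0] * (1 - wx) + f[y0[:, None], x1] * wx
    bot = f[y1[:, None], x0] * (1 - wx) + f[y1[:, None], x1] * wx
    return top * (1 - wy) + bot * wy

img = (np.arange(25).reshape(5, 5) * 10).astype(np.uint8)
print(zoom_nearest(img, 8, 8).shape, zoom_bilinear(img, 8, 8).shape)  # (8, 8) (8, 8)
```

The same functions also shrink an image (new_h, new_w smaller than the original), which is how the zoom/shrink experiments of Fig. 2 below can be reproduced.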

Generally, bicubic interpolation does a better job of preserving fine detail than the bilinear technique. Bicubic interpolation is the standard used in commercial image editing programs such as Adobe Photoshop and Corel PhotoPaint.

Figure 2(a) is the same as Fig. 1(d), which was obtained by reducing the resolution of the 1250 dpi image in Fig. 1(a) to 72 dpi (the size shrank from 3692 × 2812 to 213 × 162 pixels) and then zooming the reduced image back to its original size; nearest neighbor interpolation was used both for shrinking and for zooming.

Figures 2(b) and (c) were generated using the same steps but with bilinear and bicubic interpolation, respectively. Figures 2(d)-(f) were obtained by reducing the resolution from 1250 dpi to 150 dpi (instead of 72 dpi).

Fig. 2 – Interpolation examples for zooming and shrinking (nearest neighbor, bilinear, bicubic).

Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors:

(x+1, y), (x-1, y), (x, y+1), (x, y-1)

This set of pixels, called the 4-neighbors of p, is denoted N4(p).

The 4 diagonal neighbors of p have coordinates

(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)

and are denoted ND(p).

The horizontal, vertical, and diagonal neighbors together are called the 8-neighbors of p, denoted N8(p).

If (x, y) is on the border of the image, some of the neighbor locations in ND(p) and N8(p) fall outside the image.
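A small sketch (an illustrative helper, assuming (x, y) = (row, column) indexing in an M × N image) that enumerates these neighbor sets and drops the locations that fall outside the image:

```python
def neighbors(x, y, M, N):
    """Return N4, ND, and N8 of pixel (x, y) in an M x N image,
    keeping only coordinates that fall inside the image."""
    n4 = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    nd = [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]
    inside = lambda p: 0 <= p[0] < M and 0 <= p[1] < N
    n4 = [p for p in n4 if inside(p)]
    nd = [p for p in nd if inside(p)]
    return n4, nd, n4 + nd          # N8(p) = N4(p) U ND(p)

print(neighbors(0, 0, 4, 4))        # corner pixel: only 3 of the 8 neighbors survive
```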

Adjacency, Connectivity, Regions, Boundaries

Denote by V the set of intensity levels used to define adjacency:
- in a binary image, V ⊆ {0, 1} (V = {0} or V = {1});
- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, …, 255}.

We consider 3 types of adjacency:
(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if q ∈ N4(p);
(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if q ∈ N8(p);
(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if
    q ∈ N4(p), or
    q ∈ ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.

Mixed adjacency is a modification of 8-adjacency. It is introduced to eliminate the ambiguities that often arise when 8-adjacency is used. Consider the example:

(Figure: a small binary image with V = {1}, shown three times – the original pixels, the 8-adjacency paths among them, and the m-adjacency path.)

The three pixels at the top (first line) of the example show multiple (ambiguous) 8-adjacency, as indicated by the dashed lines. This ambiguity is removed by using m-adjacency.

A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

(x0, y0), (x1, y1), …, (xn, yn)

where (x0, y0) = (x, y), (xn, yn) = (s, t), and the pixels (xi, yi) and (xi-1, yi-1) are adjacent for i = 1, 2, …, n.

The length of the path is n. If (x0, y0) = (xn, yn), the path is closed. Depending on the type of adjacency considered, the paths are 4-, 8-, or m-paths.

Let S denote a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting only of pixels from S. S is a connected set if there is a path in S between any 2 pixels of S.

Let R be a subset of pixels in an image. R is a region of the image if R is a connected set. Two regions R1 and R2 are said to be adjacent if R1 ∪ R2 forms a connected set; regions that are not adjacent are said to be disjoint. When referring to regions, only 4- and 8-adjacency are considered.

Suppose that an image contains K disjoint regions Rk, k = 1, …, K, none of which touches the image border. Let Ru denote their union,

$$R_u = \bigcup_{k=1}^{K} R_k$$

and let (Ru)^c denote its complement. We call all the points in Ru the foreground of the image, and the points in (Ru)^c the background of the image.

The boundary (border or contour) of a region R is the set of points of R that are adjacent to points in the complement of R, (R)^c. In other words, the border of a region is the set of pixels in the region that have at least one background neighbor. This definition is referred to as the inner border, to distinguish it from the notion of outer border, which is the corresponding border in the background.

Distance measures

For pixels p, q, and z with coordinates (x, y), (s, t), and (v, w) respectively, D is a distance function, or metric, if
(a) D(p, q) ≥ 0, with D(p, q) = 0 iff p = q;
(b) D(p, q) = D(q, p);
(c) D(p, z) ≤ D(p, q) + D(q, z).

The Euclidean distance between p and q is defined as

$$D_e(p,q) = \left[(x-s)^2 + (y-t)^2\right]^{1/2}$$

The pixels q for which $D_e(p,q) \le r$ are the points contained in a disk of radius r centered at (x, y).

The D4 distance (also called city-block distance) between p and q is defined as

$$D_4(p,q) = |x-s| + |y-t|$$

The pixels q for which $D_4(p,q) \le r$ form a diamond centered at (x, y). For example, the pixels with D4 ≤ 2 form the following contours of constant distance:

        2
      2 1 2
    2 1 0 1 2
      2 1 2
        2

The pixels with D4 = 1 are the 4-neighbors of (x, y).

The D8 distance (called the chessboard distance) between p and q is defined as

$$D_8(p,q) = \max(|x-s|,\, |y-t|)$$

The pixels q for which $D_8(p,q) \le r$ form a square centered at (x, y). For example, the pixels with D8 ≤ 2 form the following contours of constant distance:

    2 2 2 2 2
    2 1 1 1 2
    2 1 0 1 2
    2 1 1 1 2
    2 2 2 2 2

The pixels with D8 = 1 are the 8-neighbors of (x, y).

The D4 and D8 distances are independent of any paths that might exist between p and q, because these distances involve only the coordinates of the points.
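The three metrics in code form (a hypothetical helper; pixels are passed as coordinate tuples):

```python
import numpy as np

def d_euclid(p, q):
    return np.hypot(p[0] - q[0], p[1] - q[1])       # De: disk-shaped contours

def d_cityblock(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])      # D4: diamond-shaped contours

def d_chessboard(p, q):
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))  # D8: square-shaped contours

p, q = (0, 0), (3, 4)
print(d_euclid(p, q), d_cityblock(p, q), d_chessboard(p, q))  # 5.0 7 4
```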

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis. For two 2×2 images, the array product is

$$\begin{bmatrix} a_{11} & a_{12}\\ a_{21} & a_{22}\end{bmatrix} \odot \begin{bmatrix} b_{11} & b_{12}\\ b_{21} & b_{22}\end{bmatrix} = \begin{bmatrix} a_{11}b_{11} & a_{12}b_{12}\\ a_{21}b_{21} & a_{22}b_{22}\end{bmatrix}$$

whereas the matrix product is

$$\begin{bmatrix} a_{11} & a_{12}\\ a_{21} & a_{22}\end{bmatrix} \begin{bmatrix} b_{11} & b_{12}\\ b_{21} & b_{22}\end{bmatrix} = \begin{bmatrix} a_{11}b_{11}+a_{12}b_{21} & a_{11}b_{12}+a_{12}b_{22}\\ a_{21}b_{11}+a_{22}b_{21} & a_{21}b_{12}+a_{22}b_{22}\end{bmatrix}$$

We assume array operations unless stated otherwise.
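In NumPy, for example, the two products are written differently (* is the element-wise array product, @ is the matrix product):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A * B)   # array (element-wise) product: [[ 5 12] [21 32]]
print(A @ B)   # matrix product:               [[19 22] [43 50]]
```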

Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether a method is linear or nonlinear. Consider an operator H that produces an output image g from an input image f:

$$H[f(x,y)] = g(x,y)$$

H is said to be a linear operator if

$$H[a f_1(x,y) + b f_2(x,y)] = a\, H[f_1(x,y)] + b\, H[f_2(x,y)]$$

for arbitrary constants a, b and images f1, f2.

Example of a nonlinear operator: H[f] = max f(x, y), the maximum value of the pixels of image f. Take

$$f_1 = \begin{bmatrix} 0 & 2\\ 2 & 3\end{bmatrix}, \quad f_2 = \begin{bmatrix} 6 & 5\\ 4 & 7\end{bmatrix}, \quad a = 1, \; b = -1$$

Then

$$\max(a f_1 + b f_2) = \max\begin{bmatrix} -6 & -3\\ -2 & -4\end{bmatrix} = -2$$

but

$$a \max(f_1) + b \max(f_2) = 3 - 7 = -4 \neq -2$$

so the max operator is nonlinear.
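The same check, run numerically (values taken from the example above):

```python
import numpy as np

f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 5], [4, 7]])
a, b = 1, -1

lhs = np.max(a * f1 + b * f2)            # max of the combination: -2
rhs = a * np.max(f1) + b * np.max(f2)    # combination of the maxima: -4
print(lhs, rhs, lhs == rhs)              # -2 -4 False -> max is nonlinear
```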

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise to a noiseless image f(x, y):

$$g(x,y) = f(x,y) + \eta(x,y)$$

where the noise η(x, y) is uncorrelated and has zero average value.

For a random variable z with mean m, E[(z-m)^2] is the variance (E(·) is the expected value). The covariance of two random variables z1 and z2 is defined as E[(z1-m1)(z2-m2)]; the two random variables are uncorrelated when their covariance is 0.

Objective: reduce the noise by averaging a set of K noisy images g_i(x, y) (a technique frequently used in image enhancement):

$$\bar{g}(x,y) = \frac{1}{K}\sum_{i=1}^{K} g_i(x,y)$$

If the noise satisfies the properties stated above, then

$$E\{\bar{g}(x,y)\} = f(x,y), \qquad \sigma^2_{\bar{g}(x,y)} = \frac{1}{K}\,\sigma^2_{\eta(x,y)}$$

where $E\{\bar{g}(x,y)\}$ is the expected value of $\bar{g}$, and $\sigma^2_{\bar{g}(x,y)}$ and $\sigma^2_{\eta(x,y)}$ are the variances of $\bar{g}$ and η, respectively. The standard deviation (square root of the variance) at any point of the average image is

$$\sigma_{\bar{g}(x,y)} = \frac{1}{\sqrt{K}}\,\sigma_{\eta(x,y)}$$

As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because $E\{\bar{g}(x,y)\} = f(x,y)$, this means that $\bar{g}(x,y)$ approaches f(x, y) as the number of noisy images used in the averaging process increases.

An important application of image averaging is in the field of astronomy, where imaging under very low light levels frequently causes sensor noise to render single images virtually useless for analysis. Figure 3 (top left) shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels. The remaining panels of Figure 3 show the results of averaging 5, 10, 20, 50, and 100 such images, respectively.

Fig. 3 (a) Image of the galaxy pair NGC 3314 corrupted by additive Gaussian noise (top left); (b)-(f) results of averaging 5, 10, 20, 50, and 100 noisy images.
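A quick simulation of this effect (synthetic data only – a flat "scene" plus Gaussian noise of standard deviation 64, as in the example above; the residual noise should shrink roughly as σ/√K):

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 100.0)                     # noiseless "scene"

def noisy(sigma=64.0):
    return f + rng.normal(0.0, sigma, f.shape)   # g_i = f + eta

for K in (1, 5, 10, 20, 50, 100):
    g_bar = np.mean([noisy() for _ in range(K)], axis=0)
    print(K, round(np.std(g_bar - f), 1))        # residual std ~ 64 / sqrt(K)
```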

A frequent application of image subtraction is in the enhancement of differences between

images

Fig. 4 (a) Infrared image of the Washington, D.C. area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images.

Figure 4(b) was obtained by setting to zero the least significant bit of every pixel in Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between images (a) and (b); black (0) values in (c) indicate locations where there is no difference between images (a) and (b).

Mask mode radiography:

$$g(x,y) = f(x,y) - h(x,y)$$

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images, called live images (denoted f(x, y)), of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium.

In g(x, y) the differences between h and f appear as enhanced detail. Because the images are captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries of the area being observed.

Fig. 5 – Angiography subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c), enhanced.

An important application of image multiplication (and division) is shading correction. Suppose that an imaging sensor produces images in the form

$$g(x,y) = f(x,y)\, h(x,y)$$

where f(x, y) is the "perfect image" and h(x, y) is the shading function. When the shading function is known, the perfect image can be recovered by division:

$$f(x,y) = \frac{g(x,y)}{h(x,y)}$$

If h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.

Fig. 6 Shading correction: (a) shaded image of a tungsten filament, magnified approximately 130 times; (b) shading pattern; (c) corrected image.

Another use of image multiplication is in masking, also called region of interest (ROI), operations. The process consists of multiplying a given image by a mask image that has 1s (white) in the ROI and 0s elsewhere. There can be more than one ROI in the mask image, and the shape of the ROI can be arbitrary, although it usually is rectangular.

Fig. 7 (a) Digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b).
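A compact sketch of both operations (the arrays below are made-up stand-ins for the sensed image and the estimated shading pattern):

```python
import numpy as np

g = np.random.default_rng(1).uniform(50, 200, (4, 4))  # sensed image g = f * h
h = np.linspace(0.5, 1.0, 16).reshape(4, 4)            # estimated shading pattern

f = g / h                      # shading correction: f(x, y) = g(x, y) / h(x, y)

mask = np.zeros_like(f)
mask[1:3, 1:3] = 1             # one rectangular ROI of 1s, 0s elsewhere
roi = f * mask                 # masking: intensities outside the ROI become 0
print(roi)
```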

In practice, most images are displayed using 8 bits, so the image values are expected to lie in the range [0, 255] (for TIFF and JPEG images the conversion to this range is automatic, and depends on the system used). However, the difference of two images can produce values in the range [-255, 255], and the addition of two images values in the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure is to rescale: first shift,

$$f_m = f - \min(f)$$

and then scale to the full display range,

$$f_s = K \cdot \frac{f_m}{\max(f_m)}$$

which yields values of $f_s$ in [0, K] (K = 255 for 8-bit displays).
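A sketch of that rescaling procedure (the function name is illustrative):

```python
import numpy as np

def scale_intensities(f, K=255):
    """Map an arbitrary-range image into [0, K], as described above."""
    fm = f - f.min()                        # f_m = f - min(f), now starts at 0
    return (K * fm / fm.max()).astype(np.uint8)

diff = np.array([[-255, 0], [128, 255]], dtype=float)  # e.g. a difference image
print(scale_intensities(diff))              # [[  0 127] [191 255]]
```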

Spatial Operations

Spatial operations are performed directly on the pixels of a given image. There are three categories of spatial operations:
• single-pixel operations
• neighborhood operations
• geometric spatial transformations

Single-pixel operations

Single-pixel operations change the values of intensity of the individual pixels:

$$s = T(z)$$

where z is the intensity of a pixel in the original image and s is the intensity of the corresponding pixel in the processed image.

Neighborhood operations

Let S_xy denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y) based on the values of the intensities of the points in S_xy. For example, if S_xy is a rectangular neighborhood of size m × n centered on (x, y), we can assign the new value of intensity by computing the average value of the pixels in S_xy:

$$g(x,y) = \frac{1}{mn}\sum_{(r,c)\in S_{xy}} f(r,c)$$

The net effect is to perform local blurring of the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
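A direct implementation of this local averaging (a sketch; border pixels are handled here by edge replication, which is one of several possible choices):

```python
import numpy as np

def local_mean(f, m=3, n=3):
    """Replace each pixel by the average of its m x n neighborhood S_xy."""
    pm, pn = m // 2, n // 2
    fp = np.pad(f.astype(float), ((pm, pm), (pn, pn)), mode="edge")
    out = np.zeros(f.shape, dtype=float)
    for i in range(m):                       # sum the m*n shifted copies
        for j in range(n):
            out += fp[i:i + f.shape[0], j:j + f.shape[1]]
    return out / (m * n)

img = np.zeros((5, 5)); img[2, 2] = 9.0
print(local_mean(img))   # the bright pixel is blurred over its 3x3 neighborhood
```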

Geometric spatial transformations and image registration

Geometric transformations modify the spatial relationship between the pixels of an image. They are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules).

A geometric transformation consists of 2 basic operations:
1. a spatial transformation of coordinates;
2. intensity interpolation, which assigns intensity values to the spatially transformed pixels.

The coordinate system transformation is

$$(x, y) = T[(v, w)]$$

where (v, w) are pixel coordinates in the original image and (x, y) are pixel coordinates in the transformed image.

For example, $T[(v,w)] = (v/2,\, w/2)$ shrinks the original image to half its size in both spatial directions.

Affine transform

$$[x \;\; y \;\; 1] = [v \;\; w \;\; 1]\,T = [v \;\; w \;\; 1]\begin{bmatrix} t_{11} & t_{12} & 0\\ t_{21} & t_{22} & 0\\ t_{31} & t_{32} & 1\end{bmatrix} \qquad \text{(AT)}$$

that is, $x = t_{11} v + t_{21} w + t_{31}$ and $y = t_{12} v + t_{22} w + t_{32}$.

This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3×3 matrix equal to the matrix product of the scaling, rotation, and translation matrices from Table 1.

(Table 1 – affine transformation matrices: identity, scaling, rotation, translation, shear.)

The preceding transformations relocate pixels of an image to new locations. To complete the process, we have to assign intensity values to those locations; this is done by intensity interpolation (nearest neighbor, bilinear, or bicubic interpolation).

In practice, equation (AT) can be used in two basic ways.

Forward mapping: scan the pixels of the input image (v, w) and compute the new spatial location (x, y) of the corresponding pixel in the new image using (AT) directly. Problems:
- intensity assignment when 2 or more pixels of the original image are transformed to the same location in the output image;
- some output locations may have no correspondent in the original image (no intensity assignment).

Inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image as $(v, w) = T^{-1}[(x, y)]$; then interpolate among the nearest input pixels to determine the intensity of the output pixel value.

Inverse mappings are more efficient to implement than forward mappings, and they are used in numerous commercial implementations of spatial transformations (in MATLAB, for example).
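A minimal inverse-mapping warp with nearest-neighbor interpolation (a sketch, assuming the row/column conventions stated in the comments; the rotation matrix follows the [v w 1]·T form of equation (AT)):

```python
import numpy as np

def warp_affine_inverse(f, T):
    """Inverse mapping: for every output pixel (x, y), compute
    [v w 1] = [x y 1] . T^-1 in the input image and copy the
    nearest input pixel's intensity.  Here x, v are column indices
    and y, w are row indices of the M x N array f."""
    M, N = f.shape
    Tinv = np.linalg.inv(T)
    ys, xs = np.mgrid[0:M, 0:N]                       # output coordinates
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(M * N)], axis=1)
    vw = coords @ Tinv                                # back into input coordinates
    v = np.round(vw[:, 0]).astype(int).clip(0, N - 1)
    w = np.round(vw[:, 1]).astype(int).clip(0, M - 1)
    return f[w, v].reshape(M, N)

theta = np.deg2rad(15)                                # a rotation, as in Table 1
T = np.array([[ np.cos(theta), np.sin(theta), 0],
              [-np.sin(theta), np.cos(theta), 0],
              [ 0,             0,             1]])
img = np.zeros((64, 64)); img[24:40, 24:40] = 255
rotated = warp_affine_inverse(img, T)
```

Because every output pixel is visited exactly once, the two forward-mapping problems (collisions and holes) cannot occur.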


Image registration – aligning two or more images of the same scene.

In image registration we have available the input and output images, but the specific transformation that produced the output image from the input is generally unknown. The problem is to estimate the transformation function and then use it to register the two images. For example:
- it may be of interest to align (register) two or more images taken at approximately the same time but with different imaging systems (say, an MRI scanner and a PET scanner);
- or to align images of a given location taken by the same instrument at different moments in time (satellite images).

One approach solves the problem using tie points (also called control points): corresponding points whose locations are known precisely in the input and reference images.

How to select tie points:
- interactively (manually picking them);
- using algorithms that try to detect these points automatically;
- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors; these objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points.

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points both in the input image and in the reference image. A simple model based on a bilinear approximation is

$$x = c_1 v + c_2 w + c_3 v w + c_4$$
$$y = c_5 v + c_6 w + c_7 v w + c_8$$

where (v, w) and (x, y) are the coordinates of the tie points in the two images (the 4 pairs of tie points give an 8×8 linear system for the coefficients c_i).
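Sketch of the estimation step (the tie-point coordinates below are made-up values for illustration; because the x and y equations decouple, the 8×8 system splits into two 4×4 solves):

```python
import numpy as np

# 4 tie points in the input image (v, w) and in the reference image (x, y)
vw = np.array([[10, 10], [10, 200], [200, 10], [200, 200]], dtype=float)
xy = np.array([[12, 14], [8, 205], [203, 9], [198, 197]], dtype=float)

# Design matrix for x = c1 v + c2 w + c3 v w + c4 (the same rows serve y)
A = np.column_stack([vw[:, 0], vw[:, 1], vw[:, 0] * vw[:, 1], np.ones(4)])
cx = np.linalg.solve(A, xy[:, 0])     # c1 .. c4
cy = np.linalg.solve(A, xy[:, 1])     # c5 .. c8
print(cx, cy)                         # together: the 8 model coefficients
```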

When 4 tie points are insufficient to obtain a satisfactory registration, an approach used frequently is to select a larger number of tie points and use them to subdivide the image into rectangular regions marked by groups of 4 tie points; the transformation model described above is then applied separately to each subregion. The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometric distortion.

(Figure: (a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c).)

Probabilistic Methods

Let z_i, i = 0, 1, …, L-1, denote the values of all possible intensities in an M×N digital image, and let p(z_k) be the probability that intensity level z_k occurs in the given image:

$$p(z_k) = \frac{n_k}{MN}$$

where n_k is the number of times that intensity z_k occurs in the image (MN is the total number of pixels in the image). Clearly,

$$\sum_{k=0}^{L-1} p(z_k) = 1$$

The mean (average) intensity of an image is given by

$$m = \sum_{k=0}^{L-1} z_k\, p(z_k)$$

The variance of the intensities is

$$\sigma^2 = \sum_{k=0}^{L-1} (z_k - m)^2\, p(z_k)$$

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

$$\mu_n(z) = \sum_{k=0}^{L-1} (z_k - m)^n\, p(z_k)$$

(Note that $\mu_0(z) = 1$, $\mu_1(z) = 0$, and $\mu_2(z) = \sigma^2$.)

μ3(z) > 0: the intensities are biased to values higher than the mean;
μ3(z) < 0: the intensities are biased to values lower than the mean;
μ3(z) = 0: the intensities are distributed approximately equally on both sides of the mean.

Fig. 1 (a) Low contrast; (b) medium contrast; (c) high contrast.
Figure 1(a) – standard deviation 14.3 (variance = 204.5)
Figure 1(b) – standard deviation 31.6 (variance = 998.6)
Figure 1(c) – standard deviation 49.2 (variance = 2420.6)
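These statistics can be computed directly from the normalized histogram; a short sketch (assuming an 8-bit grayscale image stored as a NumPy array):

```python
import numpy as np

def intensity_stats(f, L=256):
    n_k = np.bincount(f.ravel(), minlength=L)    # histogram counts n_k
    p = n_k / f.size                             # p(z_k) = n_k / (M*N)
    z = np.arange(L)
    m = np.sum(z * p)                            # mean intensity
    var = np.sum((z - m) ** 2 * p)               # variance (2nd central moment)
    mu3 = np.sum((z - m) ** 3 * p)               # 3rd moment: intensity bias
    return m, var, mu3

img = np.random.default_rng(2).integers(0, 256, (100, 100), dtype=np.uint8)
m, var, mu3 = intensity_stats(img)
print(round(m, 1), round(np.sqrt(var), 1))       # mean and standard deviation
```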

Intensity Transformations and Spatial Filtering

$$g(x,y) = T[f(x,y)]$$

f(x, y) – input image; g(x, y) – output image; T – an operator on f, defined over a neighborhood of (x, y).

The neighborhood of the point (x, y), S_xy, usually is rectangular, centered on (x, y), and much smaller in size than the image. In spatial filtering, the operator T (the neighborhood together with the operation applied on it) is called a spatial filter (spatial mask, kernel, template, or window).

When S_xy has size 1×1 (a single pixel), T becomes an intensity (gray-level or mapping) transformation function,

$$s = T(r)$$

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2 (left): T produces an output image of higher contrast than the original by darkening the intensity levels below k and brightening the levels above k – this technique is called contrast stretching.

Fig. 2 Intensity transformation functions: left – contrast stretching; right – thresholding function.

Figure 2 (right): T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L-1] is obtained using the function

$$s = T(r) = L - 1 - r$$

- the equivalent of a photographic negative;
- a technique suited for enhancing white or gray detail embedded in dark regions of an image.

(Figure: original image and its negative.)

Log Transformations

$$s = T(r) = c \log(1 + r), \qquad c \text{ – constant}, \; r \ge 0$$

(Figure: plots of some basic intensity transformation functions.)

This transformation maps a narrow range of low intensity values in the input into a wider range of output values. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values; the opposite is true of the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a) – intensity values in the range 0 to 1.5 × 10^6.
Figure 4(b) – log transformation of Figure 4(a) with c = 1; range 0 to 6.2.

Fig. 4 (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.

Power-Law (Gamma) Transformations

$$s = T(r) = c\, r^{\gamma}, \qquad c, \gamma \text{ – positive constants} \quad (\text{sometimes } s = c\,(r + \varepsilon)^{\gamma} \text{ to account for an offset})$$

(Figure: plots of the gamma transformation for different values of γ, with c = 1.)

Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. The curves with γ > 1 have the opposite effect of those generated with γ < 1. When c = γ = 1, the transformation is the identity.

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.

(Figure: (a) aerial image; (b)-(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively.)
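The three basic transformations are conveniently implemented as lookup tables over r = 0, …, L-1 (a sketch; the scaling constant for the log transform and the value γ = 0.4 are illustrative choices):

```python
import numpy as np

L = 256
r = np.arange(L, dtype=float)                # all input intensities 0 .. L-1

negative = (L - 1) - r                       # s = L - 1 - r
c = (L - 1) / np.log(L)                      # scale so that s(L-1) = L-1
log_t = c * np.log(1 + r)                    # s = c log(1 + r)
gamma_t = (L - 1) * (r / (L - 1)) ** 0.4     # s = c r^gamma, here gamma = 0.4

# applying any of them to an 8-bit image `img` as a lookup table:
# out = negative.astype(np.uint8)[img]
```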

Piecewise-Linear Transformation Functions

Contrast stretching
- a process that expands the range of intensity levels in an image, so that it spans the full intensity range of the recording or display device.

Fig. 5 (a) Piecewise-linear transformation function; (b) low-contrast image; (c) result of contrast stretching; (d) result of thresholding.

The transformation function is

$$T(r) = \begin{cases} \dfrac{s_1}{r_1}\, r, & r \in [0, r_1] \\[2mm] \dfrac{s_2 - s_1}{r_2 - r_1}\,(r - r_1) + s_1, & r \in [r_1, r_2] \\[2mm] \dfrac{L - 1 - s_2}{L - 1 - r_2}\,(r - r_2) + s_2, & r \in [r_2, L-1] \end{cases}$$

If r1 = s1 and r2 = s2, T is the identity transformation (no change). If r1 = r2, s1 = 0 and s2 = L-1, T is a thresholding function.

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c): contrast stretching, obtained by setting the parameters r1 = r_min, s1 = 0, r2 = r_max, s2 = L-1, where r_min and r_max denote the minimum and maximum gray levels in the image, respectively. The transformation function thus stretched the levels linearly from their original range to the full range [0, L-1].

Figure 5(d): the thresholding function was used, with r1 = r2 = m, s1 = 0, s2 = L-1, where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.
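A sketch of contrast stretching as a piecewise-linear lookup table built with np.interp through the points (0, 0), (r1, s1), (r2, s2), (L-1, L-1):

```python
import numpy as np

def contrast_stretch(img, r1, s1, r2, s2, L=256):
    """Piecewise-linear T(r) through (0, 0), (r1, s1), (r2, s2), (L-1, L-1)."""
    r = np.arange(L, dtype=float)
    T = np.interp(r, [0, r1, r2, L - 1], [0, s1, s2, L - 1])
    return T.astype(np.uint8)[img]            # apply T as a lookup table

img = np.random.default_rng(3).integers(90, 160, (64, 64), dtype=np.uint8)
full = contrast_stretch(img, img.min(), 0, img.max(), 255)  # Fig. 5(c) settings
m = float(img.mean())
binary = contrast_stretch(img, m, 0, m + 1, 255)  # ~ thresholding (r1 = r2 = m)
```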

Intensity-level slicing
- highlighting a specific range of intensities in an image.

There are two approaches to intensity-level slicing:
1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities;
2. brighten (or darken) the desired range of intensities, but leave all other intensities in the image unchanged.

Figure 6 (left) – aortic angiogram near the kidney. The purpose of intensity slicing here is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the scale of intensities: it highlights the intensity range [A, B] and reduces all other intensities to a lower level. This type of enhancement produces a binary image, which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example).

In Figure 6 (right), the second technique was used: it highlights the range [A, B] but preserves all other intensities. A band of intensities around the mean intensity of the image was set to black; the other intensities remain unchanged.

Fig. 6 – Aortic angiogram and intensity-sliced versions.

Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2. Bit-plane slicing highlights the contribution made to the whole image appearance by each of these bits: an 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit; plane 8 – the highest-order bit).

The binary image for the 8th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all the intensities between 0 and 127 to 0, and all levels between 128 and 255 to 1. The bit-slicing technique is useful for analyzing the relative importance of each bit in the image, which helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
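A sketch of bit-plane extraction (plane 8 reproduces exactly the 0-127 → 0, 128-255 → 1 thresholding described above):

```python
import numpy as np

def bit_plane(img, k):
    """Extract bit plane k (k = 1 lowest, k = 8 highest) of an 8-bit image."""
    return (img >> (k - 1)) & 1

img = np.random.default_rng(4).integers(0, 256, (4, 4), dtype=np.uint8)
print(bit_plane(img, 8))   # same as thresholding at 128: 0..127 -> 0, 128..255 -> 1

# Reconstruction from the top planes only (the bit-plane compression idea):
top = sum(bit_plane(img, k).astype(np.uint16) << (k - 1) for k in (8, 7, 6))
```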

  • DIP 1 2017
  • DIP 02 (2017)
Page 42: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Gender identification

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image morphing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Fundamental Steps in DIP

methods whose input and output are images

methods whose inputs are images but whose outputs are attributes extracted from those images

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Outputs are images

bull image acquisition

bull image filtering and enhancement

bull image restoration

bull color image processing

bull wavelets and multiresolution processing

bull compression

bull morphological processing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Outputs are attributes

bull morphological processing

bull segmentation

bull representation and description

bull object recognition

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image acquisition - may involve preprocessing such as scaling

Image enhancement

bull manipulating an image so that the result is more suitable than the original for a specific operation

bull enhancement is problem oriented

bull there is no general sbquotheoryrsquo of image enhancement

bull enhancement use subjective methods for image emprovement

bull enhancement is based on human subjective preferences regarding what is a bdquogoodrdquo enhancement result

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image restoration

bull improving the appearance of an image

bull restoration is objective - the techniques for restoration are based on mathematical or probabilistic models of image degradation

Color image processing

bull fundamental concept in color models

bull basic color processing in a digital domain

Wavelets and multiresolution processing

representing images in various degree of resolution

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Compression

reducing the storage required to save an image or the bandwidth required to transmit it

Morphological processing

bull tools for extracting image components that are useful in the representation and description of shape

bull a transition from processes that output images to processes that outputimage attributes

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Segmentation

bull partitioning an image into its constituents parts or objects

bull autonomous segmentation is one of the most difficult tasks of DIP

bull the more accurate the segmentation the more likley recognition is to succeed

Representation and description (almost always follows segmentation)

bull segmentation produces either the boundary of a region or all the poits in the region itself

bull converting the data produced by segmentation to a form suitable for computer processing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

bull boundary representation the focus is on external shape characteristics such as corners or inflections

bull complete region the focus is on internal properties such as texture or skeletal shape

bull description is also called feature extraction ndash extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another

Object recognition

the process of assigning a label (eg bdquovehiclerdquo) to an object based on itsdescriptors

Knowledge database

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Simplified diagramof a cross sectionof the human eye

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Three membranes enclose the eye the cornea and sclera outer cover the choroid the retina

The cornea is a tough transparent tissue that covers the anterior surface of the eye

Continuous with the cornea the sclera is an opaque membrane that enclosesthe remainder of the optic globe

The choroid lies directly below the sclera This membrane contains a network of blood vessels (major nutrition of the eye) The choroid is pigmented and help reduce the amount of light entering the eyeThe choroid is divided (at its anterior extreme) into the ciliary body and the irisThe iris contracts and expands to control the amount of light

Digital Image ProcessingDigital Image Processing

Week 1Week 1

The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to ciliary body (60-70 water 6 fat protein) The lens is colored in slightly yellow The lens absorbs approximatively 8 of the visiblelight (infrared and ultraviolet light are absorbed by proteins in lens)

The innermost membrane is the retina When the eye is proper focusedlight from an object outside the eye is imaged on the retinaVision is possible because of the distribution of discrete light receptors on the surface of the retina cones and rods (6-7 milion cones 75-150 milion rods)

Cones located in the central part of the retina (fovea) they are sensitive to colors vision of detail each cone is link to its own nervecone vision = photopic or bright-light vision

Fovea = the place where the image of the object of interest falls on

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Rods distributed over al the retina surface several rods are contected toa single nerve not specialized in detail vision serve to give a general overall picture of the filed of viewnot involved in color visionsensitive to low level of illumination

Blind spot region without receptors

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image formation in the eye

Ordinary photographic camera the lens has fixed focal length focusing atvarious distances is done by modifying the distance between the lens and the image plane (were the film or imaging chip are located)

Human eye the distance between the lens and the retina (the imaging region)is fixed the focal length needed to achieve proper focus is obtained by varyingthe shape of the lens (the fibers in the ciliary body accomplish this flattening or thickening the lens for distant or near objects respectively

distance between lens and retina along visual axix = 17 mm

range of focal length = 14 mm to 17 mm

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Illustration of Mach band effectPercieved intensityis not a simple function of actual intensity

Digital Image ProcessingDigital Image Processing

Week 1Week 1

All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Optical illusions

Digital Image ProcessingDigital Image Processing

Week 1Week 1

ALBASTRU BLUEVERDE GREENGALBEN YELLOW ROSU REDPORTOCALIU ORANGEGALBEN YELLOWVISINIU BURGUNDYALB WHITE

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Lightbullachromatic or monochromatic light - light that is void of color the attribute of such light is its intensity or amount

gray level is used to describe monochromatic intensitybecause it ranges from black to grays and to white bull chromatic light spans the electromagnetic energy spectrumfrom approximately 043 to 079 μm

quantities that describe the quality of a chromatic light source radiance

the total amount of energy that flows from the light source and it isusually measured in watts (W) luminance

measured in lumens (lm) gives a measure of the amount of energy an observer perceives from a light source

Digital Image ProcessingDigital Image Processing

Week 1Week 1

For example light emitted from a source operating in the far infraredregion of the spectrum could have significant energy (radiance) butan observer would hardly perceive it its luminance would be almostzero

brightnessa subjective descriptor of light perception that is practicallyimpossible to measure It embodies the achromatic notion ofintensity and is one of the key factors in describing color sensation

Digital Image ProcessingDigital Image Processing

Week 1Week 1

the physical meaning is determined by the source of the image

( )f D f x y

Image generated from a physical process f(xy) ndash proportional to the energy radiated by the physical source

f(xy) ndash characterized by two components

i(xy) = illumination component the amount of source illumination incident on the scene being viewed

r(xy) = reflectance component the amount of illumination reflected by the objects in the scene

( ) ( ) ( )

0 ( ) 0 ( ) 1

f x y i x y r x y

i x y r x y

Digital Image ProcessingDigital Image Processing

Week 1Week 1

r(xy)=0 - total absorption r(xy)=1 - total reflectance

i(xy) ndash determined by the illumination source

r(xy) ndash determined by the characteristics of the imaged objects

is called gray (or intensity) scale

In practice

min 0 0 max min min min max max max( ) L l f x y L L i r L i r

indoor values without additional illuminationmin max10 1000L L

black whitemin max0 1 0 1 0 1L L L L l l L

min maxL L

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image Processing

Week 1

Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed

scene

converting a continuous image f to digital form

- digitizing (x y) is called sampling

- digitizing f(x y) is called quantization

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Continuous image projected onto a sensor array Result of image sampling and quantization

Digital Image Processing

Week 1

Representing Digital Images (xy) x = 01hellipM-1 y = 01hellipN-1 ndash spatial variables or spatial coordinates

(00) (01) (0 1)(10) (11) (1 1)

( )

( 10) ( 11) ( 1 1)

f f f Nf f f N

f x y

f M f M f M N

image element pixel

00 01 0 1

10 11 1 1

10 11 1 1

( ) ( )

N

i jN M N

i j

M M M N

a a aa f x i y j f i ja a a

Aa

a a a

f(00) ndash the upper left corner of the image

Digital Image Processing

Week 1

M N ge 0 L=2k

[0 1]i j i ja a L

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system Upper limit ndash determined by saturation lower limit - noise

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Number of bits required to store a digitized image

for 2 b M N k M N b N k

When an image can have 2k intensity levels the image is referred as a k-bit image 256 discrete intensity values ndash 8-bit image

Digital Image Processing

Week 1

Spatial and Intensity Resolution Spatial resolution ndash the smallest discernible detail in an image

Measures line pairs per unit distance dots (pixels) per unit distance

Image resolution = the largest number of discernible line pairs per unit distance

(eg 100 line pairs per mm)

Dots per unit distance are commonly used in printing and publishing

In US the measure is expressed in dots per inch (dpi)

(newspapers are printed with 75 dpi glossy brochures at 175 dpi)

Intensity resolution ndash the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations

L=2k ndash most common k = 8

Intensity resolution in practice is given by k (number of bits used to quantize intensity)

Digital Image Processing

Week 1

Fig1 Reducing spatial resolution 1250 dpi(upper left) 300 dpi (upper right)

150 dpi (lower left) 72 dpi (lower right)

Digital Image Processing

Week 1

Reducing the number of gray levels 256 128 64 32

Digital Image Processing

Week 1

Reducing the number of gray levels 16 8 4 2

Digital Image Processing

Week 1

Image Interpolation - used in zooming shrinking rotating and geometric corrections

Shrinking zooming ndash image resizing ndash image resampling methods

Interpolation is the process of using known data to estimate values at unknown locations

Suppose we have an image of size 500 times 500 pixels that has to be enlarged 15 times to

750times750 pixels One way to do this is to create an imaginary 750 times 750 grid with the

same spacing as the original and then shrink it so that it fits exactly over the original

image The pixel spacing in the 750 times 750 grid will be less than in the original image

Problem assignment of intensity-level in the new 750 times 750 grid

Nearest neighbor interpolation assign for every point in the new grid (750 times 750) the

intensity of the closest pixel (nearest neighbor) from the oldoriginal grid (500 times 500)

This technique has the tendency to produce undesirable effects like severe distortion of

straight edges
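A minimal nearest-neighbor zoom, sketched in Python/NumPy (the helper name and array conventions are illustrative, not from the course):

```python
import numpy as np

def nn_resize(f, new_h, new_w):
    """Nearest-neighbor interpolation: each pixel of the new grid takes the
    intensity of the closest pixel in the original grid."""
    h, w = f.shape
    # Map every output coordinate back onto the original grid and round.
    rows = np.minimum(np.round(np.arange(new_h) * h / new_h), h - 1).astype(int)
    cols = np.minimum(np.round(np.arange(new_w) * w / new_w), w - 1).astype(int)
    return f[rows[:, None], cols[None, :]]

img = np.arange(25, dtype=np.uint8).reshape(5, 5)
print(nn_resize(img, 8, 8))   # 5x5 image enlarged to 8x8
```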


Bilinear interpolation – assign to the new (x, y) location the intensity

v(x, y) = a·x + b·y + c·x·y + d

where the four coefficients are determined from the 4 equations in 4 unknowns that can

be written using the 4 nearest neighbors of point (x y)

Bilinear interpolation gives much better results than nearest neighbor interpolation with a

modest increase in computational effort
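A sketch of the idea in Python/NumPy, equivalent to solving the 4-coefficient model by blending the 4 nearest neighbors (the function name and test values are illustrative):

```python
import numpy as np

def bilinear(f, x, y):
    """Bilinearly interpolated intensity of image f at fractional (x, y)."""
    x0, y0 = int(x), int(y)                     # top-left of the 4-neighborhood
    x1 = min(x0 + 1, f.shape[0] - 1)
    y1 = min(y0 + 1, f.shape[1] - 1)
    dx, dy = x - x0, y - y0
    # Weighted average of the 4 nearest neighbors.
    return ((1 - dx) * (1 - dy) * f[x0, y0] + dx * (1 - dy) * f[x1, y0] +
            (1 - dx) * dy * f[x0, y1] + dx * dy * f[x1, y1])

img = np.array([[0., 10.], [20., 30.]])
print(bilinear(img, 0.5, 0.5))   # 15.0, the center of the 2x2 patch
```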

Bicubic interpolation – assign to the new (x, y) location an intensity that involves the 16 nearest neighbors of the point:

v(x, y) = Σ_{i=0}^{3} Σ_{j=0}^{3} c_ij x^i y^j

The 16 coefficients c_ij are obtained by solving a 16×16 linear system written using the intensity levels of the 16 nearest neighbors of (x, y).


Generally, bicubic interpolation does a better job of preserving fine detail than the bilinear technique. Bicubic interpolation is the standard used in commercial image editing programs, such as Adobe Photoshop and Corel PhotoPaint.

Figure 2(a) is the same as Fig. 1(d), which was obtained by reducing the resolution of the 1250 dpi image in Fig. 1(a) to 72 dpi (the size shrank from 3692 × 2812 to 213 × 162 pixels) and

then zooming the reduced image back to its original size To generate Fig 1(d) nearest

neighbor interpolation was used (both for shrinking and zooming)

Figures 2(b) and (c) were generated using the same steps but using bilinear and bicubic

interpolation respectively Figures 2(d)+(e)+(f) were obtained by reducing the resolution

from 1250 dpi to 150 dpi (instead of 72 dpi)

Fig. 2 – Interpolation examples for zooming and shrinking (nearest neighbor, bilinear, bicubic).

Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors:

(x+1, y), (x−1, y), (x, y+1), (x, y−1)

This set of pixels, called the 4-neighbors of p, is denoted by N₄(p).

The 4 diagonal neighbors of p have coordinates

(x+1, y+1), (x+1, y−1), (x−1, y+1), (x−1, y−1)

and are denoted N_D(p).

The horizontal vertical and diagonal neighbors are called the 8-neighbors of p denoted

N8 (p)

If (x, y) is on the border of the image, some of the neighbor locations in N_D(p) and N₈(p) fall outside the image.


Adjacency, Connectivity, Regions, Boundaries

Denote by V the set of intensity levels used to define adjacency:

- in a binary image, V ⊆ {0, 1} (V = {0} or V = {1});
- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, …, 255}.

We consider 3 types of adjacency:

(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if q ∈ N₄(p);

(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if q ∈ N₈(p);

(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if

q ∈ N₄(p), or

q ∈ N_D(p) and the set N₄(p) ∩ N₄(q) has no pixels whose values are from V.

Mixed adjacency is a modification of 8-adjacency. It is introduced to eliminate the ambiguities that often arise when 8-adjacency is used. Consider the following example.


Binary image example (V = {1}):

0 1 1
0 1 0
0 0 1

(the same arrangement is shown three times: the pixel values, the 8-adjacency paths, and the m-adjacency paths)

The three pixels at the top (first line) of the above example show multiple (ambiguous) 8-adjacency paths, as indicated by the dashed lines. This ambiguity is removed by using m-adjacency.


A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

(x₀, y₀), (x₁, y₁), …, (xₙ, yₙ)

where (x₀, y₀) = (x, y), (xₙ, yₙ) = (s, t), and the pixels (xᵢ, yᵢ) and (xᵢ₋₁, yᵢ₋₁) are adjacent for i = 1, 2, …, n.

The length of the path is n. If (x₀, y₀) = (xₙ, yₙ), the path is closed.

Depending on the type of adjacency considered, the paths are 4-, 8-, or m-paths.

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image. R is a region of the image if R is a connected set.

Two regions R₁ and R₂ are said to be adjacent if R₁ ∪ R₂ forms a connected set. Regions that are not adjacent are said to be disjoint. When referring to regions, only 4- and 8-adjacency are considered.
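The definition suggests a direct algorithm: grow a set from a seed pixel by repeatedly adding neighbors whose values are in V. A sketch in Python (breadth-first search over 4-neighbors; the function and variable names are illustrative):

```python
from collections import deque

def connected_region(img, seed, V={1}):
    """Return the 4-connected region (a connected set) containing the seed pixel."""
    rows, cols = len(img), len(img[0])
    region, frontier = {seed}, deque([seed])
    while frontier:
        x, y = frontier.popleft()
        # Visit the 4-neighbors N4(p) of the current pixel.
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < rows and 0 <= ny < cols
                    and (nx, ny) not in region and img[nx][ny] in V):
                region.add((nx, ny))
                frontier.append((nx, ny))
    return region

img = [[0, 1, 1],
       [0, 1, 0],
       [0, 0, 1]]
print(sorted(connected_region(img, (0, 1))))  # the 1s reachable by 4-paths
```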


Suppose that an image contains K disjoint regions R_k, k = 1, …, K, none of which touches the image border. Let R_u denote the union of all K regions, R_u = ∪_{k=1}^{K} R_k, and let (R_u)^c denote its complement.

We call all the points in R_u the foreground of the image, and the points in (R_u)^c the background of the image.

The boundary (border or contour) of a region R is the set of points that are adjacent to points in the complement of R, (R)^c. The border of an image is the set of pixels in the region that have at least one background neighbor. This definition is referred to as the inner border, to distinguish it from the notion of outer border, which is the corresponding border in the background.


Distance measures

For pixels p, q, and z, with coordinates (x, y), (s, t), and (v, w) respectively, D is a distance function, or metric, if

(a) D(p, q) ≥ 0, and D(p, q) = 0 iff p = q,

(b) D(p, q) = D(q, p),

(c) D(p, z) ≤ D(p, q) + D(q, z).

The Euclidean distance between p and q is defined as

D_e(p, q) = [(x − s)² + (y − t)²]^{1/2}

The pixels q for which D_e(p, q) ≤ r are the points contained in a disk of radius r centered at (x, y).


The D4 distance (also called city-block distance) between p and q is defined as

D₄(p, q) = |x − s| + |y − t|

The pixels q for which D₄(p, q) ≤ r form a diamond centered at (x, y). For example, the pixels with D₄(p, q) ≤ 2:

        2
    2   1   2
2   1   0   1   2
    2   1   2
        2

The pixels with D₄ = 1 are the 4-neighbors of (x, y).

The D₈ distance (called the chessboard distance) between p and q is defined as

D₈(p, q) = max(|x − s|, |y − t|)

The pixels q for which D₈(p, q) ≤ r form a square centered at (x, y).


For example, the pixels with D₈(p, q) ≤ 2:

2   2   2   2   2
2   1   1   1   2
2   1   0   1   2
2   1   1   1   2
2   2   2   2   2

The pixels with D₈ = 1 are the 8-neighbors of (x, y).

The D₄ and D₈ distances are independent of any paths that might exist between p and q, because these distances involve only the coordinates of the points.
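The three metrics side by side, as a small Python sketch (function names are illustrative):

```python
import math

def d_euclidean(p, q):
    """De(p, q): straight-line distance."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d4(p, q):
    """D4 (city-block): length of the shortest 4-path between p and q."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):
    """D8 (chessboard): length of the shortest 8-path between p and q."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (3, 4), (0, 0)
print(d_euclidean(p, q), d4(p, q), d8(p, q))   # 5.0 7 4
```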


Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis. Consider two 2×2 images,

A = [ a₁₁ a₁₂ ; a₂₁ a₂₂ ],  B = [ b₁₁ b₁₂ ; b₂₁ b₂₂ ]

Array product:

A .* B = [ a₁₁b₁₁  a₁₂b₁₂ ; a₂₁b₂₁  a₂₂b₂₂ ]

Matrix product:

A B = [ a₁₁b₁₁ + a₁₂b₂₁   a₁₁b₁₂ + a₁₂b₂₂ ; a₂₁b₁₁ + a₂₂b₂₁   a₂₁b₁₂ + a₂₂b₂₂ ]

We assume array operations unless stated otherwise.
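In NumPy the distinction is the `*` operator (element-wise, i.e., the array product) versus `@` (the matrix product); a quick sketch:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A * B)   # array product: [[ 5 12], [21 32]]
print(A @ B)   # matrix product: [[19 22], [43 50]]
```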


Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether a method is linear or nonlinear. Consider an operator H with

H[f(x, y)] = g(x, y)

H is said to be a linear operator if

H[a·f₁(x, y) + b·f₂(x, y)] = a·H[f₁(x, y)] + b·H[f₂(x, y)]

for any scalars a, b and any images f₁, f₂.

Example of a nonlinear operator: the maximum value of the pixels of image f,

H[f] = max{ f(x, y) }

Take

f₁ = [ 0 2 ; 2 3 ],  f₂ = [ 6 5 ; 4 7 ],  a = 1, b = −1


max{ a·f₁ + b·f₂ } = max{ [ −6 −3 ; −2 −4 ] } = −2

while

a·max{ f₁ } + b·max{ f₂ } = 1·3 + (−1)·7 = −4 ≠ −2

so the max operator is not linear.
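The same check in a few lines of NumPy, reproducing the numbers above:

```python
import numpy as np

f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 5], [4, 7]])
a, b = 1, -1

lhs = np.max(a * f1 + b * f2)            # H[a f1 + b f2] = -2
rhs = a * np.max(f1) + b * np.max(f2)    # a H[f1] + b H[f2] = -4
print(lhs, rhs, lhs == rhs)              # -2 -4 False -> max is nonlinear
```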

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise:

g(x, y) = f(x, y) + η(x, y)

f(x, y) – the noiseless image; η(x, y) – the noise, uncorrelated and with zero average value.

For a random variable z with mean m, E[(z − m)²] is the variance (E(·) is the expected value). The covariance of two random variables z₁ and z₂ is defined as E[(z₁ − m₁)(z₂ − m₂)]. The two random variables are uncorrelated when their covariance is 0.


Objective: reduce noise by averaging a set of noisy images g_i(x, y) (a technique frequently used in image enhancement):

ḡ(x, y) = (1/K) Σ_{i=1}^{K} g_i(x, y)

If the noise satisfies the properties stated above, we have

E{ḡ(x, y)} = f(x, y)  and  σ²_ḡ(x,y) = (1/K) σ²_η(x,y)

where E{ḡ(x, y)} is the expected value of ḡ, and σ²_ḡ(x,y) and σ²_η(x,y) are the variances of ḡ and η, respectively. The standard deviation (square root of the variance) at any point in the average image is

σ_ḡ(x,y) = (1/√K) σ_η(x,y)


As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because E{ḡ(x, y)} = f(x, y), this means that ḡ(x, y) approaches f(x, y) as the number of noisy images used in the averaging process increases.

An important application of image averaging is in the field of astronomy, where imaging under very low light levels frequently causes sensor noise to render single images virtually useless for analysis. Figure 3 (left corner) shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels. The remaining panels of Figure 3 show the result of averaging 5, 10, 20, 50, and 100 noisy images, respectively.


Fig. 3 – Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner), and the results of averaging 5, 10, 20, 50, and 100 noisy images.
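A simulation sketch of this averaging effect in Python/NumPy (the synthetic image and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 100.0)          # noiseless image f(x, y)

def averaged(K):
    """Average K copies of f corrupted by zero-mean Gaussian noise (sigma = 64)."""
    g = [f + rng.normal(0, 64, f.shape) for _ in range(K)]
    return np.mean(g, axis=0)

for K in (1, 5, 10, 20, 50, 100):
    g_bar = averaged(K)
    # The standard deviation of the average image shrinks like 64 / sqrt(K).
    print(K, round(g_bar.std(), 2), round(64 / np.sqrt(K), 2))
```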


A frequent application of image subtraction is in the enhancement of differences between

images

Fig. 4 – (a) Infrared image of the Washington, D.C. area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images.

Figure 4(b) was obtained by setting to zero the least significant bit of every pixel in Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between images (a) and (b); black (0) values in Figure 4(c) indicate locations where there is no difference between images (a) and (b).


Mask mode radiography:

g(x, y) = f(x, y) − h(x, y)

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images called live images (denoted f(x, y)) of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium.

In g(x, y) we find the differences between h and f, shown as enhanced detail.

Because images are captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries in the area being observed.

Fig. 5 – Angiography subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c), enhanced.

An important application of image multiplication (and division) is shading correction.

Suppose that an imaging sensor produces images in the form

g(x, y) = f(x, y) h(x, y)

f(x, y) – the "perfect image"; h(x, y) – the shading function.

When the shading function is known:

f(x, y) = g(x, y) / h(x, y)

When h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.
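A sketch of the correction by division, assuming the shading pattern has already been estimated (synthetic data; NumPy):

```python
import numpy as np

x = np.linspace(0, 1, 256)
h = 0.5 + 0.5 * np.outer(x, x)   # assumed shading function h(x, y), here >= 0.5
f = np.full((256, 256), 128.0)   # the "perfect" image f(x, y)
g = f * h                        # what the sensor actually produces

f_hat = g / h                    # shading correction: recover f = g / h
                                 # (in practice, guard against h close to 0)
print(np.allclose(f_hat, f))     # True
```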

Fig. 6 – Shading correction: (a) shaded image of a tungsten filament, magnified approximately 130 times; (b) shading pattern; (c) corrected image.


Another use of image multiplication is in masking, also called region of interest (ROI) operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of the ROI can be arbitrary, though it is usually rectangular.

Fig. 7 – (a) digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b).


In practice, most images are displayed using 8 bits, so image values are expected in the range [0, 255] (for TIFF and JPEG images the conversion to this range is automatic, and depends on the system used). The difference of two images can produce values in the range [−255, 255]; the addition of two images, values in the range [0, 510]. Many software packages simply set negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure is to compute

f_m = f − min(f)

f_s = K · f_m / max(f_m)

which yields values in the full range [0, K] (K = 255 for 8-bit images).
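The same rescaling as a short NumPy sketch (the helper name is illustrative; assumes the image is not constant, so max(f_m) > 0):

```python
import numpy as np

def rescale(f, K=255):
    """Map an arbitrary-range image linearly onto [0, K]."""
    fm = f - f.min()                  # f_m = f - min(f), now starts at 0
    return (K * fm / fm.max()).astype(np.uint8)

diff = np.array([[-200, 0], [55, 255]])   # e.g. a difference image in [-255, 255]
print(rescale(diff))                       # values spread over [0, 255]
```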


Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the intensity values of individual pixels: s = T(z),

where z is the intensity of a pixel in the original image and s is the intensity of the corresponding pixel in the processed image.


Neighborhood operations

Let S_xy denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y) based on the intensities of the points in S_xy. For example, if S_xy is a

rectangular neighborhood of size m × n centered at (x, y), we can assign the new value of intensity by computing the average value of the pixels in S_xy:

g(x, y) = (1/(mn)) Σ_{(r,c) ∈ S_xy} f(r, c)

The net effect is local blurring of the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
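A direct sketch of this neighborhood averaging (loop form, chosen for clarity rather than speed; border pixels are handled by edge replication, which is an implementation choice):

```python
import numpy as np

def local_mean(f, m=3, n=3):
    """Replace each pixel by the average of its m x n neighborhood S_xy."""
    pad = np.pad(f.astype(float), ((m // 2,), (n // 2,)), mode="edge")
    g = np.empty(f.shape, dtype=float)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            g[x, y] = pad[x:x + m, y:y + n].mean()
    return g

img = np.zeros((5, 5)); img[2, 2] = 9.0
print(local_mean(img))    # the single spike is blurred into its neighborhood
```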


Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules).

A geometric transformation consists of 2 basic operations:

1. a spatial transformation of coordinates;

2. intensity interpolation, which assigns intensity values to the spatially transformed pixels.

The coordinate system transformation: (x, y) = T[(v, w)]

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image


T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform:

[x y 1] = [v w 1] T = [v w 1] [ t₁₁ t₁₂ 0
                                t₂₁ t₂₂ 0
                                t₃₁ t₃₂ 1 ]

that is,

x = t₁₁·v + t₂₁·w + t₃₁,  y = t₁₂·v + t₂₂·w + t₃₂   (AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1

Table 1 – Affine transformation matrices (scaling, rotation, translation, …).

The preceding transformations relocate pixels of an image to new locations. To complete the process, we have to assign intensity values to those locations. This task is done using intensity interpolation (nearest neighbor, bilinear, or bicubic).

In practice, we can use equation (AT) in two basic ways:

forward mapping: scan the pixels of the input image (v, w) and compute the new spatial location (x, y) of the corresponding pixel in the new image using (AT) directly.

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)


inverse mapping: scans the output pixel locations and, at each location (x, y), computes the corresponding location in the input image: (v, w) = T⁻¹(x, y).

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings, and are used in numerous commercial implementations of spatial transformations (e.g., MATLAB).
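A sketch combining equation (AT) with inverse mapping and nearest-neighbor interpolation (Python/NumPy; the function name and the scaling example are illustrative):

```python
import numpy as np

def affine_warp(f, T):
    """Warp image f by the affine matrix T ([x y 1] = [v w 1] T), using
    inverse mapping: for each output pixel, look up the input pixel."""
    T_inv = np.linalg.inv(T)
    h, w = f.shape
    g = np.zeros_like(f)
    for x in range(h):
        for y in range(w):
            v, wcol, _ = np.array([x, y, 1.0]) @ T_inv   # (v, w) = T^{-1}(x, y)
            v, wcol = int(round(v)), int(round(wcol))    # nearest neighbor
            if 0 <= v < h and 0 <= wcol < w:
                g[x, y] = f[v, wcol]
    return g

scale = np.diag([2.0, 2.0, 1.0])        # s_x = s_y = 2: zoom by a factor of 2
img = np.arange(16.0).reshape(4, 4)
print(affine_warp(img, scale))
```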


Image registration – align two or more images of the same scene.

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more images taken at approximately the same time but using different imaging systems (e.g., an MRI scanner and a PET scanner);

- or to align images of a given location taken by the same instrument at different moments in time (satellite images).

The problem is commonly solved using tie points (also called control points): corresponding points whose locations are known precisely in the input and reference images.


How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points, both on the input image and on the reference image. A simple model based on a bilinear approximation is given by

x = c₁·v + c₂·w + c₃·v·w + c₄
y = c₅·v + c₆·w + c₇·v·w + c₈

where (v, w) and (x, y) are the coordinates of corresponding tie points (the 4 pairs give an 8×8 linear system for the cᵢ).
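Solving for the 8 coefficients from 4 tie-point pairs, sketched with NumPy (the sample tie points are made up for illustration):

```python
import numpy as np

# 4 tie points: (v, w) in the input image, (x, y) in the reference image.
vw = np.array([[0, 0], [0, 10], [10, 0], [10, 10]], dtype=float)
xy = np.array([[1, 2], [1, 13], [12, 2], [12, 14]], dtype=float)

# Each pair contributes x = c1 v + c2 w + c3 v w + c4 and the analogous y row.
A = np.zeros((8, 8))
b = np.zeros(8)
for k, ((v, w), (x, y)) in enumerate(zip(vw, xy)):
    A[2 * k, 0:4] = [v, w, v * w, 1]; b[2 * k] = x
    A[2 * k + 1, 4:8] = [v, w, v * w, 1]; b[2 * k + 1] = y

c = np.linalg.solve(A, b)     # c1..c8
print(c.round(3))
```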


When 4 tie points are insufficient to obtain satisfactory registration, an approach used frequently is to select a larger number of tie points and, using this new set, subdivide the image into rectangular regions marked by groups of 4 tie points. The transformation model described above is then applied separately on each subregion marked by 4 tie points.

The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometric distortion.

Figure: (a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c).

Probabilistic Methods

z_i, i = 0, 1, …, L−1 – the values of all possible intensities in an M×N digital image.

p(z_k) = the probability that the intensity level z_k occurs in the given image:

p(z_k) = n_k / (MN)

where n_k is the number of times that intensity z_k occurs in the image (MN is the total number of pixels in the image). Note that

Σ_{k=0}^{L−1} p(z_k) = 1

The mean (average) intensity of an image is given by

m = Σ_{k=0}^{L−1} z_k p(z_k)


The variance of the intensities is

σ² = Σ_{k=0}^{L−1} (z_k − m)² p(z_k)

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

μ_n(z) = Σ_{k=0}^{L−1} (z_k − m)ⁿ p(z_k)

(note that μ₀(z) = 1, μ₁(z) = 0, μ₂(z) = σ²)

μ₃(z) > 0: the intensities are biased toward values higher than the mean;

μ₃(z) < 0: the intensities are biased toward values lower than the mean;

μ₃(z) ≈ 0: the intensities are distributed approximately equally on both sides of the mean.

Fig. 1 – (a) Low contrast; (b) medium contrast; (c) high contrast.

Figure 1(a) – standard deviation 14.3 (variance = 204.5)
Figure 1(b) – standard deviation 31.6 (variance = 998.6)
Figure 1(c) – standard deviation 49.2 (variance = 2420.6)
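These statistics, computed from the normalized histogram, as a sketch (the test image is synthetic):

```python
import numpy as np

def histogram_moments(img, L=256):
    """Mean, variance and third central moment from p(z_k) = n_k / (M N)."""
    z = np.arange(L)
    n = np.bincount(img.ravel(), minlength=L)
    p = n / img.size                       # p(z_k), sums to 1
    m = np.sum(z * p)                      # mean intensity
    mu2 = np.sum((z - m) ** 2 * p)         # variance (contrast measure)
    mu3 = np.sum((z - m) ** 3 * p)         # skew of the intensity distribution
    return m, mu2, mu3

img = np.random.default_rng(1).integers(100, 156, (64, 64)).astype(np.uint8)
print(histogram_moments(img))              # narrow range -> small variance
```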


Intensity Transformations and Spatial Filtering

g(x, y) = T[f(x, y)]

f(x, y) – input image; g(x, y) – output image; T – an operator on f, defined over a neighborhood of (x, y).

The neighborhood S_xy of the point (x, y) usually is rectangular, centered on (x, y), and much smaller in size than the image.


- spatial filtering: the operator T (the neighborhood and the operation applied on it) is called a spatial filter (spatial mask, kernel, template, or window).

When S_xy shrinks to the single point (x, y), T becomes an intensity (gray-level or mapping) transformation function

s = T(r)

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2, left: T produces an output image of higher contrast than the original, by darkening the intensity levels below k and brightening the levels above k – this technique is called contrast stretching.

Fig. 2 – Intensity transformation functions: left – contrast stretching; right – thresholding function.


Figure 2, right: T produces a binary output image; a mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L−1] is obtained using the function

s = T(r) = L − 1 − r

- the equivalent of a photographic negative;
- a technique suited for enhancing white or gray detail embedded in dark regions of an image.

Figure: original image and its negative.

Log Transformations

s = T(r) = c·log(1 + r),  c – a positive constant, r ≥ 0

(Figure: plots of some basic intensity transformation functions.)


This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) – intensity values in the range 0 to 1.5 × 10⁶.

Figure 4(b) – log transformation of Figure 4(a), with c = 1 – range 0 to 6.2.

Fig. 4 – (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.


Power-Law (Gamma) Transformations

s = T(r) = c·r^γ,  c, γ – positive constants

(sometimes written s = c·(r + ε)^γ, with an offset ε accounting for a measurable output when the input is zero)

Figure: plots of the gamma transformation for different values of γ (c = 1).


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, the opposite being true for higher input values. Curves with γ > 1 have the opposite effect of those generated with γ < 1.

c = γ = 1: identity transformation.

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
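The three point transformations seen so far, side by side in a small sketch (helper names are illustrative; the log and gamma cases assume suitably normalized inputs):

```python
import numpy as np

L = 256

def negative(r):
    """s = L - 1 - r, for r in [0, L-1]."""
    return (L - 1) - r

def log_transform(r, c=1.0):
    """s = c log(1 + r); expands dark values, compresses bright ones."""
    return c * np.log1p(r)

def gamma_transform(r, c=1.0, gamma=0.5):
    """s = c r^gamma on r in [0, 1]; gamma < 1 brightens dark regions."""
    return c * np.power(r, gamma)

r = np.array([0, 64, 128, 255])
print(negative(r))                       # [255 191 127   0]
print(gamma_transform(r / 255.0))        # monotone, steep near 0
```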

Figure: (a) aerial image; (b)–(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively.


Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so that it spans the full intensity range of the recording or display device (Fig. 5).


s = T(r) =
  (s₁/r₁)·r,                                    r ∈ [0, r₁]
  s₁ + ((s₂ − s₁)/(r₂ − r₁))·(r − r₁),          r ∈ [r₁, r₂]
  s₂ + (((L−1) − s₂)/((L−1) − r₂))·(r − r₂),    r ∈ [r₂, L−1]


r₁ = s₁ and r₂ = s₂: identity transformation (no change).

r₁ = r₂, s₁ = 0, s₂ = L − 1: thresholding function.

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c) – contrast stretching, obtained by setting (r₁, s₁) = (r_min, 0) and (r₂, s₂) = (r_max, L−1), where r_min and r_max denote the minimum and maximum gray levels in the image, respectively. Thus, the transformation function stretched the levels linearly from their original range to the full range [0, L−1].

Figure 5(d) – the thresholding function was used, with (r₁, s₁) = (m, 0) and (r₂, s₂) = (m, L−1), where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times
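A sketch of the two special cases used in Fig. 5 (full-range linear stretch, and thresholding at the mean; the synthetic low-contrast image is illustrative):

```python
import numpy as np

def stretch_full_range(f, L=256):
    """(r1, s1) = (r_min, 0), (r2, s2) = (r_max, L-1): linear stretch to [0, L-1].
    Assumes the image is not constant (r_max > r_min)."""
    r_min, r_max = f.min(), f.max()
    return ((f - r_min) * (L - 1) / (r_max - r_min)).astype(np.uint8)

def threshold_at_mean(f, L=256):
    """(r1, s1) = (m, 0), (r2, s2) = (m, L-1): binary output image."""
    return np.where(f > f.mean(), L - 1, 0).astype(np.uint8)

low_contrast = np.random.default_rng(2).integers(90, 160, (64, 64))
stretched = stretch_full_range(low_contrast)
print(stretched.min(), stretched.max())   # 0 255
```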


Intensity-level slicing

- highlighting a specific range of intensities in an image.

There are two approaches to intensity-level slicing (see the sketch after this list):

1. display in one value (white, for example) all the intensities in the range of interest, and in another (say, black) all other intensities;

2. brighten (or darken) the desired range of intensities but leave all other intensities in the image unchanged.
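Both approaches in a small sketch (the slicing band [A, B] and helper names are illustrative):

```python
import numpy as np

def slice_binary(f, A, B, L=256):
    """Approach 1: white inside [A, B], black elsewhere (binary output)."""
    return np.where((f >= A) & (f <= B), L - 1, 0).astype(np.uint8)

def slice_preserve(f, A, B, value=255):
    """Approach 2: brighten the band [A, B], leave other intensities unchanged."""
    g = f.copy()
    g[(f >= A) & (f <= B)] = value
    return g

img = np.arange(256, dtype=np.uint8).reshape(16, 16)
print(slice_binary(img, 150, 255).max(), slice_preserve(img, 150, 255)[0, 0])
```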


Figure 6 (left) – aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique, for a band near the top of the scale of intensities. This type of enhancement produces a binary image.

(Transformation functions used for slicing: one highlights the intensity range [A, B] and reduces all other intensities to a lower level; the other highlights the range [A, B] and preserves all other intensities.)

Digital Image Processing

Week 1

The binary image is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example).

In Figure 6 (right), the second technique was used: a band of intensities around the mean gray level was set to black, while the other intensities remain unchanged.

Fig. 6 – Aortic angiogram and intensity-sliced versions.


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255] with an 8-bit representation in base 2.

This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit; plane 8 – the highest-order bit).


The binary image for the 8th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all intensities between 0 and 127 to 0, and all intensities between 128 and 255 to 1.

Bit-plane slicing is useful for analyzing the relative importance of each bit in the image, which helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
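Extracting the planes is a one-liner per bit; a sketch:

```python
import numpy as np

def bit_plane(f, k):
    """Binary image of bit plane k (k = 1 lowest, k = 8 highest) of an 8-bit image."""
    return (f >> (k - 1)) & 1

img = np.array([[0, 127], [128, 255]], dtype=np.uint8)
print(bit_plane(img, 8))   # [[0 0], [1 1]] -- thresholds at 128, as described above
```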



Image morphing


Fundamental Steps in DIP

- methods whose input and output are images;
- methods whose inputs are images but whose outputs are attributes extracted from those images.


Outputs are images:

• image acquisition
• image filtering and enhancement
• image restoration
• color image processing
• wavelets and multiresolution processing
• compression
• morphological processing


Outputs are attributes:

• morphological processing
• segmentation
• representation and description
• object recognition


Image acquisition – may involve preprocessing, such as scaling.

Image enhancement:

• manipulating an image so that the result is more suitable than the original for a specific operation;
• enhancement is problem oriented;
• there is no general 'theory' of image enhancement;
• enhancement uses subjective methods for image improvement;
• enhancement is based on human subjective preferences regarding what is a "good" enhancement result.


Image restoration:

• improving the appearance of an image;
• restoration is objective – the techniques are based on mathematical or probabilistic models of image degradation.

Color image processing:

• fundamental concepts in color models;
• basic color processing in a digital domain.

Wavelets and multiresolution processing:

• representing images at various degrees of resolution.


Compression:

• reducing the storage required to save an image, or the bandwidth required to transmit it.

Morphological processing:

• tools for extracting image components that are useful in the representation and description of shape;
• a transition from processes that output images to processes that output image attributes.


Segmentation:

• partitioning an image into its constituent parts or objects;
• autonomous segmentation is one of the most difficult tasks of DIP;
• the more accurate the segmentation, the more likely recognition is to succeed.

Representation and description (almost always follows segmentation):

• segmentation produces either the boundary of a region or all the points in the region itself;
• converting the data produced by segmentation to a form suitable for computer processing.


• boundary representation: the focus is on external shape characteristics, such as corners or inflections;
• complete region: the focus is on internal properties, such as texture or skeletal shape;
• description is also called feature extraction – extracting attributes that result in some quantitative information of interest, or that are basic for differentiating one class of objects from another.

Object recognition:

• the process of assigning a label (e.g., "vehicle") to an object based on its descriptors.

Knowledge database.

Figure: simplified diagram of a cross section of the human eye.

Three membranes enclose the eye: the cornea and sclera (outer cover), the choroid, and the retina.

The cornea is a tough, transparent tissue that covers the anterior surface of the eye.

Continuous with the cornea, the sclera is an opaque membrane that encloses the remainder of the optic globe.

The choroid lies directly below the sclera. This membrane contains a network of blood vessels (the major source of nutrition of the eye). The choroid is pigmented and helps reduce the amount of light entering the eye. At its anterior extreme, the choroid is divided into the ciliary body and the iris. The iris contracts and expands to control the amount of light.


The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to the ciliary body (60–70% water, 6% fat, the rest mostly protein). The lens is slightly yellow and absorbs approximately 8% of the visible light (infrared and ultraviolet light are absorbed by proteins in the lens).

The innermost membrane is the retina. When the eye is properly focused, light from an object outside the eye is imaged on the retina. Vision is possible because of the distribution of discrete light receptors on the surface of the retina: cones and rods (6–7 million cones, 75–150 million rods).

Cones: located in the central part of the retina (the fovea); sensitive to color; provide vision of detail; each cone is linked to its own nerve. Cone vision = photopic or bright-light vision.

Fovea = the place where the image of the object of interest falls.


Rods: distributed over the whole retina surface; several rods are connected to a single nerve; not specialized in detail vision; serve to give a general, overall picture of the field of view; not involved in color vision; sensitive to low levels of illumination.

Blind spot: region without receptors.


Image formation in the eye

Ordinary photographic camera: the lens has a fixed focal length; focusing at various distances is done by modifying the distance between the lens and the image plane (where the film or imaging chip is located).

Human eye: the distance between the lens and the retina (the imaging region) is fixed; the focal length needed to achieve proper focus is obtained by varying the shape of the lens (the fibers in the ciliary body accomplish this, flattening or thickening the lens for distant or near objects, respectively).

Distance between the lens and the retina along the visual axis: 17 mm. Range of focal lengths: 14 mm to 17 mm.


Illustration of the Mach band effect: perceived intensity is not a simple function of actual intensity.


All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter


Optical illusions


Figure: color-word slide (Romanian words with English translations) – ALBASTRU (blue), VERDE (green), GALBEN (yellow), ROSU (red), PORTOCALIU (orange), GALBEN (yellow), VISINIU (burgundy), ALB (white).


Light

• achromatic or monochromatic light – light that is void of color; the only attribute of such light is its intensity, or amount. The term gray level is used to describe monochromatic intensity, because it ranges from black, to grays, and finally to white.

• chromatic light spans the electromagnetic energy spectrum from approximately 0.43 to 0.79 μm.

Quantities that describe the quality of a chromatic light source:

• radiance: the total amount of energy that flows from the light source; usually measured in watts (W);

• luminance: measured in lumens (lm); gives a measure of the amount of energy an observer perceives from a light source.


For example, light emitted from a source operating in the far-infrared region of the spectrum could have significant energy (radiance), but an observer would hardly perceive it: its luminance would be almost zero.

• brightness: a subjective descriptor of light perception that is practically impossible to measure. It embodies the achromatic notion of intensity and is one of the key factors in describing color sensation.


The physical meaning of f(x, y) is determined by the source of the image.

For an image generated from a physical process, f(x, y) is proportional to the energy radiated by the physical source, so f is positive and finite: 0 < f(x, y) < ∞.

f(x, y) is characterized by two components:

i(x, y) = the illumination component: the amount of source illumination incident on the scene being viewed;

r(x, y) = the reflectance component: the amount of illumination reflected by the objects in the scene;

f(x, y) = i(x, y) · r(x, y),  0 < i(x, y) < ∞,  0 < r(x, y) < 1

r(x, y) = 0 – total absorption; r(x, y) = 1 – total reflectance.

i(x, y) is determined by the illumination source; r(x, y) is determined by the characteristics of the imaged objects.

In practice,

L_min ≤ l = f(x, y) ≤ L_max,  with L_min = i_min · r_min and L_max = i_max · r_max

(typical indoor values, without additional illumination: L_min ≈ 10, L_max ≈ 1000).

The interval [L_min, L_max] is called the gray (or intensity) scale. In practice it is shifted to [0, L−1], where l = 0 is black and l = L−1 is white; intermediate values are shades of gray.

Digital Image Processing

Week 1

Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed

scene

converting a continuous image f to digital form

- digitizing (x y) is called sampling

- digitizing f(x y) is called quantization

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Continuous image projected onto a sensor array Result of image sampling and quantization

Digital Image Processing

Week 1

Representing Digital Images (xy) x = 01hellipM-1 y = 01hellipN-1 ndash spatial variables or spatial coordinates

(00) (01) (0 1)(10) (11) (1 1)

( )

( 10) ( 11) ( 1 1)

f f f Nf f f N

f x y

f M f M f M N

image element pixel

00 01 0 1

10 11 1 1

10 11 1 1

( ) ( )

N

i jN M N

i j

M M M N

a a aa f x i y j f i ja a a

Aa

a a a

f(00) ndash the upper left corner of the image

Digital Image Processing

Week 1

M N ge 0 L=2k

[0 1]i j i ja a L

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system Upper limit ndash determined by saturation lower limit - noise

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Number of bits required to store a digitized image

for 2 b M N k M N b N k

When an image can have 2k intensity levels the image is referred as a k-bit image 256 discrete intensity values ndash 8-bit image

Digital Image Processing

Week 1

Spatial and Intensity Resolution Spatial resolution ndash the smallest discernible detail in an image

Measures line pairs per unit distance dots (pixels) per unit distance

Image resolution = the largest number of discernible line pairs per unit distance

(eg 100 line pairs per mm)

Dots per unit distance are commonly used in printing and publishing

In US the measure is expressed in dots per inch (dpi)

(newspapers are printed with 75 dpi glossy brochures at 175 dpi)

Intensity resolution ndash the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations

L=2k ndash most common k = 8

Intensity resolution in practice is given by k (number of bits used to quantize intensity)

Digital Image Processing

Week 1

Fig1 Reducing spatial resolution 1250 dpi(upper left) 300 dpi (upper right)

150 dpi (lower left) 72 dpi (lower right)

Digital Image Processing

Week 1

Reducing the number of gray levels 256 128 64 32

Digital Image Processing

Week 1

Reducing the number of gray levels 16 8 4 2

Digital Image Processing

Week 1

Image Interpolation - used in zooming shrinking rotating and geometric corrections

Shrinking zooming ndash image resizing ndash image resampling methods

Interpolation is the process of using known data to estimate values at unknown locations

Suppose we have an image of size 500 times 500 pixels that has to be enlarged 15 times to

750times750 pixels One way to do this is to create an imaginary 750 times 750 grid with the

same spacing as the original and then shrink it so that it fits exactly over the original

image The pixel spacing in the 750 times 750 grid will be less than in the original image

Problem assignment of intensity-level in the new 750 times 750 grid

Nearest neighbor interpolation assign for every point in the new grid (750 times 750) the

intensity of the closest pixel (nearest neighbor) from the oldoriginal grid (500 times 500)

This technique has the tendency to produce undesirable effects like severe distortion of

straight edges

Digital Image Processing

Week 1

Bilinear interpolation ndash assign for the new (x y) location the following intensity ( )v x y a x b y c x y d

where the four coefficients are determined from the 4 equations in 4 unknowns that can

be written using the 4 nearest neighbors of point (x y)

Bilinear interpolation gives much better results than nearest neighbor interpolation with a

modest increase in computational effort

Bicubic interpolation ndash assign for the new (x y) location an intensity that involves the 16

nearest neighbors of the point 3 3

0 0

( ) i ji j

i jv x y c x y

The coefficients cij are obtained solving a 16x16 linear system

intensity levels of the 16 nearest neighbors of 3 3

0 0

( )i ji j

i jc x y x y

Digital Image Processing

Week 1

Generally bicubic interpolation does a better job of preserving fine detail than the

bilinear technique Bicubic interpolation is the standard used in commercial image editing

programs such as Adobe Photoshop and Corel Photopaint

Figure 2 (a) is the same as Fig 1 (d) which was obtained by reducing the resolution of

the 1250 dpi in Fig 1(a) to 72 dpi (the size shrank from 3692 times 2812 to 213 times 162) and

then zooming the reduced image back to its original size To generate Fig 1(d) nearest

neighbor interpolation was used (both for shrinking and zooming)

Figures 2(b) and (c) were generated using the same steps but using bilinear and bicubic

interpolation respectively Figures 2(d)+(e)+(f) were obtained by reducing the resolution

from 1250 dpi to 150 dpi (instead of 72 dpi)

Digital Image Processing

Week 1

Fig 2 ndash Interpolation examples for zooming and shrinking (nearest neighbor linear bicubic)

Digital Image Processing

Week 1

Neighbors of a Pixel

A pixel p at coordinates (x y) has 4 horizontal and vertical neighbors horizontal vertical( 1 ) ( 1 ) ( 1) ( 1)x y x y x y x y

This set of pixels called the 4-neighbors of p denoted by N4 (p)

The 4 diagonal neighbors of p have coordinates ( 1 1) ( 1 1) ( 1 1) ( 1 1)x y x y x y x y

and are denoted ND(p)

The horizontal vertical and diagonal neighbors are called the 8-neighbors of p denoted

N8 (p)

If (xy) is on the border of the image some of the neighbor locations in ND(p) and N8(p)

fall outside the image

Digital Image Processing

Week 1

Adjacency Connectivity Regions Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image V 01 (V=0 V=1)

- in a gray-scale image with 256 possible gray-levels V can be any subset of 0255

We consider 3 types of adjacency

(a) 4-adjacency two pixels p and q with values from V are 4-adjacent if 4( )q N p

(b) 8-adjacency two pixels p and q with values from V are 8-adjacent if 8( )q N p

(c) m-adjacency (mixed adjacency) two pixels p and q with values from V are

m-adjacent if

4( )q N p or

( )Dq N p and the set 4 4( ) ( )N p N q has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency It is introduced to eliminate the

ambiguities that often arise when 8-adjacency is used Consider the example

Digital Image Processing

Week 1

binary image

0 1 1 0 1 1 0 1 1

1 0 1 0 0 1 0 0 1 0

0 0 1 0 0 1 0 0 1

V

The three pixels at the top (first line) in the above example show multiple (ambiguous)

8-adjacency as indicated by the dashed lines This ambiguity is removed by using

m-adjacency

Digital Image Processing

Week 1

A (digital) path (or curve) from pixel p with coordinates (xy) to q with coordinates (st)

is a sequence of distinct pixels with coordinates

and are adjacent 0 0 1 1

1 1

( ) ( ) ( ) ( ) ( )( ) ( ) 12

n n

i i i i

x y x y x y x y s tx y x y i n

The length of the path is n If 0 0( ) ( )n nx y x y the path is closed

Depending on the type of adjacency considered the paths are 4- 8- or m-paths

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image R is a region of the image if R is a connected set

Two regions R1 and R2 are said to be adjacent if 1 2R R form a connected set Regions

that are not adjacent are said to be disjoint When referring to regions only 4- and

8-adjacency are considered

Digital Image Processing

Week 1

Suppose that an image contains K disjoint regions 1 kR k K none of which

touches the image border

the complement of 1

( )K

cu k u u

k

R R R R

We call all the points in Ru the foreground of the image and the points in ( )cuR the

background of the image

The boundary (border or contour) of a region R is the set of points that are adjacent to

points in the complement of R (R)c The border of an image is the set of pixels in the

region that have at least one background neighbor This definition is referred to as the

inner border to distinguish it from the notion of outer border which is the corresponding

border in the background

Digital Image Processing

Week 1

Distance measures

For pixels p q and z with coordinates (xy) (st) and (vw) respectively D is a distance

function or metric if

(a) D(p q) ge 0 D(p q) = 0 iff p=q

(b) D(p q) = D(q p)

(c) D(p z) le D(p q) + D(q z)

The Euclidean distance between p and q is defined as 1

2 2 2 22( ) ( ) ( ) ( ) ( )eD p q x s y t x s y t

The pixels q for which ( )eD p q r are the points contained in a disk of radius r

centered at (x y)

Digital Image Processing

Week 1

The D4 distance (also called city-block distance) between p and q is defined as

4( ) | | | |D p q x s y t

The pixels q for which 4( )D p q r form a diamond centered at (xy)

4

22 1 2

2 2 1 0 1 22 1 2

2

D

The pixels with D4 = 1 are the 4-neighbors of (x y)

The D8 distance (called the chessboard distance) between p and q is defined as

8( ) max| | | |D p q x s y t

The pixels q for which 8( )D p q r form a square centered at (x y)

Digital Image Processing

Week 1

8

2 2 2 2 22 1 1 1 2

2 2 1 0 1 22 1 1 1 22 2 2 2 2

D

The pixels with D8 = 1 are the 8-neighbors of (x y)

D4 and D8 distances are independent of any paths that might exist between p and q

because these distances involve only the coordinates of the point

Digital Image Processing

Week 1

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

11 12 11 12

21 22 21 22

a a b ba a b b

Array product

11 12 11 12 11 11 12 12

21 22 21 22 21 21 22 21

a a b b a b a ba a b b a b a b

Matrix product

11 12 11 12 11 11 12 21 11 12 12 21

21 22 21 22 21 11 22 21 21 12 22 22

a a b b a b a b a b a ba a b b a b a b a b a b

We assume array operations unless stated otherwise

Digital Image Processing

Week 1

Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether it is

linear or nonlinear

( ) ( )H f x y g x y

H is said to be a linear operator if

images1 2 1 2

1 2

( ) ( ) ( ) ( )

H a f x y b f x y a H f x y b H f x y

a b f f

Example of nonlinear operator

the maximum value of the pixels of image max ( )H f f x y f

1 2

0 2 6 5 1 1

2 3 4 7f f a b

Digital Image Processing

Week 1

1 2

0 2 6 5 6 3max max 1 ( 1) max 2

2 3 4 7 2 4a f b f

0 2 6 51 max ( 1) max 3 ( 1)7 4

2 3 4 7

Arithmetic Operations in Image Processing

Let g(xy) denote a corrupted image formed by the addition of noise ( ) ( ) ( )g x y f x y x y

f(xy) ndash the noiseless image η(xy) the noise uncorrelated and has 0 average value

For a random variable z with mean m E[(z-m)2] is the variance (E( ∙ ) is the expected

value) The covariance of two random variables z1 and z2 is defined as E[(z1-m1) (z2-m2)]

The two random variables are uncorrelated when their covariance is 0

Digital Image Processing

Week 1

Objective reduce noise by adding a set of noisy images ( )ig x y (technique frequently

used in image enhancement)

1

1( ) ( )K

ii

g x y g x yK

If the noise satisfies the properties stated above we have

2 2( ) ( )

1( ) ( ) g x y x yE g x y f x yK

( ( ))E g x y is the expected value of g and and 2 2( ) ( )g x y x y are the variances of

and g respectively The standard deviation (square root of the variance) at any point in

the average image is

( ) ( )1

g x y x yK

Digital Image Processing

Week 1

As K increases the variability (as measured by the variance or the standard deviation) of

the pixel values at each location (x y) decreases Because ( ) ( )E g x y f x y this

means that ( )g x y approaches f(x y) as the number of noisy images used in the

averaging process increases

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis Figure 226(a) shows an 8-bit image in which corruption

was simulated by adding to it Gaussian noise with zero mean and a standard deviation of

64 intensity levels Figures 226(b)-(f) show the result of averaging 5 10 20 50 and 100

images respectively

Digital Image Processing

Week 1

Fig 3 Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner) Results of averaging 5 10 20 50

100 noisy images

a b c d e f

Digital Image Processing

Week 1

A frequent application of image subtraction is in the enhancement of differences between

images

(a) (b) (c)

Fig 4 (a) Infrared image of Washington DC area (b) Image obtained from (a) by setting to zero the least

significant bit of each pixel (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a) The two images seem almost the same Figure 4(c) is the difference between

images (a) and (b) Black (0) values in Figure (c) indicate locations where there is no

difference between images (a) and (b)

Digital Image Processing

Week 1

Mask mode radiography ( ) ( ) ( )g x y f x y h x y

h(x y) the mask is an X-ray image of a region of a patientrsquos body captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source The procedure consists of injecting an X-ray contrast medium into the patientrsquos

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed

Digital Image Processing

Week 1

a b c d Fig 5 ndash Angiography ndash subtraction example (a) ndash mask image (b) ndash live image (c) ndash difference between (a) and (b) (d) - image (c) enhanced

Digital Image Processing

Week 1

An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form ( ) ( ) ( )g x y f x y h x y

f(x y) ndash the ldquoperfect imagerdquo h(x y) ndash the shading function

When the shading function is known

( )( )( )

g x yf x yh x y

h(x y) is unknown but we have access to the imaging system we can obtain an

approximation to the shading function by imaging a target of constant intensity When the

sensor is not available often the shading pattern can be estimated from the image

Digital Image Processing

Week 1

(a) (b) (c)

Fig 6 Shading correction (a) ndash Shaded image of a tungsten filament magnified 130 times (b) - shading pattern (c) corrected image

Digital Image Processing

Week 1

Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1rsquos (white) in the ROI and 0rsquos elsewhere There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

(a) (b) (c)

Fig 7 (a) ndash digital dental X-ray image (b) - ROI mask for teeth with fillings (c) product of (a) and (b)

Digital Image Processing

Week 1

In practice most images are displayed using 8 bits the image values are in the range [0255] TIFF JPEG images ndash conversion to this range is automatic The conversion depends on the system used Difference of two images can produce image with values in the range [-255255] Addition of two images ndash range [0510] Many software packages simply set the negative values to 0 and set to 255 all values greater than 255 A more appropriate procedure compute

min( )mf f f

0 ( 255)max( )

ms

m

ff K K K

f

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Digital Image Processing

Week 1

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

( )

1( ) ( )xyr c S

g x y f r cm n

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation ( ) [( )]x y T v w

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image

Digital Image Processing

Week 1

[( )] ( )2 2v wT v w shrinks the original image half its size in both spatial directions

Affine transform

11 1211 21 31

21 2212 22 33

31 32

0[ 1] [ 1] [ 1] 0

1

t tx t v t w t

x y v w T v w t ty t v t w t

t t

(AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

Digital Image Processing

Week 1

inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image (v w) 1( ) ( )v w T x y

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration – align two or more images of the same scene.

In image registration we have available the input and output images, but the specific transformation that produced the output image from the input is generally unknown. The problem is to estimate the transformation function and then use it to register the two images:

- it may be of interest to align (register) two or more images taken at approximately the same time, but using different imaging systems (e.g., an MRI scanner and a PET scanner);
- align images of a given location taken by the same instrument at different moments in time (satellite images).

The problem can be solved using tie points (also called control points): corresponding points whose locations are known precisely in the input and reference images.


How to select tie points?

- by selecting them interactively;
- by using algorithms that try to detect these points automatically;
- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors; these objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points.

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points, both in the input image and in the reference image. A simple model based on a bilinear approximation is given by

\[ x = c_1 v + c_2 w + c_3 v w + c_4 \]
\[ y = c_5 v + c_6 w + c_7 v w + c_8 \]

where (v, w) and (x, y) are the coordinates of the tie points (we get an 8x8 linear system for the \(c_i\)).
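Given the 4 tie-point pairs, the coefficients follow from that 8×8 linear solve; a Python/NumPy sketch, with tie-point coordinates invented purely for illustration:

```python
import numpy as np

# Four tie-point pairs: (v, w) in the input image, (x, y) in the reference image.
vw = np.array([[10, 10], [10, 200], [200, 10], [200, 200]], dtype=float)
xy = np.array([[12, 14], [11, 205], [204, 13], [206, 208]], dtype=float)

# Rows encode x = c1 v + c2 w + c3 v w + c4 and y = c5 v + c6 w + c7 v w + c8.
A = np.zeros((8, 8))
b = np.zeros(8)
for i, ((v, w), (x, y)) in enumerate(zip(vw, xy)):
    A[2 * i, 0:4] = [v, w, v * w, 1]       # x-equation row
    A[2 * i + 1, 4:8] = [v, w, v * w, 1]   # y-equation row
    b[2 * i], b[2 * i + 1] = x, y

c = np.linalg.solve(A, b)                  # coefficients c1..c8
```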


When 4 tie points are insufficient to obtain satisfactory registration, an approach used frequently is to select a larger number of tie points and, using this new set, subdivide the image into rectangular subregions marked by groups of 4 tie points. On each subregion we apply the transformation model described above. The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometric distortion.

Fig.: (a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c).


Probabilistic Methods

z_i, i = 0, 1, …, L−1 = the values of all possible intensities in an M×N digital image.

p(z_k) = the probability that the intensity level z_k occurs in the given image:

\[ p(z_k) = \frac{n_k}{MN} \]

where n_k is the number of times that intensity z_k occurs in the image (MN is the total number of pixels in the image), so

\[ \sum_{k=0}^{L-1} p(z_k) = 1 \]

The mean (average) intensity of an image is given by

\[ m = \sum_{k=0}^{L-1} z_k \, p(z_k) \]


The variance of the intensities is

\[ \sigma^2 = \sum_{k=0}^{L-1} (z_k - m)^2 \, p(z_k) \]

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

\[ \mu_n(z) = \sum_{k=0}^{L-1} (z_k - m)^n \, p(z_k) \]

(so μ0(z) = 1, μ1(z) = 0, μ2(z) = σ²)

μ3(z) > 0: the intensities are biased to values higher than the mean;
μ3(z) < 0: the intensities are biased to values lower than the mean;

μ3(z) = 0: the intensities are distributed approximately equally on both sides of the mean.
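These histogram-based statistics are straightforward to compute; a sketch assuming an 8-bit image stored as a non-negative integer NumPy array (the function name intensity_stats is ours):

```python
import numpy as np

def intensity_stats(f, L=256):
    """Mean, variance and third central moment from the normalized histogram
    (f is assumed to be a non-negative integer array, e.g. uint8)."""
    nk = np.bincount(f.ravel(), minlength=L)   # n_k, k = 0..L-1
    p = nk / f.size                            # p(z_k) = n_k / (M*N)
    z = np.arange(L)
    m = np.sum(z * p)                          # mean intensity
    var = np.sum((z - m) ** 2 * p)             # variance (contrast measure)
    mu3 = np.sum((z - m) ** 3 * p)             # sign tells the bias about the mean
    return m, var, mu3
```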

Fig. 1: (a) low contrast; (b) medium contrast; (c) high contrast.

Figure 1(a) – standard deviation 14.3 (variance = 204.5)
Figure 1(b) – standard deviation 31.6 (variance = 998.6)
Figure 1(c) – standard deviation 49.2 (variance = 2420.6)


Intensity Transformations and Spatial Filtering

g(x, y) = T[f(x, y)]

f(x, y) – input image; g(x, y) – output image; T – an operator on f, defined over a neighborhood of (x, y).

The neighborhood of the point (x, y), S_xy, usually is rectangular, centered on (x, y), and much smaller in size than the image.


- spatial filtering: the operator T (the neighborhood and the operation applied on it) is called a spatial filter (spatial mask, kernel, template, or window).

When S_xy = {(x, y)} (a 1×1 neighborhood), T becomes an intensity (gray-level or mapping) transformation function

s = T(r)

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2 (left): T produces an output image of higher contrast than the original by darkening the intensity levels below k and brightening the levels above k – this technique is called contrast stretching.

Fig. 2: intensity transformation functions; left – contrast stretching; right – thresholding function.


Figure 2 (right): T produces a binary output image; a mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L−1] is obtained using the function s = T(r) = L − 1 − r

- equivalent of a photographic negative
- technique suited for enhancing white or gray detail embedded in dark regions of an image
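As a quick illustration (assuming an 8-bit image as a NumPy array; the function name is ours):

```python
import numpy as np

def negative(f, L=256):
    """s = T(r) = (L - 1) - r, applied elementwise to the whole image."""
    return ((L - 1) - f.astype(int)).astype(np.uint8)  # cast avoids uint8 wrap-around
```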

Original image (left); negative image (right).


Log Transformations: s = T(r) = c log(1 + r), where c is a positive constant and r ≥ 0.

Some basic intensity transformation functions


This transformation maps a narrow range of low intensity values in the input into a wider range of output values. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values; the opposite is true for the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a) – intensity values in the range 0 to 1.5 × 10^6.
Figure 4(b) – log transformation of Figure 4(a) with c = 1; range 0 to 6.2.

Fig. 4: (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.
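A sketch of the log transformation in Python/NumPy; the choice of c = 255 / log(1 + max f), so that the output fills the full 8-bit range, is our assumption rather than something fixed by the slide:

```python
import numpy as np

def log_transform(f):
    """s = c * log(1 + r); c chosen here so the output spans [0, 255]."""
    r = f.astype(float)
    c = 255.0 / np.log1p(r.max())   # assumes the image is not identically zero
    return (c * np.log1p(r)).astype(np.uint8)
```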


Power-Law (Gamma) Transformations

s = T(r) = c r^γ, with c and γ positive constants (sometimes written s = c (r + ε)^γ, to account for an offset when the input is zero).

Plots of the gamma transformation for different values of γ (c = 1).


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. The curves with γ > 1 have the opposite effect of those generated with γ < 1; c = γ = 1 gives the identity transformation.

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
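A sketch of the power-law transformation, computed on intensities normalized to [0, 1] (the normalization and rescaling are our choices):

```python
import numpy as np

def gamma_transform(f, gamma, c=1.0, L=256):
    """s = c * r**gamma on intensities normalized to [0, 1], rescaled to [0, L-1]."""
    r = f.astype(float) / (L - 1)
    s = c * r ** gamma
    return np.clip(s * (L - 1), 0, L - 1).astype(np.uint8)

# gamma < 1 brightens dark regions; gamma > 1 darkens them (cf. the aerial example).
```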

Fig.: (a) aerial image; (b)–(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively.


Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so that it spans the full intensity range of the recording or display device.

Fig. 5: (a) piecewise-linear transformation function; (b) low-contrast image; (c) result of contrast stretching; (d) result of thresholding.


\[ s = T(r) = \begin{cases} \dfrac{s_1}{r_1}\, r, & r \in [0, r_1] \\[6pt] s_1 + \dfrac{s_2 - s_1}{r_2 - r_1}\,(r - r_1), & r \in [r_1, r_2] \\[6pt] s_2 + \dfrac{L - 1 - s_2}{L - 1 - r_2}\,(r - r_2), & r \in [r_2, L - 1] \end{cases} \]


r1 = s1 and r2 = s2: identity transformation (no change);
r1 = r2, s1 = 0, s2 = L − 1: thresholding function.

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c): contrast stretching, obtained by setting the parameters r1 = r_min, s1 = 0, r2 = r_max, s2 = L − 1, where r_min and r_max denote the minimum and maximum gray levels in the image, respectively. Thus the transformation function stretched the levels linearly from their original range to the full range [0, L−1].

Figure 5(d): the thresholding function was used, with r1 = r2 = m, s1 = 0, s2 = L − 1, where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.
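A sketch of the contrast-stretching map defined by the breakpoints (r1, s1) and (r2, s2); it assumes strictly increasing breakpoints 0 < r1 < r2 < L−1, and the function name is ours:

```python
import numpy as np

def contrast_stretch(f, r1, s1, r2, s2, L=256):
    """Piecewise-linear map through (0, 0), (r1, s1), (r2, s2), (L-1, L-1)."""
    s = np.interp(f.astype(float).ravel(), [0, r1, r2, L - 1], [0, s1, s2, L - 1])
    return s.reshape(f.shape).astype(np.uint8)

# Full-range stretch in the spirit of Fig. 5(c), with illustrative breakpoints:
# g = contrast_stretch(f, 70, 0, 140, 255)
```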


Intensity-level slicing

- highlighting a specific range of intensities in an image.

There are two approaches to intensity-level slicing:

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities (Figure 3.11(a));
2. brighten (or darken) the desired range of intensities but leave unchanged all other intensities in the image (Figure 3.11(b)).
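Both approaches reduce to a simple masked assignment; a sketch (function names are ours, assuming 8-bit NumPy arrays):

```python
import numpy as np

def slice_to_binary(f, A, B, hi=255, lo=0):
    """Approach 1: one value inside [A, B], another value everywhere else."""
    return np.where((f >= A) & (f <= B), hi, lo).astype(np.uint8)

def slice_preserving(f, A, B, hi=255):
    """Approach 2: highlight the range [A, B], leave other intensities unchanged."""
    return np.where((f >= A) & (f <= B), hi, f).astype(np.uint8)
```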


Figure 6 (left): aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the scale of intensities: this type of enhancement produces a binary image, which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example). It highlights the intensity range [A, B] and reduces all other intensities to a lower level.

In Figure 6 (right) the second technique was used: a band of intensities around the mean intensity of the image was set to black, while the other intensities remain unchanged; it highlights the range [A, B] and preserves all other intensities.

Fig. 6: aortic angiogram and intensity-sliced versions.


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2. This technique highlights the contribution made to the whole image appearance by each of the bits: an 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit, plane 8 – the highest-order bit).


The binary image for the 8th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all intensities between 0 and 127 to 0 and all levels between 128 and 255 to 1. Bit-plane slicing is useful for analyzing the relative importance of each bit in the image, which helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
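A sketch of extracting bit planes (assuming an unsigned 8-bit NumPy array; the function name is ours):

```python
import numpy as np

def bit_plane(f, k):
    """Bit plane k of an 8-bit image (k = 1: lowest-order, k = 8: highest-order)."""
    return (f.astype(np.uint8) >> (k - 1)) & 1

# The 8th plane is exactly the threshold described above:
# (bit_plane(f, 8) == (f >= 128)).all() holds for any uint8 image f.
```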

  • DIP 1 2017
  • DIP 02 (2017)
Page 44: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Fundamental Steps in DIP

methods whose input and output are images

methods whose inputs are images but whose outputs are attributes extracted from those images

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Outputs are images

bull image acquisition

bull image filtering and enhancement

bull image restoration

bull color image processing

bull wavelets and multiresolution processing

bull compression

bull morphological processing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Outputs are attributes

bull morphological processing

bull segmentation

bull representation and description

bull object recognition

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image acquisition - may involve preprocessing such as scaling

Image enhancement

bull manipulating an image so that the result is more suitable than the original for a specific operation

bull enhancement is problem oriented

bull there is no general sbquotheoryrsquo of image enhancement

bull enhancement use subjective methods for image emprovement

bull enhancement is based on human subjective preferences regarding what is a bdquogoodrdquo enhancement result

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image restoration

bull improving the appearance of an image

bull restoration is objective - the techniques for restoration are based on mathematical or probabilistic models of image degradation

Color image processing

bull fundamental concept in color models

bull basic color processing in a digital domain

Wavelets and multiresolution processing

representing images in various degree of resolution

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Compression

reducing the storage required to save an image or the bandwidth required to transmit it

Morphological processing

bull tools for extracting image components that are useful in the representation and description of shape

bull a transition from processes that output images to processes that outputimage attributes

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Segmentation

bull partitioning an image into its constituents parts or objects

bull autonomous segmentation is one of the most difficult tasks of DIP

bull the more accurate the segmentation the more likley recognition is to succeed

Representation and description (almost always follows segmentation)

bull segmentation produces either the boundary of a region or all the poits in the region itself

bull converting the data produced by segmentation to a form suitable for computer processing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

bull boundary representation the focus is on external shape characteristics such as corners or inflections

bull complete region the focus is on internal properties such as texture or skeletal shape

bull description is also called feature extraction ndash extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another

Object recognition

the process of assigning a label (eg bdquovehiclerdquo) to an object based on itsdescriptors

Knowledge database

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Simplified diagramof a cross sectionof the human eye

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Three membranes enclose the eye the cornea and sclera outer cover the choroid the retina

The cornea is a tough transparent tissue that covers the anterior surface of the eye

Continuous with the cornea the sclera is an opaque membrane that enclosesthe remainder of the optic globe

The choroid lies directly below the sclera This membrane contains a network of blood vessels (major nutrition of the eye) The choroid is pigmented and help reduce the amount of light entering the eyeThe choroid is divided (at its anterior extreme) into the ciliary body and the irisThe iris contracts and expands to control the amount of light

Digital Image ProcessingDigital Image Processing

Week 1Week 1

The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to ciliary body (60-70 water 6 fat protein) The lens is colored in slightly yellow The lens absorbs approximatively 8 of the visiblelight (infrared and ultraviolet light are absorbed by proteins in lens)

The innermost membrane is the retina When the eye is proper focusedlight from an object outside the eye is imaged on the retinaVision is possible because of the distribution of discrete light receptors on the surface of the retina cones and rods (6-7 milion cones 75-150 milion rods)

Cones located in the central part of the retina (fovea) they are sensitive to colors vision of detail each cone is link to its own nervecone vision = photopic or bright-light vision

Fovea = the place where the image of the object of interest falls on

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Rods distributed over al the retina surface several rods are contected toa single nerve not specialized in detail vision serve to give a general overall picture of the filed of viewnot involved in color visionsensitive to low level of illumination

Blind spot region without receptors

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image formation in the eye

Ordinary photographic camera the lens has fixed focal length focusing atvarious distances is done by modifying the distance between the lens and the image plane (were the film or imaging chip are located)

Human eye the distance between the lens and the retina (the imaging region)is fixed the focal length needed to achieve proper focus is obtained by varyingthe shape of the lens (the fibers in the ciliary body accomplish this flattening or thickening the lens for distant or near objects respectively

distance between lens and retina along visual axix = 17 mm

range of focal length = 14 mm to 17 mm

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Illustration of Mach band effectPercieved intensityis not a simple function of actual intensity

Digital Image ProcessingDigital Image Processing

Week 1Week 1

All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Optical illusions

Digital Image ProcessingDigital Image Processing

Week 1Week 1

ALBASTRU BLUEVERDE GREENGALBEN YELLOW ROSU REDPORTOCALIU ORANGEGALBEN YELLOWVISINIU BURGUNDYALB WHITE

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Lightbullachromatic or monochromatic light - light that is void of color the attribute of such light is its intensity or amount

gray level is used to describe monochromatic intensitybecause it ranges from black to grays and to white bull chromatic light spans the electromagnetic energy spectrumfrom approximately 043 to 079 μm

quantities that describe the quality of a chromatic light source radiance

the total amount of energy that flows from the light source and it isusually measured in watts (W) luminance

measured in lumens (lm) gives a measure of the amount of energy an observer perceives from a light source

Digital Image ProcessingDigital Image Processing

Week 1Week 1

For example light emitted from a source operating in the far infraredregion of the spectrum could have significant energy (radiance) butan observer would hardly perceive it its luminance would be almostzero

brightnessa subjective descriptor of light perception that is practicallyimpossible to measure It embodies the achromatic notion ofintensity and is one of the key factors in describing color sensation

Digital Image ProcessingDigital Image Processing

Week 1Week 1

the physical meaning is determined by the source of the image

( )f D f x y

Image generated from a physical process f(xy) ndash proportional to the energy radiated by the physical source

f(xy) ndash characterized by two components

i(xy) = illumination component the amount of source illumination incident on the scene being viewed

r(xy) = reflectance component the amount of illumination reflected by the objects in the scene

( ) ( ) ( )

0 ( ) 0 ( ) 1

f x y i x y r x y

i x y r x y

Digital Image ProcessingDigital Image Processing

Week 1Week 1

r(xy)=0 - total absorption r(xy)=1 - total reflectance

i(xy) ndash determined by the illumination source

r(xy) ndash determined by the characteristics of the imaged objects

is called gray (or intensity) scale

In practice

min 0 0 max min min min max max max( ) L l f x y L L i r L i r

indoor values without additional illuminationmin max10 1000L L

black whitemin max0 1 0 1 0 1L L L L l l L

min maxL L

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image Processing

Week 1

Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed

scene

converting a continuous image f to digital form

- digitizing (x y) is called sampling

- digitizing f(x y) is called quantization

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Continuous image projected onto a sensor array Result of image sampling and quantization

Digital Image Processing

Week 1

Representing Digital Images (xy) x = 01hellipM-1 y = 01hellipN-1 ndash spatial variables or spatial coordinates

(00) (01) (0 1)(10) (11) (1 1)

( )

( 10) ( 11) ( 1 1)

f f f Nf f f N

f x y

f M f M f M N

image element pixel

00 01 0 1

10 11 1 1

10 11 1 1

( ) ( )

N

i jN M N

i j

M M M N

a a aa f x i y j f i ja a a

Aa

a a a

f(00) ndash the upper left corner of the image

Digital Image Processing

Week 1

M N ge 0 L=2k

[0 1]i j i ja a L

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system Upper limit ndash determined by saturation lower limit - noise

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Number of bits required to store a digitized image

for 2 b M N k M N b N k

When an image can have 2k intensity levels the image is referred as a k-bit image 256 discrete intensity values ndash 8-bit image

Digital Image Processing

Week 1

Spatial and Intensity Resolution Spatial resolution ndash the smallest discernible detail in an image

Measures line pairs per unit distance dots (pixels) per unit distance

Image resolution = the largest number of discernible line pairs per unit distance

(eg 100 line pairs per mm)

Dots per unit distance are commonly used in printing and publishing

In US the measure is expressed in dots per inch (dpi)

(newspapers are printed with 75 dpi glossy brochures at 175 dpi)

Intensity resolution ndash the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations

L=2k ndash most common k = 8

Intensity resolution in practice is given by k (number of bits used to quantize intensity)

Digital Image Processing

Week 1

Fig1 Reducing spatial resolution 1250 dpi(upper left) 300 dpi (upper right)

150 dpi (lower left) 72 dpi (lower right)

Digital Image Processing

Week 1

Reducing the number of gray levels 256 128 64 32

Digital Image Processing

Week 1

Reducing the number of gray levels 16 8 4 2

Digital Image Processing

Week 1

Image Interpolation - used in zooming shrinking rotating and geometric corrections

Shrinking zooming ndash image resizing ndash image resampling methods

Interpolation is the process of using known data to estimate values at unknown locations

Suppose we have an image of size 500 times 500 pixels that has to be enlarged 15 times to

750times750 pixels One way to do this is to create an imaginary 750 times 750 grid with the

same spacing as the original and then shrink it so that it fits exactly over the original

image The pixel spacing in the 750 times 750 grid will be less than in the original image

Problem assignment of intensity-level in the new 750 times 750 grid

Nearest neighbor interpolation assign for every point in the new grid (750 times 750) the

intensity of the closest pixel (nearest neighbor) from the oldoriginal grid (500 times 500)

This technique has the tendency to produce undesirable effects like severe distortion of

straight edges

Digital Image Processing

Week 1

Bilinear interpolation ndash assign for the new (x y) location the following intensity ( )v x y a x b y c x y d

where the four coefficients are determined from the 4 equations in 4 unknowns that can

be written using the 4 nearest neighbors of point (x y)

Bilinear interpolation gives much better results than nearest neighbor interpolation with a

modest increase in computational effort

Bicubic interpolation ndash assign for the new (x y) location an intensity that involves the 16

nearest neighbors of the point 3 3

0 0

( ) i ji j

i jv x y c x y

The coefficients cij are obtained solving a 16x16 linear system

intensity levels of the 16 nearest neighbors of 3 3

0 0

( )i ji j

i jc x y x y

Digital Image Processing

Week 1

Generally bicubic interpolation does a better job of preserving fine detail than the

bilinear technique Bicubic interpolation is the standard used in commercial image editing

programs such as Adobe Photoshop and Corel Photopaint

Figure 2 (a) is the same as Fig 1 (d) which was obtained by reducing the resolution of

the 1250 dpi in Fig 1(a) to 72 dpi (the size shrank from 3692 times 2812 to 213 times 162) and

then zooming the reduced image back to its original size To generate Fig 1(d) nearest

neighbor interpolation was used (both for shrinking and zooming)

Figures 2(b) and (c) were generated using the same steps but using bilinear and bicubic

interpolation respectively Figures 2(d)+(e)+(f) were obtained by reducing the resolution

from 1250 dpi to 150 dpi (instead of 72 dpi)

Digital Image Processing

Week 1

Fig 2 ndash Interpolation examples for zooming and shrinking (nearest neighbor linear bicubic)

Digital Image Processing

Week 1

Neighbors of a Pixel

A pixel p at coordinates (x y) has 4 horizontal and vertical neighbors horizontal vertical( 1 ) ( 1 ) ( 1) ( 1)x y x y x y x y

This set of pixels called the 4-neighbors of p denoted by N4 (p)

The 4 diagonal neighbors of p have coordinates ( 1 1) ( 1 1) ( 1 1) ( 1 1)x y x y x y x y

and are denoted ND(p)

The horizontal vertical and diagonal neighbors are called the 8-neighbors of p denoted

N8 (p)

If (xy) is on the border of the image some of the neighbor locations in ND(p) and N8(p)

fall outside the image

Digital Image Processing

Week 1

Adjacency Connectivity Regions Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image V 01 (V=0 V=1)

- in a gray-scale image with 256 possible gray-levels V can be any subset of 0255

We consider 3 types of adjacency

(a) 4-adjacency two pixels p and q with values from V are 4-adjacent if 4( )q N p

(b) 8-adjacency two pixels p and q with values from V are 8-adjacent if 8( )q N p

(c) m-adjacency (mixed adjacency) two pixels p and q with values from V are

m-adjacent if

4( )q N p or

( )Dq N p and the set 4 4( ) ( )N p N q has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency It is introduced to eliminate the

ambiguities that often arise when 8-adjacency is used Consider the example

Digital Image Processing

Week 1

binary image

0 1 1 0 1 1 0 1 1

1 0 1 0 0 1 0 0 1 0

0 0 1 0 0 1 0 0 1

V

The three pixels at the top (first line) in the above example show multiple (ambiguous)

8-adjacency as indicated by the dashed lines This ambiguity is removed by using

m-adjacency

Digital Image Processing

Week 1

A (digital) path (or curve) from pixel p with coordinates (xy) to q with coordinates (st)

is a sequence of distinct pixels with coordinates

and are adjacent 0 0 1 1

1 1

( ) ( ) ( ) ( ) ( )( ) ( ) 12

n n

i i i i

x y x y x y x y s tx y x y i n

The length of the path is n If 0 0( ) ( )n nx y x y the path is closed

Depending on the type of adjacency considered the paths are 4- 8- or m-paths

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image R is a region of the image if R is a connected set

Two regions R1 and R2 are said to be adjacent if 1 2R R form a connected set Regions

that are not adjacent are said to be disjoint When referring to regions only 4- and

8-adjacency are considered

Digital Image Processing

Week 1

Suppose that an image contains K disjoint regions 1 kR k K none of which

touches the image border

the complement of 1

( )K

cu k u u

k

R R R R

We call all the points in Ru the foreground of the image and the points in ( )cuR the

background of the image

The boundary (border or contour) of a region R is the set of points that are adjacent to

points in the complement of R (R)c The border of an image is the set of pixels in the

region that have at least one background neighbor This definition is referred to as the

inner border to distinguish it from the notion of outer border which is the corresponding

border in the background

Digital Image Processing

Week 1

Distance measures

For pixels p q and z with coordinates (xy) (st) and (vw) respectively D is a distance

function or metric if

(a) D(p q) ge 0 D(p q) = 0 iff p=q

(b) D(p q) = D(q p)

(c) D(p z) le D(p q) + D(q z)

The Euclidean distance between p and q is defined as 1

2 2 2 22( ) ( ) ( ) ( ) ( )eD p q x s y t x s y t

The pixels q for which ( )eD p q r are the points contained in a disk of radius r

centered at (x y)

Digital Image Processing

Week 1

The D4 distance (also called city-block distance) between p and q is defined as

4( ) | | | |D p q x s y t

The pixels q for which 4( )D p q r form a diamond centered at (xy)

4

22 1 2

2 2 1 0 1 22 1 2

2

D

The pixels with D4 = 1 are the 4-neighbors of (x y)

The D8 distance (called the chessboard distance) between p and q is defined as

8( ) max| | | |D p q x s y t

The pixels q for which 8( )D p q r form a square centered at (x y)

Digital Image Processing

Week 1

8

2 2 2 2 22 1 1 1 2

2 2 1 0 1 22 1 1 1 22 2 2 2 2

D

The pixels with D8 = 1 are the 8-neighbors of (x y)

D4 and D8 distances are independent of any paths that might exist between p and q

because these distances involve only the coordinates of the point

Digital Image Processing

Week 1

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

11 12 11 12

21 22 21 22

a a b ba a b b

Array product

11 12 11 12 11 11 12 12

21 22 21 22 21 21 22 21

a a b b a b a ba a b b a b a b

Matrix product

11 12 11 12 11 11 12 21 11 12 12 21

21 22 21 22 21 11 22 21 21 12 22 22

a a b b a b a b a b a ba a b b a b a b a b a b

We assume array operations unless stated otherwise

Digital Image Processing

Week 1

Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether it is

linear or nonlinear

( ) ( )H f x y g x y

H is said to be a linear operator if

images1 2 1 2

1 2

( ) ( ) ( ) ( )

H a f x y b f x y a H f x y b H f x y

a b f f

Example of nonlinear operator

the maximum value of the pixels of image max ( )H f f x y f

1 2

0 2 6 5 1 1

2 3 4 7f f a b

Digital Image Processing

Week 1

1 2

0 2 6 5 6 3max max 1 ( 1) max 2

2 3 4 7 2 4a f b f

0 2 6 51 max ( 1) max 3 ( 1)7 4

2 3 4 7

Arithmetic Operations in Image Processing

Let g(xy) denote a corrupted image formed by the addition of noise ( ) ( ) ( )g x y f x y x y

f(xy) ndash the noiseless image η(xy) the noise uncorrelated and has 0 average value

For a random variable z with mean m E[(z-m)2] is the variance (E( ∙ ) is the expected

value) The covariance of two random variables z1 and z2 is defined as E[(z1-m1) (z2-m2)]

The two random variables are uncorrelated when their covariance is 0

Digital Image Processing

Week 1

Objective reduce noise by adding a set of noisy images ( )ig x y (technique frequently

used in image enhancement)

1

1( ) ( )K

ii

g x y g x yK

If the noise satisfies the properties stated above we have

2 2( ) ( )

1( ) ( ) g x y x yE g x y f x yK

( ( ))E g x y is the expected value of g and and 2 2( ) ( )g x y x y are the variances of

and g respectively The standard deviation (square root of the variance) at any point in

the average image is

( ) ( )1

g x y x yK

Digital Image Processing

Week 1

As K increases the variability (as measured by the variance or the standard deviation) of

the pixel values at each location (x y) decreases Because ( ) ( )E g x y f x y this

means that ( )g x y approaches f(x y) as the number of noisy images used in the

averaging process increases

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis Figure 226(a) shows an 8-bit image in which corruption

was simulated by adding to it Gaussian noise with zero mean and a standard deviation of

64 intensity levels Figures 226(b)-(f) show the result of averaging 5 10 20 50 and 100

images respectively

Digital Image Processing

Week 1

Fig 3 Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner) Results of averaging 5 10 20 50

100 noisy images

a b c d e f

Digital Image Processing

Week 1

A frequent application of image subtraction is in the enhancement of differences between

images

(a) (b) (c)

Fig 4 (a) Infrared image of Washington DC area (b) Image obtained from (a) by setting to zero the least

significant bit of each pixel (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a) The two images seem almost the same Figure 4(c) is the difference between

images (a) and (b) Black (0) values in Figure (c) indicate locations where there is no

difference between images (a) and (b)

Digital Image Processing

Week 1

Mask mode radiography ( ) ( ) ( )g x y f x y h x y

h(x y) the mask is an X-ray image of a region of a patientrsquos body captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source The procedure consists of injecting an X-ray contrast medium into the patientrsquos

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed

Digital Image Processing

Week 1

a b c d Fig 5 ndash Angiography ndash subtraction example (a) ndash mask image (b) ndash live image (c) ndash difference between (a) and (b) (d) - image (c) enhanced

Digital Image Processing

Week 1

An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form ( ) ( ) ( )g x y f x y h x y

f(x y) ndash the ldquoperfect imagerdquo h(x y) ndash the shading function

When the shading function is known

( )( )( )

g x yf x yh x y

h(x y) is unknown but we have access to the imaging system we can obtain an

approximation to the shading function by imaging a target of constant intensity When the

sensor is not available often the shading pattern can be estimated from the image

Digital Image Processing

Week 1

(a) (b) (c)

Fig 6 Shading correction (a) ndash Shaded image of a tungsten filament magnified 130 times (b) - shading pattern (c) corrected image

Digital Image Processing

Week 1

Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1rsquos (white) in the ROI and 0rsquos elsewhere There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

(a) (b) (c)

Fig 7 (a) ndash digital dental X-ray image (b) - ROI mask for teeth with fillings (c) product of (a) and (b)

Digital Image Processing

Week 1

In practice most images are displayed using 8 bits the image values are in the range [0255] TIFF JPEG images ndash conversion to this range is automatic The conversion depends on the system used Difference of two images can produce image with values in the range [-255255] Addition of two images ndash range [0510] Many software packages simply set the negative values to 0 and set to 255 all values greater than 255 A more appropriate procedure compute

min( )mf f f

0 ( 255)max( )

ms

m

ff K K K

f

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Digital Image Processing

Week 1

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

( )

1( ) ( )xyr c S

g x y f r cm n

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation ( ) [( )]x y T v w

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image

Digital Image Processing

Week 1

[( )] ( )2 2v wT v w shrinks the original image half its size in both spatial directions

Affine transform

11 1211 21 31

21 2212 22 33

31 32

0[ 1] [ 1] [ 1] 0

1

t tx t v t w t

x y v w T v w t ty t v t w t

t t

(AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

Digital Image Processing

Week 1

inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image (v w) 1( ) ( )v w T x y

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 45: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Outputs are images

bull image acquisition

bull image filtering and enhancement

bull image restoration

bull color image processing

bull wavelets and multiresolution processing

bull compression

bull morphological processing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Outputs are attributes

bull morphological processing

bull segmentation

bull representation and description

bull object recognition

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image acquisition - may involve preprocessing such as scaling

Image enhancement

bull manipulating an image so that the result is more suitable than the original for a specific operation

bull enhancement is problem oriented

bull there is no general sbquotheoryrsquo of image enhancement

bull enhancement use subjective methods for image emprovement

bull enhancement is based on human subjective preferences regarding what is a bdquogoodrdquo enhancement result

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image restoration

bull improving the appearance of an image

bull restoration is objective - the techniques for restoration are based on mathematical or probabilistic models of image degradation

Color image processing

bull fundamental concept in color models

bull basic color processing in a digital domain

Wavelets and multiresolution processing

representing images in various degree of resolution

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Compression

reducing the storage required to save an image or the bandwidth required to transmit it

Morphological processing

bull tools for extracting image components that are useful in the representation and description of shape

bull a transition from processes that output images to processes that outputimage attributes

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Segmentation

bull partitioning an image into its constituents parts or objects

bull autonomous segmentation is one of the most difficult tasks of DIP

bull the more accurate the segmentation the more likley recognition is to succeed

Representation and description (almost always follows segmentation)

bull segmentation produces either the boundary of a region or all the poits in the region itself

bull converting the data produced by segmentation to a form suitable for computer processing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

bull boundary representation the focus is on external shape characteristics such as corners or inflections

bull complete region the focus is on internal properties such as texture or skeletal shape

bull description is also called feature extraction ndash extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another

Object recognition

the process of assigning a label (eg bdquovehiclerdquo) to an object based on itsdescriptors

Knowledge database

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Simplified diagramof a cross sectionof the human eye

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Three membranes enclose the eye the cornea and sclera outer cover the choroid the retina

The cornea is a tough transparent tissue that covers the anterior surface of the eye

Continuous with the cornea the sclera is an opaque membrane that enclosesthe remainder of the optic globe

The choroid lies directly below the sclera This membrane contains a network of blood vessels (major nutrition of the eye) The choroid is pigmented and help reduce the amount of light entering the eyeThe choroid is divided (at its anterior extreme) into the ciliary body and the irisThe iris contracts and expands to control the amount of light

Digital Image ProcessingDigital Image Processing

Week 1Week 1

The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to ciliary body (60-70 water 6 fat protein) The lens is colored in slightly yellow The lens absorbs approximatively 8 of the visiblelight (infrared and ultraviolet light are absorbed by proteins in lens)

The innermost membrane is the retina When the eye is proper focusedlight from an object outside the eye is imaged on the retinaVision is possible because of the distribution of discrete light receptors on the surface of the retina cones and rods (6-7 milion cones 75-150 milion rods)

Cones located in the central part of the retina (fovea) they are sensitive to colors vision of detail each cone is link to its own nervecone vision = photopic or bright-light vision

Fovea = the place where the image of the object of interest falls on

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Rods distributed over al the retina surface several rods are contected toa single nerve not specialized in detail vision serve to give a general overall picture of the filed of viewnot involved in color visionsensitive to low level of illumination

Blind spot region without receptors

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image formation in the eye

Ordinary photographic camera the lens has fixed focal length focusing atvarious distances is done by modifying the distance between the lens and the image plane (were the film or imaging chip are located)

Human eye the distance between the lens and the retina (the imaging region)is fixed the focal length needed to achieve proper focus is obtained by varyingthe shape of the lens (the fibers in the ciliary body accomplish this flattening or thickening the lens for distant or near objects respectively

distance between lens and retina along visual axix = 17 mm

range of focal length = 14 mm to 17 mm

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Illustration of Mach band effectPercieved intensityis not a simple function of actual intensity

Digital Image ProcessingDigital Image Processing

Week 1Week 1

All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Optical illusions

Digital Image ProcessingDigital Image Processing

Week 1Week 1

ALBASTRU BLUEVERDE GREENGALBEN YELLOW ROSU REDPORTOCALIU ORANGEGALBEN YELLOWVISINIU BURGUNDYALB WHITE

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Lightbullachromatic or monochromatic light - light that is void of color the attribute of such light is its intensity or amount

gray level is used to describe monochromatic intensitybecause it ranges from black to grays and to white bull chromatic light spans the electromagnetic energy spectrumfrom approximately 043 to 079 μm

quantities that describe the quality of a chromatic light source radiance

the total amount of energy that flows from the light source and it isusually measured in watts (W) luminance

measured in lumens (lm) gives a measure of the amount of energy an observer perceives from a light source

Digital Image ProcessingDigital Image Processing

Week 1Week 1

For example, light emitted from a source operating in the far infrared region of the spectrum could have significant energy (radiance), but an observer would hardly perceive it; its luminance would be almost zero.

• brightness: a subjective descriptor of light perception that is practically impossible to measure. It embodies the achromatic notion of intensity and is one of the key factors in describing color sensation.

f : D → ℝ, f(x, y) ≥ 0 and finite; the physical meaning of f is determined by the source of the image.

For an image generated from a physical process, f(x, y) is proportional to the energy radiated by the physical source.

f(x, y) is characterized by two components:

i(x, y) = illumination component: the amount of source illumination incident on the scene being viewed;

r(x, y) = reflectance component: the amount of illumination reflected by the objects in the scene;

f(x, y) = i(x, y) · r(x, y), with 0 < i(x, y) < ∞ and 0 < r(x, y) < 1.

r(x, y) = 0 – total absorption; r(x, y) = 1 – total reflectance.

i(x, y) – determined by the illumination source.

r(x, y) – determined by the characteristics of the imaged objects.

In practice, the intensity of a monochrome image at a point (x0, y0), ℓ = f(x0, y0), lies in the range L_min ≤ ℓ ≤ L_max, where L_min = i_min · r_min and L_max = i_max · r_max (typical indoor values without additional illumination: L_min ≈ 10, L_max ≈ 1000).

The interval [L_min, L_max] is called the gray (or intensity) scale. In practice the interval is commonly shifted to [0, L−1], where ℓ = 0 is considered black and ℓ = L−1 is considered white; all intermediate values are shades of gray.

Image Sampling and Quantization

- the output of most sensors is a continuous voltage waveform related to the sensed scene

Converting a continuous image f to digital form:
- digitizing the coordinate values (x, y) is called sampling;
- digitizing the amplitude values f(x, y) is called quantization.

Figure: a continuous image projected onto a sensor array, and the result of image sampling and quantization.

Representing Digital Images

(x, y), x = 0, 1, …, M−1, y = 0, 1, …, N−1 – spatial variables or spatial coordinates.

A digital image can be written as an M × N matrix:

f(x, y) =
| f(0,0)     f(0,1)     …  f(0,N−1)   |
| f(1,0)     f(1,1)     …  f(1,N−1)   |
| …                                   |
| f(M−1,0)   f(M−1,1)   …  f(M−1,N−1) |

or, in traditional matrix notation,

A =
| a(0,0)     a(0,1)     …  a(0,N−1)   |
| a(1,0)     a(1,1)     …  a(1,N−1)   |
| …                                   |
| a(M−1,0)   a(M−1,1)   …  a(M−1,N−1) |

where a(i, j) = f(x = i, y = j) = f(i, j). Each element of the array is called an image element or pixel.

f(0,0) – the upper left corner of the image.

M, N > 0; the number of intensity levels is L = 2^k, and a(i, j) ∈ [0, L−1].

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system. Upper limit – determined by saturation; lower limit – determined by noise.

Number of bits required to store a digitized image: b = M × N × k (for M = N, b = N²·k).

When an image can have 2^k intensity levels, the image is referred to as a k-bit image; an image with 256 discrete intensity values is an 8-bit image.
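As a quick check of the formula, the following sketch (Python with NumPy; the image size is a made-up example) compares b = M·N·k with the number of bytes NumPy actually allocates:

```python
import numpy as np

M, N, k = 1024, 1024, 8                  # hypothetical M x N image with 2^k levels
b = M * N * k                            # bits required to store the image
print(b, "bits =", b // 8, "bytes")      # 8388608 bits = 1048576 bytes

img = np.zeros((M, N), dtype=np.uint8)   # one byte per pixel when k = 8
print(img.nbytes)                        # 1048576, matching b / 8
```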


Spatial and Intensity Resolution

Spatial resolution – the smallest discernible detail in an image.

Measures: line pairs per unit distance; dots (pixels) per unit distance.

Image resolution = the largest number of discernible line pairs per unit distance (e.g., 100 line pairs per mm).

Dots per unit distance is a measure commonly used in printing and publishing. In the US this measure is expressed in dots per inch (dpi); newspapers are printed at 75 dpi, glossy brochures at 175 dpi.

Intensity resolution – the smallest discernible change in intensity level.

The number of intensity levels (L) is usually determined by hardware considerations: L = 2^k, most commonly with k = 8.

Intensity resolution in practice is given by k (the number of bits used to quantize intensity).

Fig. 1 – Reducing spatial resolution: 1250 dpi (upper left), 300 dpi (upper right), 150 dpi (lower left), 72 dpi (lower right).

Reducing the number of gray levels 256 128 64 32


Reducing the number of gray levels 16 8 4 2


Image Interpolation – used in zooming, shrinking, rotating, and geometric corrections.

Shrinking and zooming – image resizing – are image resampling methods.

Interpolation is the process of using known data to estimate values at unknown locations.

Suppose we have an image of size 500 × 500 pixels that has to be enlarged 1.5 times, to 750 × 750 pixels. One way to do this is to create an imaginary 750 × 750 grid with the same spacing as the original and then shrink it so that it fits exactly over the original image. The pixel spacing in the 750 × 750 grid will be less than in the original image.

Problem: assignment of intensity levels in the new 750 × 750 grid.

Nearest neighbor interpolation: assign to every point in the new grid (750 × 750) the intensity of the closest pixel (nearest neighbor) from the old/original grid (500 × 500). This technique has the tendency to produce undesirable effects, like severe distortion of straight edges.

Bilinear interpolation – assign to the new (x, y) location the intensity

v(x, y) = a·x + b·y + c·x·y + d

where the four coefficients are determined from the 4 equations in 4 unknowns that can be written using the 4 nearest neighbors of point (x, y). Bilinear interpolation gives much better results than nearest neighbor interpolation, with a modest increase in computational effort.

Bicubic interpolation – assign to the new (x, y) location an intensity that involves the 16 nearest neighbors of the point:

v(x, y) = Σ (i = 0..3) Σ (j = 0..3) c(i,j) · x^i · y^j

The 16 coefficients c(i,j) are obtained by solving a 16×16 linear system written using the intensity levels of the 16 nearest neighbors of (x, y).
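As an illustration, here is a minimal NumPy sketch of nearest-neighbor and bilinear resizing (the function names and the tiny test image are ours, not part of the course material; bicubic is omitted because the 16-coefficient fit is longer):

```python
import numpy as np

def resize_nearest(f, new_h, new_w):
    # Each output pixel copies the closest pixel of the original grid.
    h, w = f.shape
    rows = (np.arange(new_h) * h / new_h).astype(int)
    cols = (np.arange(new_w) * w / new_w).astype(int)
    return f[rows[:, None], cols[None, :]]

def resize_bilinear(f, new_h, new_w):
    # v(x, y) = a*x + b*y + c*x*y + d, fitted on the 4 nearest neighbors.
    h, w = f.shape
    y = np.arange(new_h) * (h - 1) / (new_h - 1)   # fractional input coordinates
    x = np.arange(new_w) * (w - 1) / (new_w - 1)
    y0 = np.floor(y).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(x).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (y - y0)[:, None]; wx = (x - x0)[None, :]
    f = f.astype(float)
    top = f[y0[:, None], x0] * (1 - wx) + f[y0[:, None], x1] * wx
    bot = f[y1[:, None], x0] * (1 - wx) + f[y1[:, None], x1] * wx
    return top * (1 - wy) + bot * wy

f = np.arange(16, dtype=np.uint8).reshape(4, 4)
print(resize_nearest(f, 6, 6))
print(resize_bilinear(f, 6, 6).round(1))
```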


Generally, bicubic interpolation does a better job of preserving fine detail than the bilinear technique. Bicubic interpolation is the standard used in commercial image-editing programs, such as Adobe Photoshop and Corel Photopaint.

Figure 2(a) is the same as Fig. 1(d), which was obtained by reducing the resolution of the 1250 dpi image in Fig. 1(a) to 72 dpi (the size shrank from 3692 × 2812 to 213 × 162 pixels) and then zooming the reduced image back to its original size. To generate Fig. 1(d), nearest neighbor interpolation was used (both for shrinking and zooming).

Figures 2(b) and (c) were generated using the same steps but using bilinear and bicubic interpolation, respectively. Figures 2(d), (e), and (f) were obtained by reducing the resolution from 1250 dpi to 150 dpi (instead of 72 dpi).

Fig. 2 – Interpolation examples for zooming and shrinking (nearest neighbor, bilinear, bicubic).

Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors:

(x+1, y), (x−1, y), (x, y+1), (x, y−1)

This set of pixels is called the 4-neighbors of p, denoted by N4(p).

The 4 diagonal neighbors of p have coordinates

(x+1, y+1), (x+1, y−1), (x−1, y+1), (x−1, y−1)

and are denoted ND(p).

The horizontal, vertical, and diagonal neighbors together are called the 8-neighbors of p, denoted N8(p).

If (x, y) is on the border of the image, some of the neighbor locations in ND(p) and N8(p) fall outside the image.
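A direct transcription of these definitions into Python (the helper names are ours) also shows the border issue: neighbor coordinates are generated first and then clipped against the image bounds.

```python
def N4(p):
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def ND(p):
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def N8(p):
    return N4(p) | ND(p)

def inside(nbrs, M, N):
    # keep only the neighbor locations that fall inside an M x N image
    return {(x, y) for (x, y) in nbrs if 0 <= x < M and 0 <= y < N}

print(sorted(N4((0, 0))))                # two of these fall outside the image
print(sorted(inside(N8((0, 0)), 5, 5)))  # only 3 of the 8 neighbors survive
```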


Adjacency, Connectivity, Regions, Boundaries

Denote by V the set of intensity levels used to define adjacency:
- in a binary image, V ⊆ {0, 1} (V = {0} or V = {1});
- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, …, 255}.

We consider 3 types of adjacency:

(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if q ∈ N4(p);

(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if q ∈ N8(p);

(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if

q ∈ N4(p), or

q ∈ ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.

Mixed adjacency is a modification of 8-adjacency. It is introduced to eliminate the ambiguities that often arise when 8-adjacency is used. Consider the example:

Binary image example, V = {1}:

0 1 1
0 1 0
0 0 1

The three pixels at the top (first line) in the above example show multiple (ambiguous) 8-adjacency, as indicated by the dashed lines in the original figure. This ambiguity is removed by using m-adjacency.

A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

(x0, y0), (x1, y1), …, (xn, yn)

where (x0, y0) = (x, y), (xn, yn) = (s, t), and the pixels (xi, yi) and (xi−1, yi−1) are adjacent for i = 1, 2, …, n.

The length of the path is n. If (x0, y0) = (xn, yn), the path is closed.

Depending on the type of adjacency considered, the paths are 4-, 8-, or m-paths.

Let S denote a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting only of pixels from S.

S is a connected set if there is a path in S between any 2 pixels in S.

Let R be a subset of pixels in an image. R is a region of the image if R is a connected set.

Two regions R1 and R2 are said to be adjacent if R1 ∪ R2 forms a connected set. Regions that are not adjacent are said to be disjoint. When referring to regions, only 4- and 8-adjacency are considered.

Suppose that an image contains K disjoint regions Rk, k = 1, …, K, none of which touches the image border. Let Ru denote the union of all the K regions, Ru = R1 ∪ … ∪ RK, and let (Ru)^c denote its complement.

We call all the points in Ru the foreground of the image, and the points in (Ru)^c the background of the image.

The boundary (border or contour) of a region R is the set of points that are adjacent to points in the complement of R, (R)^c. In other words, the border of a region is the set of pixels in the region that have at least one background neighbor. This definition is referred to as the inner border, to distinguish it from the notion of outer border, which is the corresponding border in the background.

Distance measures

For pixels p, q, and z with coordinates (x, y), (s, t), and (v, w), respectively, D is a distance function or metric if:

(a) D(p, q) ≥ 0, and D(p, q) = 0 iff p = q;

(b) D(p, q) = D(q, p);

(c) D(p, z) ≤ D(p, q) + D(q, z).

The Euclidean distance between p and q is defined as

De(p, q) = [(x − s)^2 + (y − t)^2]^(1/2)

The pixels q for which De(p, q) ≤ r are the points contained in a disk of radius r centered at (x, y).

The D4 distance (also called the city-block distance) between p and q is defined as

D4(p, q) = |x − s| + |y − t|

The pixels q for which D4(p, q) ≤ r form a diamond centered at (x, y). For example, the pixels with D4 ≤ 2 are:

        2
    2   1   2
2   1   0   1   2
    2   1   2
        2

The pixels with D4 = 1 are the 4-neighbors of (x, y).

The D8 distance (called the chessboard distance) between p and q is defined as

D8(p, q) = max(|x − s|, |y − t|)

The pixels q for which D8(p, q) ≤ r form a square centered at (x, y).

For example, the pixels with D8 ≤ 2 are:

2   2   2   2   2
2   1   1   1   2
2   1   0   1   2
2   1   1   1   2
2   2   2   2   2

The pixels with D8 = 1 are the 8-neighbors of (x, y).

The D4 and D8 distances are independent of any paths that might exist between p and q, because these distances involve only the coordinates of the points.
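The three metrics are one-liners; the sketch below (our code, not from the course) also regenerates the D4 ≤ 2 diamond by thresholding a distance grid:

```python
import numpy as np

def D_e(p, q):   # Euclidean distance
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def D_4(p, q):   # city-block distance
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def D_8(p, q):   # chessboard distance
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (2, 2)
print(D_e(p, q), D_4(p, q), D_8(p, q))   # 2.828..., 4, 2

# Reproduce the diamond: points with D4 <= 2 around the center of a 5 x 5 grid
d4 = np.fromfunction(lambda x, y: np.abs(x - 2) + np.abs(y - 2), (5, 5))
print((d4 <= 2).astype(int))
```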


Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

Consider two 2 × 2 images (arrays)

A = | a11  a12 |     B = | b11  b12 |
    | a21  a22 |         | b21  b22 |

Array product:

A .* B = | a11·b11   a12·b12 |
         | a21·b21   a22·b22 |

Matrix product:

A * B = | a11·b11 + a12·b21   a11·b12 + a12·b22 |
        | a21·b11 + a22·b21   a21·b12 + a22·b22 |

We assume array operations unless stated otherwise
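In NumPy the distinction is exactly the `*` versus `@` operators, as this small check shows:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A * B)   # array (elementwise) product: [[ 5 12], [21 32]]
print(A @ B)   # matrix product:              [[19 22], [43 50]]
```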


Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether a method is linear or nonlinear. Consider an operator H that produces an output image g from an input image f:

H[f(x, y)] = g(x, y)

H is said to be a linear operator if

H[a·f1(x, y) + b·f2(x, y)] = a·H[f1(x, y)] + b·H[f2(x, y)]

for any scalars a, b and any images f1, f2.

Example of a nonlinear operator: the maximum value of the pixels of an image, H[f] = max f(x, y). Take

f1 = | 0  2 |     f2 = | 6  5 |     a = 1, b = −1
     | 2  3 |          | 4  7 |

Then

max(a·f1 + b·f2) = max | −6  −3 | = −2
                       | −2  −4 |

while

a·max(f1) + b·max(f2) = 1·3 + (−1)·7 = −4

so the two sides differ and the max operator is nonlinear.
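The computation is easy to replay numerically; the sketch below also checks the sum operator, which does satisfy the linearity definition:

```python
import numpy as np

f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 5], [4, 7]])
a, b = 1, -1

# sum is linear: both sides give -15
print(np.sum(a * f1 + b * f2), a * np.sum(f1) + b * np.sum(f2))

# max is not: -2 on the left, -4 on the right
print(np.max(a * f1 + b * f2))
print(a * np.max(f1) + b * np.max(f2))
```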

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise:

g(x, y) = f(x, y) + η(x, y)

where f(x, y) is the noiseless image and η(x, y) is the noise, assumed uncorrelated and with zero average value.

For a random variable z with mean m, E[(z − m)^2] is the variance (E(·) is the expected value). The covariance of two random variables z1 and z2 is defined as E[(z1 − m1)(z2 − m2)]. The two random variables are uncorrelated when their covariance is 0.

Objective: reduce noise by averaging a set of noisy images gi(x, y) (a technique frequently used in image enhancement):

ḡ(x, y) = (1/K) Σ (i = 1..K) gi(x, y)

If the noise satisfies the properties stated above, we have

E{ḡ(x, y)} = f(x, y),   σ²_ḡ(x, y) = (1/K) σ²_η(x, y)

where E{ḡ(x, y)} is the expected value of ḡ, and σ²_ḡ(x, y) and σ²_η(x, y) are the variances of ḡ and η, respectively. The standard deviation (square root of the variance) at any point in the average image is

σ_ḡ(x, y) = (1/√K) σ_η(x, y)

As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because E{ḡ(x, y)} = f(x, y), this means that ḡ(x, y) approaches f(x, y) as the number of noisy images used in the averaging process increases.

An important application of image averaging is in the field of astronomy, where imaging under very low light levels frequently causes sensor noise to render single images virtually useless for analysis. Figure 3(a) shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels. Figures 3(b)-(f) show the results of averaging 5, 10, 20, 50, and 100 images, respectively.
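The 1/√K law is easy to observe on synthetic data; in this sketch (a flat hypothetical image, σ = 64 as in the example above) the measured standard deviation of the averaged image shrinks roughly as 64/√K:

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 100.0)          # hypothetical noiseless image

for K in (1, 5, 10, 20, 50, 100):
    noisy = f + rng.normal(0.0, 64.0, size=(K, 64, 64))  # K corrupted copies
    gbar = noisy.mean(axis=0)                            # averaged image
    print(K, round(gbar.std(), 1))    # approx. 64 / sqrt(K)
```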


Fig. 3 – Image of the galaxy pair NGC 3314 corrupted by additive Gaussian noise (upper left); results of averaging 5, 10, 20, 50, and 100 noisy images.

A frequent application of image subtraction is in the enhancement of differences between images.

Fig. 4 – (a) Infrared image of the Washington, D.C. area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images.

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between images (a) and (b). Black (0) values in Figure 4(c) indicate locations where there is no difference between images (a) and (b).
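The Figure 4 experiment can be replayed on any 8-bit array; here is a sketch with a small random stand-in image (the real infrared image is of course not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)  # stand-in for Fig. 4(a)
b = a & 0xFE                                           # least significant bit set to 0

diff = a.astype(np.int16) - b   # difference image: only values 0 and 1 remain
print(diff)
```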


Mask mode radiography:

g(x, y) = f(x, y) − h(x, y)

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images, called live images (denoted f(x, y)), of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium.

In g(x, y) we find the differences between h and f, as enhanced detail. With images captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries in the area being observed.

Fig. 5 – Angiography subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c), enhanced.

An important application of image multiplication (and division) is shading correction. Suppose that an imaging sensor produces images in the form

g(x, y) = f(x, y) · h(x, y)

where f(x, y) is the "perfect image" and h(x, y) is the shading function. When the shading function is known,

f(x, y) = g(x, y) / h(x, y)

If h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.
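A minimal sketch of the known-shading case (the shading ramp and the flat "perfect image" are invented for the demonstration):

```python
import numpy as np

h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
shading = 0.5 + 0.5 * xx / (w - 1)      # hypothetical shading function h(x, y)
perfect = np.full((h, w), 128.0)        # hypothetical perfect image f(x, y)

g = perfect * shading                   # what the sensor delivers
f_hat = g / shading                     # correction when h(x, y) is known
print(np.allclose(f_hat, perfect))      # True
```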


Fig. 6 – Shading correction: (a) shaded image of a tungsten filament, magnified approximately 130 times; (b) shading pattern; (c) corrected image.

Another use of image multiplication is in masking, also called region-of-interest (ROI) operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of the ROI can be arbitrary, although it usually is rectangular.

Fig. 7 – (a) digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b).

In practice, most images are displayed using 8 bits, so the image values are in the range [0, 255] (for TIFF and JPEG images, conversion to this range is automatic; the conversion depends on the system used). The difference of two images can produce an image with values in the range [−255, 255], and the addition of two images an image in the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure is to compute

f_m = f − min(f)

f_s = K · f_m / max(f_m)

which yields an image f_s with values in the range [0, K] (K = 255 for 8-bit images).
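The two-step scaling translates directly into code; note that the shift must come first, so that max(f_m) is taken on the nonnegative image:

```python
import numpy as np

def scale_to_range(f, K=255):
    # f_m = f - min(f); f_s = K * f_m / max(f_m), giving values in [0, K]
    fm = f - f.min()
    return (K * fm / fm.max()).astype(np.uint8)

d = np.array([[-255, 0], [128, 255]], dtype=np.int16)  # e.g. a difference image
print(scale_to_range(d))   # [[0 127], [191 255]]
```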


Spatial Operations

- operations performed directly on the pixels of a given image.

There are three categories of spatial operations:
• single-pixel operations;
• neighborhood operations;
• geometric spatial transformations.

Single-pixel operations
- change the values of intensity for the individual pixels: s = T(z), where z is the intensity of a pixel in the original image and s is the intensity of the corresponding pixel in the processed image.

Neighborhood operations

Let S_xy denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y) based on the values of the intensities of the points in S_xy. For example, if S_xy is a rectangular neighborhood of size m × n centered at (x, y), we can assign the new value of intensity by computing the average value of the pixels in S_xy:

g(x, y) = (1 / (m·n)) Σ ((r, c) ∈ S_xy) f(r, c)

The net effect is to perform local blurring in the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
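A direct (unoptimized) sketch of the m × n local average, using zero padding at the borders — one of several possible border policies:

```python
import numpy as np

def local_mean(f, m, n):
    # average over an m x n neighborhood S_xy, zero padding outside the image
    fp = np.pad(f.astype(float), ((m // 2, m // 2), (n // 2, n // 2)))
    out = np.zeros(f.shape, dtype=float)
    for dy in range(m):
        for dx in range(n):
            out += fp[dy:dy + f.shape[0], dx:dx + f.shape[1]]
    return out / (m * n)

f = np.zeros((7, 7)); f[3, 3] = 49.0
print(local_mean(f, 3, 3))   # the bright pixel is blurred into a 3 x 3 blob of 49/9
```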


Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image;
- these transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules).

A geometric transformation consists of 2 basic operations:
1. a spatial transformation of coordinates;
2. an intensity interpolation that assigns intensity values to the spatially transformed pixels.

The coordinate system transformation:

(x, y) = T{(v, w)}

(v, w) – pixel coordinates in the original image;
(x, y) – pixel coordinates in the transformed image.

For example, T{(v, w)} = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform:

[x  y  1] = [v  w  1] · T = [v  w  1] · | t11  t12  0 |
                                        | t21  t22  0 |
                                        | t31  t32  1 |

that is, x = t11·v + t21·w + t31 and y = t12·v + t22·w + t32.   (AT)

This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3×3 matrix equal to the matrix product of the scaling, rotation, and translation matrices from Table 1.

Table 1 – Affine transformations.


The preceding transformations relocate pixels of an image to new locations. To complete the process, we have to assign intensity values to those locations; this task is done using intensity interpolation (nearest neighbor, bilinear, or bicubic interpolation).

In practice, we can use equation (AT) in two basic ways.

Forward mapping: scan the pixels of the input image and, for each (v, w), compute the new spatial location (x, y) of the corresponding pixel in the new image, using (AT) directly.

Problems:
- intensity assignment when 2 or more pixels in the original image are transformed to the same location in the output image;
- some output locations may have no correspondent in the original image (no intensity assignment).

Inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image, (v, w) = T^(−1){(x, y)}; then interpolate among the nearest input pixels to determine the intensity of the output pixel value.

Inverse mappings are more efficient to implement than forward mappings, and they are used in numerous commercial implementations of spatial transformations (MATLAB, for example).
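The sketch below composes scaling, rotation, and translation in the row-vector convention of equation (AT) and applies the result by inverse mapping with nearest-neighbor interpolation (the function names and the toy image are ours):

```python
import numpy as np

def affine_matrix(scale=1.0, angle=0.0, tx=0.0, ty=0.0):
    # 3x3 matrix for [x y 1] = [v w 1] T: scale, then rotate, then translate
    c, s = np.cos(angle), np.sin(angle)
    S = np.array([[scale, 0, 0], [0, scale, 0], [0, 0, 1.0]])
    R = np.array([[c, s, 0], [-s, c, 0], [0, 0, 1.0]])
    Tr = np.array([[1, 0, 0], [0, 1, 0], [tx, ty, 1.0]])
    return S @ R @ Tr

def warp_inverse(f, T, out_shape):
    # for every output (x, y), fetch (v, w) = [x y 1] T^-1, nearest-neighbor fill
    Tinv = np.linalg.inv(T)
    out = np.zeros(out_shape, dtype=f.dtype)
    for x in range(out_shape[0]):
        for y in range(out_shape[1]):
            v, w, _ = np.array([x, y, 1.0]) @ Tinv
            vi, wi = int(round(v)), int(round(w))
            if 0 <= vi < f.shape[0] and 0 <= wi < f.shape[1]:
                out[x, y] = f[vi, wi]
    return out

f = np.arange(25, dtype=np.uint8).reshape(5, 5)
print(warp_inverse(f, affine_matrix(scale=2.0), (10, 10)))  # 2x zoom
```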


Image registration – aligning two or more images of the same scene.

In image registration, we have available the input and output images, but the specific transformation that produced the output image from the input is generally unknown. The problem is to estimate the transformation function and then use it to register the two images:

- it may be of interest to align (register) two or more images taken at approximately the same time, but using different imaging systems (e.g., an MRI scanner and a PET scanner);

- or to align images of a given location taken by the same instrument at different moments in time (e.g., satellite images).

One approach to solving the problem uses tie points (also called control points): corresponding points whose locations are known precisely in the input and reference images.

How to select tie points:
- interactively (by hand);
- using algorithms that try to detect these points automatically;
- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors; these objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points.

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points, both in the input image and in the reference image. A simple model based on a bilinear approximation is given by

x = c1·v + c2·w + c3·v·w + c4
y = c5·v + c6·w + c7·v·w + c8

where (v, w) and (x, y) are the coordinates of the tie points (we get an 8×8 linear system for the ci).
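Because the x and y equations share the same basis (v, w, vw, 1), the 8×8 system decouples into two 4×4 systems, which is how the sketch below solves it (the tie-point coordinates are invented):

```python
import numpy as np

# four tie points: (v, w) in the input image, (x, y) in the reference image
vw = np.array([[10.0, 10.0], [10.0, 200.0], [200.0, 10.0], [200.0, 200.0]])
xy = np.array([[12.0, 15.0], [8.0, 210.0], [205.0, 7.0], [198.0, 196.0]])

# x = c1 v + c2 w + c3 v w + c4 ;  y = c5 v + c6 w + c7 v w + c8
A = np.column_stack([vw[:, 0], vw[:, 1], vw[:, 0] * vw[:, 1], np.ones(4)])
c_x = np.linalg.solve(A, xy[:, 0])   # c1..c4
c_y = np.linalg.solve(A, xy[:, 1])   # c5..c8

v, w = 100.0, 100.0                  # map an arbitrary input point
print(c_x @ [v, w, v * w, 1.0], c_y @ [v, w, v * w, 1.0])
```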


When 4 tie points are insufficient to obtain satisfactory registration, an approach used frequently is to select a larger number of tie points and to subdivide the image into rectangular regions marked by groups of 4 tie points; on each subregion marked by 4 tie points, the transformation model described above is applied.

The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometric distortion.

(a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c).


Probabilistic Methods

z_i = the values of all possible intensities in an M × N digital image, i = 0, 1, …, L−1.

p(z_k) = the probability that the intensity level z_k occurs in the given image:

p(z_k) = n_k / (M·N)

where n_k is the number of times that intensity z_k occurs in the image (M·N is the total number of pixels in the image). Note that

Σ (k = 0..L−1) p(z_k) = 1

The mean (average) intensity of an image is given by

m = Σ (k = 0..L−1) z_k · p(z_k)

The variance of the intensities is

σ² = Σ (k = 0..L−1) (z_k − m)² · p(z_k)

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

μ_n(z) = Σ (k = 0..L−1) (z_k − m)^n · p(z_k)

(so μ_0(z) = 1, μ_1(z) = 0, μ_2(z) = σ².)

μ_3(z) > 0: the intensities are biased to values higher than the mean;

μ_3(z) < 0: the intensities are biased to values lower than the mean;


μ_3(z) = 0: the intensities are distributed approximately equally on both sides of the mean.

Fig. 1 – (a) Low contrast, (b) medium contrast, (c) high contrast.

Figure 1(a) – standard deviation 14.3 (variance = 204.5);
Figure 1(b) – standard deviation 31.6 (variance = 998.6);
Figure 1(c) – standard deviation 49.2 (variance = 2420.6).
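These statistics can be computed either from the normalized histogram p(z_k) or directly from the pixels; the sketch below (random test image) confirms that the two routes agree:

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(100, 100))

L = 256
n_k = np.bincount(img.ravel(), minlength=L)   # n_k: occurrences of intensity z_k
p = n_k / img.size                            # p(z_k) = n_k / (M N), sums to 1
z = np.arange(L)

m = np.sum(z * p)                 # mean intensity
var = np.sum((z - m) ** 2 * p)    # variance, a measure of contrast
mu3 = np.sum((z - m) ** 3 * p)    # third moment, sign gives the intensity bias
print(m, np.sqrt(var), mu3)
print(np.isclose(m, img.mean()), np.isclose(var, img.var()))  # True True
```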


Intensity Transformations and Spatial Filtering

g(x, y) = T[f(x, y)]

f(x, y) – input image; g(x, y) – output image; T – an operator on f, defined over a neighborhood of (x, y).

The neighborhood of the point (x, y), S_xy, usually is rectangular, centered on (x, y), and much smaller in size than the image.

- spatial filtering: the operator T (the neighborhood together with the operation applied on it) is called a spatial filter (spatial mask, kernel, template, or window).

When S_xy has size 1 × 1, T becomes an intensity (gray-level or mapping) transformation function,

s = T(r)

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2, left: T produces an output image of higher contrast than the original, by darkening the intensity levels below k and brightening the levels above k – this technique is called contrast stretching.

Fig. 2 – Intensity transformation functions: left – contrast stretching; right – thresholding function.

Figure 2, right: T produces a binary output image; a mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L−1] is obtained using the function

s = T(r) = L − 1 − r

- the equivalent of a photographic negative;
- a technique suited for enhancing white or gray detail embedded in dark regions of an image.

Original image (left); negative image (right).

Log Transformations:

s = T(r) = c · log(1 + r),   c a positive constant, r ≥ 0

Some basic intensity transformation functions.

This transformation maps a narrow range of low intensity values in the input into a wider range of output values; an operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values. The opposite is true of the inverse log transformation. The log function compresses the dynamic range of images with large variations in pixel values.

Figure 4(a) – intensity values in the range 0 to 1.5 × 10^6.
Figure 4(b) – log transformation of Figure 4(a) with c = 1 – range 0 to 6.2.

Fig. 4 – (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.

Power-Law (Gamma) Transformations

s = T(r) = c · r^γ,   c, γ positive constants (sometimes written s = c · (r + ε)^γ, to account for a measurable offset when the input is zero)

Plots of the gamma transformation for different values of γ (c = 1).

Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. The curves with γ > 1 have the opposite effect of those generated with γ < 1.

c = γ = 1: the identity transformation.

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
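A compact sketch of the three point transforms discussed so far (negative, log, gamma); the normalization in the gamma transform and the base-10 log are our choices, the latter matching the 0-to-6.2 range quoted for Figure 4:

```python
import numpy as np

def negative(r, L=256):
    return (L - 1) - r                    # s = L - 1 - r

def log_transform(r, c=1.0):
    return c * np.log10(1.0 + r)          # s = c log(1 + r); log10(1 + 1.5e6) ~ 6.2

def gamma_transform(r, gamma, c=1.0, L=256):
    rn = r / (L - 1.0)                    # normalize to [0, 1] before the power law
    return (L - 1) * c * rn ** gamma      # s = c r^gamma, rescaled to [0, L-1]

r = np.arange(0, 256, dtype=np.uint8)
print(negative(r)[:4])                            # 255 254 253 252
print(log_transform(np.array([0.0, 1.5e6])))      # [0.0, ~6.18]
print(gamma_transform(r, 0.4)[:4].round(1))       # dark values are expanded
```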


(a) aerial image; (b)-(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively.

Piecewise-Linear Transformation Functions

Contrast stretching
- a process that expands the range of intensity levels in an image so that it spans the full intensity range of the recording medium or display device.

Fig. 5 – (a) transformation function; (b) low-contrast image; (c) result of contrast stretching; (d) result of thresholding.

The transformation function has the form

         | (s1 / r1) · r,                                  r ∈ [0, r1]
T(r) =   | s1 + ((s2 − s1) / (r2 − r1)) · (r − r1),        r ∈ [r1, r2]
         | s2 + ((L − 1 − s2) / (L − 1 − r2)) · (r − r2),  r ∈ [r2, L − 1]

If r1 = s1 and r2 = s2: the identity transformation (no change).

If r1 = r2, s1 = 0, s2 = L − 1: a thresholding function.

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c) – contrast stretching, obtained by setting the parameters (r1, s1) = (r_min, 0) and (r2, s2) = (r_max, L − 1), where r_min and r_max denote the minimum and maximum gray levels in the image, respectively. The transformation function stretched the levels linearly from their original range to the full range [0, L − 1].

Figure 5(d) – thresholding, using (r1, s1) = (m, 0) and (r2, s2) = (m, L − 1), where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.
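A sketch of the piecewise-linear stretch (assuming 0 < r1 < r2 < L−1), reproducing the Figure 5(c) setting on an invented low-contrast image:

```python
import numpy as np

def stretch(f, r1, s1, r2, s2, L=256):
    # piecewise-linear transformation through (r1, s1) and (r2, s2)
    f = f.astype(float)
    out = np.empty_like(f)
    lo, hi = f <= r1, f >= r2
    mid = ~lo & ~hi
    out[lo] = s1 / r1 * f[lo] if r1 > 0 else s1
    out[mid] = s1 + (s2 - s1) / (r2 - r1) * (f[mid] - r1)
    out[hi] = s2 + (L - 1 - s2) / (L - 1 - r2) * (f[hi] - r2)
    return out.astype(np.uint8)

rng = np.random.default_rng(3)
f = rng.integers(80, 160, size=(64, 64))          # low-contrast image
g = stretch(f, f.min(), 0, f.max(), 255)          # Fig. 5(c) parameter choice
print(f.min(), f.max(), "->", g.min(), g.max())   # 80 159 -> 0 255
```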


Intensity-level slicing
- highlighting a specific range of intensities in an image.

There are two approaches to intensity-level slicing (the two transformation functions are described after Figure 6):

1. display in one value (white, for example) all the intensities in the range of interest, and in another (say, black) all other intensities;

2. brighten (or darken) the desired range of intensities, but leave all other intensities in the image unchanged.

Figure 6 (left): an aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique, for a band near the top of the scale of intensities; this type of enhancement produces a binary image.

First transformation function: highlights intensity range [A, B] and reduces all other intensities to a lower level.

Second transformation function: highlights range [A, B] and preserves all other intensities.

The binary image is useful for studying the shape of the flow of the contrast medium (to detect blockages, etc.).

In Figure 6 (right), the second technique was used: a band of intensities around the mean intensity of the image was set to black; the other intensities remain unchanged.

Fig. 6 – Aortic angiogram and intensity-sliced versions.

Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2.

This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit; plane 8 – the highest-order bit).

The binary image for the 8th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all the intensities between 0 and 127 to 0, and all levels between 128 and 255 to 1. The bit-slicing technique is useful for analyzing the relative importance of each bit in the image – it helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
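A sketch of bit-plane extraction, also verifying that the planes, weighted by powers of two, reassemble the image exactly (random stand-in data):

```python
import numpy as np

rng = np.random.default_rng(4)
img = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)

# plane 1 = lowest-order bit, ..., plane 8 = highest-order bit
planes = [(img >> (b - 1)) & 1 for b in range(1, 9)]

print(planes[7])                      # 8th bit plane ...
print((img >= 128).astype(np.uint8))  # ... equals thresholding at 128

# the image is the weighted sum of its bit planes
recon = sum(p.astype(np.uint16) << b for b, p in enumerate(planes))
print(np.array_equal(recon, img))     # True
```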


transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 47: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image acquisition - may involve preprocessing such as scaling

Image enhancement

• manipulating an image so that the result is more suitable than the original for a specific operation

• enhancement is problem oriented

• there is no general 'theory' of image enhancement

• enhancement uses subjective methods for image improvement

• enhancement is based on human subjective preferences regarding what is a "good" enhancement result


Image restoration

• improving the appearance of an image

• restoration is objective - the techniques for restoration are based on mathematical or probabilistic models of image degradation

Color image processing

• fundamental concepts in color models

• basic color processing in a digital domain

Wavelets and multiresolution processing

• representing images in various degrees of resolution


Compression

reducing the storage required to save an image or the bandwidth required to transmit it

Morphological processing

• tools for extracting image components that are useful in the representation and description of shape

• a transition from processes that output images to processes that output image attributes


Segmentation

• partitioning an image into its constituent parts or objects

• autonomous segmentation is one of the most difficult tasks of DIP

• the more accurate the segmentation, the more likely recognition is to succeed

Representation and description (almost always follows segmentation)

• segmentation produces either the boundary of a region or all the points in the region itself

• converting the data produced by segmentation to a form suitable for computer processing


• boundary representation: the focus is on external shape characteristics, such as corners or inflections

• complete region: the focus is on internal properties, such as texture or skeletal shape

• description is also called feature extraction - extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another

Object recognition

the process of assigning a label (e.g. "vehicle") to an object based on its descriptors

Knowledge database


Simplified diagram of a cross section of the human eye


Three membranes enclose the eye: the cornea and sclera outer cover, the choroid, and the retina.

The cornea is a tough, transparent tissue that covers the anterior surface of the eye.

Continuous with the cornea, the sclera is an opaque membrane that encloses the remainder of the optic globe.

The choroid lies directly below the sclera. This membrane contains a network of blood vessels (the major source of nutrition of the eye). The choroid is pigmented and helps reduce the amount of light entering the eye. The choroid is divided (at its anterior extreme) into the ciliary body and the iris. The iris contracts and expands to control the amount of light.


The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to the ciliary body (60-70% water, about 6% fat, the rest mostly protein). The lens is slightly yellow and absorbs approximately 8% of the visible light (infrared and ultraviolet light are absorbed by proteins in the lens).

The innermost membrane is the retina. When the eye is properly focused, light from an object outside the eye is imaged on the retina. Vision is possible because of the distribution of discrete light receptors on the surface of the retina: cones and rods (6-7 million cones, 75-150 million rods).

Cones: located in the central part of the retina (fovea); sensitive to colors; vision of detail; each cone is linked to its own nerve; cone vision = photopic or bright-light vision.

Fovea = the place where the image of the object of interest falls


Rods: distributed over all the retina surface; several rods are connected to a single nerve; not specialized in detail vision; serve to give a general, overall picture of the field of view; not involved in color vision; sensitive to low levels of illumination.

Blind spot = region without receptors


Image formation in the eye

Ordinary photographic camera: the lens has a fixed focal length; focusing at various distances is done by modifying the distance between the lens and the image plane (where the film or imaging chip is located).

Human eye: the distance between the lens and the retina (the imaging region) is fixed; the focal length needed to achieve proper focus is obtained by varying the shape of the lens (the fibers in the ciliary body accomplish this, flattening or thickening the lens for distant or near objects, respectively).

distance between lens and retina along the visual axis = 17 mm

range of focal length = 14 mm to 17 mm


Illustration of the Mach band effect. Perceived intensity is not a simple function of actual intensity.


All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter


Optical illusions


Color-word figure (Romanian words with translations): ALBASTRU = BLUE, VERDE = GREEN, GALBEN = YELLOW, ROSU = RED, PORTOCALIU = ORANGE, GALBEN = YELLOW, VISINIU = BURGUNDY, ALB = WHITE


Light

• achromatic or monochromatic light - light that is void of color; the attribute of such light is its intensity, or amount

gray level is used to describe monochromatic intensity, because it ranges from black, to grays, and finally to white

• chromatic light spans the electromagnetic energy spectrum from approximately 0.43 to 0.79 μm

Quantities that describe the quality of a chromatic light source:

radiance - the total amount of energy that flows from the light source; it is usually measured in watts (W)

luminance - measured in lumens (lm); gives a measure of the amount of energy an observer perceives from a light source


For example, light emitted from a source operating in the far infrared region of the spectrum could have significant energy (radiance), but an observer would hardly perceive it: its luminance would be almost zero.

brightness - a subjective descriptor of light perception that is practically impossible to measure. It embodies the achromatic notion of intensity and is one of the key factors in describing color sensation.


the physical meaning is determined by the source of the image

f : D → ℝ,  f = f(x, y)

Image generated from a physical process: f(x, y) - proportional to the energy radiated by the physical source

f(x, y) - characterized by two components:

i(x, y) = illumination component: the amount of source illumination incident on the scene being viewed

r(x, y) = reflectance component: the amount of illumination reflected by the objects in the scene

f(x, y) = i(x, y) · r(x, y),   0 < i(x, y) < ∞,   0 < r(x, y) < 1


r(x, y) = 0 - total absorption; r(x, y) = 1 - total reflectance

i(x, y) - determined by the illumination source

r(x, y) - determined by the characteristics of the imaged objects

In practice:

L_min ≤ l = f(x_0, y_0) ≤ L_max,   with L_min = i_min · r_min and L_max = i_max · r_max

The interval [L_min, L_max] is called the gray (or intensity) scale

indoor values, without additional illumination: L_min ≈ 10, L_max ≈ 1000

Common practice is to shift the interval to [0, L-1]: l = 0 is black, l = L-1 is white; intermediate values are shades of gray


Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed

scene

converting a continuous image f to digital form

- digitizing the coordinate values (x, y) is called sampling

- digitizing the amplitude values f(x, y) is called quantization


Continuous image projected onto a sensor array; result of image sampling and quantization


Representing Digital Images

(x, y), x = 0, 1, …, M-1, y = 0, 1, …, N-1 - spatial variables or spatial coordinates

f(x, y) =

[ f(0, 0)      f(0, 1)      …   f(0, N-1)
  f(1, 0)      f(1, 1)      …   f(1, N-1)
  …            …                …
  f(M-1, 0)    f(M-1, 1)    …   f(M-1, N-1) ]

Each element of this matrix is called an image element, or pixel.

In matrix notation:

A =

[ a_{0,0}      a_{0,1}      …   a_{0,N-1}
  a_{1,0}      a_{1,1}      …   a_{1,N-1}
  …            …                …
  a_{M-1,0}    a_{M-1,1}    …   a_{M-1,N-1} ]

with a_{i,j} = f(x = i, y = j) = f(i, j)

f(0, 0) - the upper left corner of the image


M, N - positive integers; number of intensity levels: L = 2^k

a_{i,j} ∈ [0, L-1]

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system. Upper limit - determined by saturation; lower limit - noise.


Number of bits required to store a digitized image:

b = M × N × k;  for M = N:  b = N² · k

When an image can have 2^k intensity levels, the image is referred to as a k-bit image; an image with 256 discrete intensity values is an 8-bit image.
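As a quick sanity check of this formula, here is a one-function Python sketch (the sizes below are arbitrary examples, not values from the course):

```python
def storage_bits(M, N, k):
    """Bits needed to store an M x N image with 2**k intensity levels: b = M*N*k."""
    return M * N * k

# A 1024 x 1024, 8-bit image: 8,388,608 bits = 1,048,576 bytes = 1 MiB.
print(storage_bits(1024, 1024, 8) // 8)   # 1048576 bytes
```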


Spatial and Intensity Resolution

Spatial resolution - the smallest discernible detail in an image

Measures: line pairs per unit distance; dots (pixels) per unit distance

Image resolution = the largest number of discernible line pairs per unit distance

(e.g., 100 line pairs per mm)

Dots per unit distance are commonly used in printing and publishing

In the US the measure is expressed in dots per inch (dpi)

(newspapers are printed at 75 dpi, glossy brochures at 175 dpi)

Intensity resolution - the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations:

L = 2^k; most common: k = 8

Intensity resolution in practice is given by k (number of bits used to quantize intensity)


Fig 1. Reducing spatial resolution: 1250 dpi (upper left), 300 dpi (upper right), 150 dpi (lower left), 72 dpi (lower right)


Reducing the number of gray levels: 256, 128, 64, 32


Reducing the number of gray levels: 16, 8, 4, 2


Image Interpolation - used in zooming, shrinking, rotating, and geometric corrections

Shrinking and zooming (image resizing) are image resampling methods

Interpolation is the process of using known data to estimate values at unknown locations

Suppose we have an image of size 500 × 500 pixels that has to be enlarged 1.5 times, to 750 × 750 pixels. One way to do this is to create an imaginary 750 × 750 grid with the same spacing as the original and then shrink it so that it fits exactly over the original image. The pixel spacing in the 750 × 750 grid will be less than in the original image.

Problem: assignment of intensity levels in the new 750 × 750 grid

Nearest neighbor interpolation: assign to every point in the new grid (750 × 750) the intensity of the closest pixel (nearest neighbor) from the old/original grid (500 × 500).

This technique has the tendency to produce undesirable effects, like severe distortion of straight edges.


Bilinear interpolation - assign to the new (x, y) location the intensity

v(x, y) = a·x + b·y + c·x·y + d

where the four coefficients are determined from the 4 equations in 4 unknowns that can

be written using the 4 nearest neighbors of point (x y)

Bilinear interpolation gives much better results than nearest neighbor interpolation with a

modest increase in computational effort
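To make the mechanics concrete, here is a minimal Python/NumPy sketch of bilinear resizing (an illustration, not an optimized implementation; it assumes output dimensions of at least 2):

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Resize a 2-D grayscale image with bilinear interpolation (inverse mapping)."""
    in_h, in_w = img.shape
    out = np.empty((out_h, out_w), dtype=float)
    for y in range(out_h):
        for x in range(out_w):
            # Map the output pixel back into input coordinates.
            v = y * (in_h - 1) / (out_h - 1)
            w = x * (in_w - 1) / (out_w - 1)
            v0, w0 = int(v), int(w)
            v1, w1 = min(v0 + 1, in_h - 1), min(w0 + 1, in_w - 1)
            dv, dw = v - v0, w - w0
            # Weighted average of the 4 nearest input pixels.
            out[y, x] = ((1 - dv) * (1 - dw) * img[v0, w0]
                         + (1 - dv) * dw * img[v0, w1]
                         + dv * (1 - dw) * img[v1, w0]
                         + dv * dw * img[v1, w1])
    return out
```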

Bicubic interpolation - assign to the new (x, y) location an intensity that involves the 16 nearest neighbors of the point:

v(x, y) = Σ_{i=0..3} Σ_{j=0..3} c_{ij} · x^i · y^j

The coefficients c_{ij} are obtained by solving a 16×16 linear system, formed by matching Σ_{i=0..3} Σ_{j=0..3} c_{ij} x^i y^j to the intensity levels of the 16 nearest neighbors of (x, y).


Generally bicubic interpolation does a better job of preserving fine detail than the

bilinear technique. Bicubic interpolation is the standard used in commercial image editing

programs such as Adobe Photoshop and Corel Photopaint

Figure 2 (a) is the same as Fig 1 (d) which was obtained by reducing the resolution of

the 1250 dpi image in Fig 1(a) to 72 dpi (the size shrank from 3692 × 2812 to 213 × 162) and then zooming the reduced image back to its original size. To generate Fig 1(d), nearest

neighbor interpolation was used (both for shrinking and zooming)

Figures 2(b) and (c) were generated using the same steps, but using bilinear and bicubic interpolation, respectively. Figures 2(d)-(f) were obtained by reducing the resolution from 1250 dpi to 150 dpi (instead of 72 dpi).


Fig 2 - Interpolation examples for zooming and shrinking (nearest neighbor, bilinear, bicubic)


Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors:

(x+1, y), (x-1, y) - horizontal;  (x, y+1), (x, y-1) - vertical

This set of pixels, called the 4-neighbors of p, is denoted by N4(p).

The 4 diagonal neighbors of p have coordinates

(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)

and are denoted ND(p)

The horizontal vertical and diagonal neighbors are called the 8-neighbors of p denoted

N8 (p)

If (x, y) is on the border of the image, some of the neighbor locations in ND(p) and N8(p)

fall outside the image


Adjacency, Connectivity, Regions, Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image, V ⊆ {0, 1} (e.g., V = {1} when adjacency of 1-valued pixels is meant)

- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, 1, …, 255}

We consider 3 types of adjacency:

(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if q ∈ N4(p)

(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if q ∈ N8(p)

(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if

    q ∈ N4(p), or

    q ∈ ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency. It is introduced to eliminate the ambiguities that often arise when 8-adjacency is used. Consider the example:


binary image, V = {1} - the same arrangement of pixels shown three times:

0 1 1     0 1 1     0 1 1
0 1 0     0 1 0     0 1 0
0 0 1     0 0 1     0 0 1

The three pixels at the top (first line) in the above example show multiple (ambiguous) 8-adjacency, as indicated by the dashed lines. This ambiguity is removed by using m-adjacency.


A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

(x_0, y_0), (x_1, y_1), …, (x_n, y_n)

where (x_0, y_0) = (x, y), (x_n, y_n) = (s, t), and the pixels (x_i, y_i) and (x_{i-1}, y_{i-1}) are adjacent for i = 1, 2, …, n.

The length of the path is n. If (x_0, y_0) = (x_n, y_n), the path is closed.

Depending on the type of adjacency considered the paths are 4- 8- or m-paths

Let S denote a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting only of pixels from S.

S is a connected set if there is a path in S between any 2 pixels of S.

Let R be a subset of pixels in an image. R is a region of the image if R is a connected set.

Two regions R1 and R2 are said to be adjacent if R1 ∪ R2 forms a connected set. Regions that are not adjacent are said to be disjoint. When referring to regions, only 4- and

8-adjacency are considered


Suppose that an image contains K disjoint regions R_k, k = 1, 2, …, K, none of which touches the image border.

Let R_u = R_1 ∪ R_2 ∪ … ∪ R_K, and let (R_u)^c denote its complement.

We call all the points in R_u the foreground of the image, and the points in (R_u)^c the background of the image.

The boundary (border or contour) of a region R is the set of points that are adjacent to points in the complement of R, (R)^c. In other words, the border of a region is the set of pixels in the region that have at least one background neighbor. This definition is referred to as the inner border, to distinguish it from the notion of outer border, which is the corresponding border in the background.


Distance measures

For pixels p, q, and z, with coordinates (x, y), (s, t), and (v, w), respectively, D is a distance function (or metric) if

(a) D(p, q) ≥ 0, with D(p, q) = 0 iff p = q

(b) D(p, q) = D(q, p)

(c) D(p, z) ≤ D(p, q) + D(q, z)

The Euclidean distance between p and q is defined as

D_e(p, q) = [(x - s)² + (y - t)²]^(1/2)

The pixels q for which D_e(p, q) ≤ r are the points contained in a disk of radius r centered at (x, y)


The D4 distance (also called city-block distance) between p and q is defined as

D_4(p, q) = |x - s| + |y - t|

The pixels q for which D_4(p, q) ≤ r form a diamond centered at (x, y). For example, the pixels with D_4 distance ≤ 2:

        2
    2   1   2
2   1   0   1   2
    2   1   2
        2

The pixels with D_4 = 1 are the 4-neighbors of (x, y).

The D8 distance (called the chessboard distance) between p and q is defined as

D_8(p, q) = max(|x - s|, |y - t|)

The pixels q for which D_8(p, q) ≤ r form a square centered at (x, y)


For example, the pixels with D_8 distance ≤ 2:

2   2   2   2   2
2   1   1   1   2
2   1   0   1   2
2   1   1   1   2
2   2   2   2   2

The pixels with D8 = 1 are the 8-neighbors of (x y)

D4 and D8 distances are independent of any paths that might exist between p and q

because these distances involve only the coordinates of the points
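A small Python sketch of the three metrics, for pixels p = (x, y) and q = (s, t):

```python
import math

def d_euclid(p, q):
    # D_e(p, q) = sqrt((x - s)^2 + (y - t)^2)
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d4(p, q):
    # City-block distance: |x - s| + |y - t|
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):
    # Chessboard distance: max(|x - s|, |y - t|)
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (2, 1)
print(d_euclid(p, q), d4(p, q), d8(p, q))   # 2.236..., 3, 2
```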


Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

Consider the 2×2 images

A = [ a_11  a_12 ; a_21  a_22 ]   and   B = [ b_11  b_12 ; b_21  b_22 ]

Array product:

[ a_11·b_11    a_12·b_12
  a_21·b_21    a_22·b_22 ]

Matrix product:

[ a_11·b_11 + a_12·b_21    a_11·b_12 + a_12·b_22
  a_21·b_11 + a_22·b_21    a_21·b_12 + a_22·b_22 ]

We assume array operations unless stated otherwise
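In NumPy (used throughout these sketches as an assumed illustration library), the two products are spelled differently:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A * B)   # array (element-wise) product: [[ 5 12], [21 32]]
print(A @ B)   # matrix product: [[19 22], [43 50]]
```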


Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether the method is linear or nonlinear.

H[f(x, y)] = g(x, y)

H is said to be a linear operator if

H[a·f_1(x, y) + b·f_2(x, y)] = a·H[f_1(x, y)] + b·H[f_2(x, y)]

for any scalars a, b and any images f_1, f_2.

Example of a nonlinear operator: the maximum value of the pixels of image f,

H[f] = max{ f(x, y) }

Take

f_1 = [ 0  2 ; 2  3 ],   f_2 = [ 6  5 ; 4  7 ],   a = 1,   b = -1

max{ a·f_1 + b·f_2 } = max{ [ -6  -3 ; -2  -4 ] } = -2

a·max{ f_1 } + b·max{ f_2 } = 1·3 + (-1)·7 = -4 ≠ -2

so the max operator is nonlinear.

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise:

g(x, y) = f(x, y) + η(x, y)

f(x, y) - the noiseless image; η(x, y) - the noise, uncorrelated and with 0 average value

For a random variable z with mean m, E[(z - m)²] is the variance (E(·) is the expected value). The covariance of two random variables z_1 and z_2 is defined as E[(z_1 - m_1)(z_2 - m_2)]. The two random variables are uncorrelated when their covariance is 0.


Objective: reduce noise by averaging a set of noisy images g_i(x, y) (a technique frequently used in image enhancement):

ḡ(x, y) = (1/K) · Σ_{i=1..K} g_i(x, y)

If the noise satisfies the properties stated above, we have

E{ḡ(x, y)} = f(x, y)   and   σ²_ḡ(x, y) = (1/K) · σ²_η(x, y)

where E{ḡ(x, y)} is the expected value of ḡ, and σ²_ḡ(x, y) and σ²_η(x, y) are the variances of ḡ and η, respectively. The standard deviation (square root of the variance) at any point in the average image is

σ_ḡ(x, y) = (1/√K) · σ_η(x, y)


As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because E{ḡ(x, y)} = f(x, y), this means that ḡ(x, y) approaches f(x, y) as the number of noisy images used in the averaging process increases.

An important application of image averaging is in the field of astronomy, where imaging under very low light levels frequently causes sensor noise to render single images virtually useless for analysis. Figure 3(a) shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels. Figures 3(b)-(f) show the result of averaging 5, 10, 20, 50, and 100 noisy images, respectively.
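The effect is easy to reproduce numerically; a minimal NumPy sketch, with a synthetic constant image standing in for f(x, y):

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.full((256, 256), 100.0)          # noiseless image f(x, y)

for K in (1, 5, 10, 20, 50, 100):
    # K noisy observations g_i = f + eta, with eta ~ N(0, 64^2)
    g = f + rng.normal(0.0, 64.0, size=(K, 256, 256))
    g_bar = g.mean(axis=0)              # averaged image
    # Measured spread shrinks roughly as 64 / sqrt(K)
    print(K, round(g_bar.std(), 2), round(64 / np.sqrt(K), 2))
```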


Fig 3. Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner); results of averaging 5, 10, 20, 50, and 100 noisy images


A frequent application of image subtraction is in the enhancement of differences between

images

Fig 4. (a) Infrared image of the Washington, DC area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between images (a) and (b): black (0) values in Figure 4(c) indicate locations where there is no difference between images (a) and (b).
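A sketch of the same experiment in NumPy (the random array is only a stand-in for the infrared image):

```python
import numpy as np

img = np.random.default_rng(1).integers(0, 256, (64, 64), dtype=np.uint8)  # stand-in image

lsb_zeroed = img & 0xFE                 # clear the least-significant bit of every pixel
diff = img.astype(int) - lsb_zeroed     # difference is 0 or 1 at each pixel

# Scaling the difference to [0, 255] makes the changed locations visible.
diff_display = (diff * 255).astype(np.uint8)
print(np.unique(diff))                  # [0 1]
```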


Mask mode radiography: g(x, y) = f(x, y) - h(x, y)

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images called live images (denoted f(x, y)) of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium.

In g(x, y) we can find the differences between h and f, as enhanced detail.

Because the images are captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries in the area being observed.


Fig 5 - Angiography subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c), enhanced


An important application of image multiplication (and division) is shading correction.

Suppose that an imaging sensor produces images in the form

g(x, y) = f(x, y) · h(x, y)

f(x, y) - the "perfect image"; h(x, y) - the shading function

When the shading function is known:

f(x, y) = g(x, y) / h(x, y)

When h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image.
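A minimal sketch of the division step (the Gaussian-shaped shading array below is a made-up stand-in for an estimated shading function):

```python
import numpy as np

rng = np.random.default_rng(2)
perfect = rng.integers(50, 200, (128, 128)).astype(float)   # stand-in f(x, y)

# Synthetic shading function h(x, y): bright center, dark corners.
yy, xx = np.mgrid[0:128, 0:128]
shading = 0.4 + 0.6 * np.exp(-((xx - 64)**2 + (yy - 64)**2) / (2 * 50.0**2))

shaded = perfect * shading                    # g = f * h (what the sensor delivers)
corrected = shaded / (shading + 1e-12)        # f ~ g / h, guarding against division by 0
print(np.allclose(corrected, perfect))        # True
```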


Fig 6. Shading correction: (a) shaded image of a tungsten filament, magnified 130 times; (b) shading pattern; (c) corrected image


Another use of image multiplication is in masking, also called region of interest (ROI) operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of an ROI can be arbitrary, but it is usually rectangular.
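The operation itself is a single multiplication; a sketch (the rectangle coordinates are arbitrary):

```python
import numpy as np

image = np.random.default_rng(3).integers(0, 256, (100, 100), dtype=np.uint8)

mask = np.zeros_like(image)      # 0's everywhere...
mask[20:60, 30:80] = 1           # ...and 1's inside a rectangular ROI

roi_only = image * mask          # pixels outside the ROI become 0 (black)
```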

Fig 7. (a) digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b)


In practice, most images are displayed using 8 bits, so the image values are expected in the range [0, 255]. For TIFF and JPEG images, conversion to this range is automatic, but the conversion depends on the system used. The difference of two images can produce an image with values in the range [-255, 255]; the addition of two images, values in the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure: compute

f_m = f - min(f)

f_s = K · f_m / max(f_m)

which yields values in the full range [0, K] (K = 255 for 8-bit displays).
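A sketch of that rescaling, useful whenever a subtraction or addition has pushed values outside [0, 255]:

```python
import numpy as np

def rescale_intensity(f, K=255):
    """Shift to a 0 minimum, then scale so the maximum is K (assumes f is not constant)."""
    fm = f - f.min()
    return (K * fm / fm.max()).astype(np.uint8)

diff = np.array([[-255, 0], [100, 255]], dtype=float)   # e.g. a difference image
print(rescale_intensity(diff))
```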


Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the intensity values of individual pixels: s = T(z)

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image


Neighborhood operations

Let S_xy denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y) based on the values of the intensities of the points in S_xy. For example, if S_xy is a rectangular neighborhood of size m × n centered at (x, y), we can assign the new value of intensity by computing the average value of the pixels in S_xy:

g(x, y) = (1 / (m·n)) · Σ_{(r, c) ∈ S_xy} f(r, c)

The net effect is to perform local blurring of the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
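A direct (unoptimized) sketch of the m × n neighborhood average, handling borders by clipping the window at the image edges:

```python
import numpy as np

def local_average(f, m, n):
    """g(x, y) = mean of f over an m x n window centered at (x, y) (clipped at borders)."""
    M, N = f.shape
    g = np.empty_like(f, dtype=float)
    for x in range(M):
        for y in range(N):
            r0, r1 = max(0, x - m // 2), min(M, x + m // 2 + 1)
            c0, c1 = max(0, y - n // 2), min(N, y + n // 2 + 1)
            g[x, y] = f[r0:r1, c0:c1].mean()
    return g
```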


Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules)

A geometric transformation consists of 2 basic operations:

1. a spatial transformation of coordinates

2. intensity interpolation, which assigns intensity values to the spatially transformed pixels

The coordinate system transformation: (x, y) = T[(v, w)]

(v, w) - pixel coordinates in the original image

(x, y) - pixel coordinates in the transformed image


T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform:

[x  y  1] = [v  w  1] · T = [v  w  1] ·  [ t_11   t_12   0
                                           t_21   t_22   0
                                           t_31   t_32   1 ]

that is,

x = t_11·v + t_21·w + t_31      y = t_12·v + t_22·w + t_32      (AT)

This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3×3 matrix equal to the matrix product of the scaling, rotation, and translation matrices from Table 1.
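A sketch of composing such matrices under the row-vector convention of (AT) (the helper names and parameter values below are invented for illustration):

```python
import numpy as np

def scale(cx, cy):
    return np.array([[cx, 0, 0], [0, cy, 0], [0, 0, 1.0]])

def rotate(theta):
    c, s = np.cos(theta), np.sin(theta)
    # Rotation about the origin, row-vector convention [x y 1] = [v w 1] @ R.
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1.0]])

def translate(tx, ty):
    return np.array([[1, 0, 0], [0, 1, 0], [tx, ty, 1.0]])

# Resize, rotate, then move: one combined 3x3 matrix.
T = scale(1.5, 1.5) @ rotate(np.pi / 6) @ translate(20, 10)

v, w = 100.0, 50.0
x, y, _ = np.array([v, w, 1.0]) @ T    # forward mapping of one point
```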


Table 1. Affine transformations


The preceding transformations relocate pixels on an image to new locations. To complete the process, we have to assign intensity values to those locations. This task is done by using intensity interpolation (nearest neighbor, bilinear, or bicubic interpolation).

In practice, we can use equation (AT) in two basic ways:

forward mapping: scan the pixels of the input image (v, w) and compute the new spatial location (x, y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)


inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image as (v, w) = T^(-1)[(x, y)]

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB, for example).
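A compact sketch of inverse mapping with nearest-neighbor interpolation (the 2× zoom matrix is a made-up example transform):

```python
import numpy as np

def warp_inverse(img, T, out_shape):
    """Inverse mapping: for each output pixel (x, y), fetch the nearest input pixel."""
    Tinv = np.linalg.inv(T)
    out = np.zeros(out_shape, dtype=img.dtype)
    for x in range(out_shape[0]):
        for y in range(out_shape[1]):
            v, w, _ = np.array([x, y, 1.0]) @ Tinv   # (v, w) = T^{-1}[(x, y)]
            vi, wi = int(round(v)), int(round(w))
            if 0 <= vi < img.shape[0] and 0 <= wi < img.shape[1]:
                out[x, y] = img[vi, wi]
    return out

img = np.arange(100, dtype=np.uint8).reshape(10, 10)
T = np.diag([2.0, 2.0, 1.0])                 # zoom 2x in both directions
zoomed = warp_inverse(img, T, (20, 20))
```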


Image registration - align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more images taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points), which are corresponding points whose locations are known precisely in the input and reference images.


How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points, both on the input image and on the reference image. A simple model, based on a bilinear approximation, is given by

x = c_1·v + c_2·w + c_3·v·w + c_4

y = c_5·v + c_6·w + c_7·v·w + c_8

(v, w) and (x, y) are the coordinates of the tie points (we get an 8×8 linear system for the c_i)
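A sketch of estimating the eight coefficients from 4 tie-point pairs (the coordinates below are invented):

```python
import numpy as np

# 4 tie points: (v, w) in the input image, (x, y) in the reference image.
vw = np.array([[10, 10], [10, 90], [90, 10], [90, 90]], dtype=float)
xy = np.array([[12, 14], [11, 95], [93, 13], [94, 92]], dtype=float)

# Each pair contributes x = c1*v + c2*w + c3*v*w + c4 (and likewise for y),
# so stacking [v, w, v*w, 1] gives a 4x4 system per coordinate (8x8 overall).
A = np.column_stack([vw[:, 0], vw[:, 1], vw[:, 0] * vw[:, 1], np.ones(4)])
cx = np.linalg.solve(A, xy[:, 0])   # c1..c4
cy = np.linalg.solve(A, xy[:, 1])   # c5..c8
```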


When 4 tie points are insufficient to obtain satisfactory registration, an approach used frequently is to select a larger number of tie points and, using this new set of tie points, subdivide the image into rectangular regions marked by groups of 4 tie points. On each subregion marked by 4 tie points we apply the transformation model described above.

The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometrical distortion.


(a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c)


Probabilistic Methods

z_i = the values of all possible intensities in an M×N digital image, i = 0, 1, …, L-1

p(z_k) = the probability that the intensity level z_k occurs in the given image:

p(z_k) = n_k / (M·N)

n_k = the number of times that intensity z_k occurs in the image (M·N is the total number of pixels in the image)

Σ_{k=0..L-1} p(z_k) = 1

The mean (average) intensity of an image is given by

m = Σ_{k=0..L-1} z_k · p(z_k)


The variance of the intensities is

σ² = Σ_{k=0..L-1} (z_k - m)² · p(z_k)

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

μ_n(z) = Σ_{k=0..L-1} (z_k - m)^n · p(z_k)

(μ_0(z) = 1, μ_1(z) = 0, μ_2(z) = σ²)

μ_3(z) > 0: the intensities are biased to values higher than the mean

μ_3(z) < 0: the intensities are biased to values lower than the mean


μ_3(z) = 0: the intensities are distributed approximately equally on both sides of the mean
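A sketch computing these statistics from an image histogram (the random image is a stand-in):

```python
import numpy as np

img = np.random.default_rng(4).integers(0, 256, (64, 64), dtype=np.uint8)

L = 256
n_k = np.bincount(img.ravel(), minlength=L)   # histogram counts
p = n_k / img.size                            # p(z_k), sums to 1

z = np.arange(L)
m = np.sum(z * p)                             # mean intensity
var = np.sum((z - m) ** 2 * p)                # variance (mu_2)
mu3 = np.sum((z - m) ** 3 * p)                # third moment: its sign indicates bias
print(m, np.sqrt(var), mu3)
```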

Fig 1. (a) Low contrast; (b) medium contrast; (c) high contrast

Figure 1(a) - standard deviation 14.3 (variance = 204.5)

Figure 1(b) - standard deviation 31.6 (variance = 998.6)

Figure 1(c) - standard deviation 49.2 (variance = 2420.6)


Intensity Transformations and Spatial Filtering

g(x, y) = T[f(x, y)]

f(x, y) - input image; g(x, y) - output image; T - an operator on f, defined over a neighborhood of (x, y)

- the neighborhood of the point (x, y), S_xy, usually is rectangular, centered on (x, y), and much smaller in size than the image


- spatial filtering: the operator T (the neighborhood and the operation applied on it) is called a spatial filter (spatial mask, kernel, template, or window)

When S_xy has size 1×1, i.e. S_xy = {(x, y)}, T becomes an intensity (gray-level or mapping) transformation function:

s = T(r)

where s and r denote, respectively, the intensity of g and f at (x, y)

Figure 2, left: T produces an output image of higher contrast than the original, by darkening the intensity levels below k and brightening the levels above k - this technique is called contrast stretching.

Fig 2. Intensity transformation functions: left - contrast stretching; right - thresholding function


Figure 2, right: T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L-1] is obtained using the function

s = T(r) = L - 1 - r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image


Original image; negative image


Log Transformations:

s = T(r) = c · log(1 + r),   c - a positive constant, r ≥ 0

Some basic intensity transformation functions


This transformation maps a narrow range of low intensity values in the input into a wider range. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values. The opposite is true for the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a) - intensity values in the range 0 to 1.5 × 10^6

Figure 4(b) - log transformation of Figure 4(a) with c = 1; range 0 to 6.2

Fig 4. (a) Fourier spectrum; (b) log transformation applied to (a), c = 1


Power-Law (Gamma) Transformations

s = T(r) = c · r^γ,   c and γ - positive constants

(sometimes written s = c · (r + ε)^γ to account for a measurable offset)

Plots of gamma transformation for different values of γ (c=1)


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. The curves with γ > 1 have the opposite effect of those generated with γ < 1.

c = γ = 1: identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction
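A sketch of the three point transformations discussed so far, on an 8-bit image (the c values are chosen so outputs stay in [0, 255]; γ = 0.4 is an arbitrary example):

```python
import numpy as np

L = 256
r = np.random.default_rng(5).integers(0, L, (64, 64)).astype(float)  # stand-in image

negative = (L - 1) - r                               # s = L - 1 - r
log_t = (L - 1) / np.log(L) * np.log(1 + r)          # s = c * log(1 + r), scaled to [0, 255]
gamma = (L - 1) * (r / (L - 1)) ** 0.4               # s = c * r^gamma, here gamma = 0.4
```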


(a) aerial image; (b)-(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively


Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

Fig 5 (a)-(d)


T(r) =

  (s_1 / r_1) · r                                       for r ∈ [0, r_1]

  s_1 + ((s_2 - s_1) / (r_2 - r_1)) · (r - r_1)         for r ∈ [r_1, r_2]

  s_2 + ((L - 1 - s_2) / (L - 1 - r_2)) · (r - r_2)     for r ∈ [r_2, L-1]


r_1 = s_1 and r_2 = s_2: identity transformation (no change)

r_1 = r_2, s_1 = 0 and s_2 = L - 1: thresholding function

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c) - contrast stretching, obtained by setting the parameters (r_1, s_1) = (r_min, 0) and (r_2, s_2) = (r_max, L-1), where r_min and r_max denote the minimum and maximum gray levels in the image, respectively. Thus, the transformation function stretched the levels linearly from their original range to the full range [0, L-1].

Figure 5(d) - the thresholding function was used, with (r_1, s_1) = (m, 0) and (r_2, s_2) = (m, L-1), where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times
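A sketch of the Figure 5(c) setting - the full linear min-max stretch (assumes the image is not constant):

```python
import numpy as np

def stretch_full_range(img, L=256):
    """Linearly map [r_min, r_max] to [0, L-1]."""
    r = img.astype(float)
    r_min, r_max = r.min(), r.max()
    return ((L - 1) * (r - r_min) / (r_max - r_min)).astype(np.uint8)

low_contrast = np.random.default_rng(6).integers(90, 160, (64, 64), dtype=np.uint8)
stretched = stretch_full_range(low_contrast)
print(low_contrast.min(), low_contrast.max(), stretched.min(), stretched.max())
```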


Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities (first transformation plot below); a sketch of both approaches follows this list

2. brighten (or darken) the desired range of intensities but leave unchanged all other intensities in the image (second transformation plot below)
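A sketch of both approaches (A and B are the assumed band limits):

```python
import numpy as np

img = np.random.default_rng(7).integers(0, 256, (64, 64), dtype=np.uint8)
A, B = 150, 230                      # intensity band of interest
band = (img >= A) & (img <= B)

# Approach 1: binary output - white inside [A, B], black elsewhere.
sliced_binary = np.where(band, 255, 0).astype(np.uint8)

# Approach 2: brighten the band, leave everything else unchanged.
sliced_soft = img.copy()
sliced_soft[band] = 255
```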


Figure 6 (left) - aortic angiogram near the kidney. The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities


which is useful for studying the shape of the flow of the contrast substance (to detect

blockages…)

In Figure 6 (right) the second technique was used: a band of intensities in the mid-gray range around the mean intensity was set to black; the other intensities remain unchanged.

Fig 6 - Aortic angiogram and intensity sliced versions


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2.

This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 - the lowest-order bit; plane 8 - the highest-order bit).


The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all the intensities between 0 and 127 to 0, and all levels between 128 and 255 to 1. The bit-plane slicing technique is useful for analyzing the relative importance of each bit in the image, which helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
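A sketch extracting all eight bit planes; note that plane 8 (index 7) reproduces the thresholding described above:

```python
import numpy as np

img = np.random.default_rng(8).integers(0, 256, (64, 64), dtype=np.uint8)

# planes[k] is 1 where bit k of the pixel is set (k = 7 is the highest-order bit).
planes = [(img >> k) & 1 for k in range(8)]

# The 8th bit plane equals thresholding at 128, as described above.
print(np.array_equal(planes[7], (img >= 128).astype(img.dtype)))  # True
```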

  • DIP 1 2017
  • DIP 02 (2017)
Page 48: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image restoration

bull improving the appearance of an image

bull restoration is objective - the techniques for restoration are based on mathematical or probabilistic models of image degradation

Color image processing

bull fundamental concept in color models

bull basic color processing in a digital domain

Wavelets and multiresolution processing

representing images in various degree of resolution

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Compression

reducing the storage required to save an image or the bandwidth required to transmit it

Morphological processing

bull tools for extracting image components that are useful in the representation and description of shape

bull a transition from processes that output images to processes that outputimage attributes

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Segmentation

bull partitioning an image into its constituents parts or objects

bull autonomous segmentation is one of the most difficult tasks of DIP

bull the more accurate the segmentation the more likley recognition is to succeed

Representation and description (almost always follows segmentation)

bull segmentation produces either the boundary of a region or all the poits in the region itself

bull converting the data produced by segmentation to a form suitable for computer processing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

bull boundary representation the focus is on external shape characteristics such as corners or inflections

bull complete region the focus is on internal properties such as texture or skeletal shape

bull description is also called feature extraction ndash extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another

Object recognition

the process of assigning a label (eg bdquovehiclerdquo) to an object based on itsdescriptors

Knowledge database

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Simplified diagramof a cross sectionof the human eye

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Three membranes enclose the eye the cornea and sclera outer cover the choroid the retina

The cornea is a tough transparent tissue that covers the anterior surface of the eye

Continuous with the cornea the sclera is an opaque membrane that enclosesthe remainder of the optic globe

The choroid lies directly below the sclera This membrane contains a network of blood vessels (major nutrition of the eye) The choroid is pigmented and help reduce the amount of light entering the eyeThe choroid is divided (at its anterior extreme) into the ciliary body and the irisThe iris contracts and expands to control the amount of light

Digital Image ProcessingDigital Image Processing

Week 1Week 1

The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to ciliary body (60-70 water 6 fat protein) The lens is colored in slightly yellow The lens absorbs approximatively 8 of the visiblelight (infrared and ultraviolet light are absorbed by proteins in lens)

The innermost membrane is the retina When the eye is proper focusedlight from an object outside the eye is imaged on the retinaVision is possible because of the distribution of discrete light receptors on the surface of the retina cones and rods (6-7 milion cones 75-150 milion rods)

Cones located in the central part of the retina (fovea) they are sensitive to colors vision of detail each cone is link to its own nervecone vision = photopic or bright-light vision

Fovea = the place where the image of the object of interest falls on

Rods: distributed over all the retina surface; several rods are connected to a single nerve; not specialized in detail vision; serve to give a general, overall picture of the field of view; not involved in color vision; sensitive to low levels of illumination.

Blind spot: region without receptors.


Image formation in the eye

Ordinary photographic camera: the lens has a fixed focal length; focusing at various distances is done by modifying the distance between the lens and the image plane (where the film or imaging chip is located).

Human eye: the distance between the lens and the retina (the imaging region) is fixed; the focal length needed to achieve proper focus is obtained by varying the shape of the lens (the fibers in the ciliary body accomplish this, flattening or thickening the lens for distant or near objects, respectively).

distance between lens and retina along the visual axis = 17 mm

range of focal length = 14 mm to 17 mm

Illustration of the Mach band effect. Perceived intensity is not a simple function of actual intensity.


All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter


Optical illusions

ALBASTRU (BLUE), VERDE (GREEN), GALBEN (YELLOW), ROSU (RED), PORTOCALIU (ORANGE), GALBEN (YELLOW), VISINIU (BURGUNDY), ALB (WHITE)

Light

• achromatic or monochromatic light – light that is void of color; the attribute of such light is its intensity, or amount. The term gray level is used to describe monochromatic intensity, because it ranges from black, to grays, and finally to white.

• chromatic light spans the electromagnetic energy spectrum from approximately 0.43 to 0.79 μm.

Quantities that describe the quality of a chromatic light source:

• radiance – the total amount of energy that flows from the light source; it is usually measured in watts (W);

• luminance – measured in lumens (lm); gives a measure of the amount of energy an observer perceives from a light source.

For example, light emitted from a source operating in the far infrared region of the spectrum could have significant energy (radiance), but an observer would hardly perceive it; its luminance would be almost zero.

• brightness – a subjective descriptor of light perception that is practically impossible to measure. It embodies the achromatic notion of intensity and is one of the key factors in describing color sensation.

The physical meaning of the intensity values is determined by the source of the image.

$$f : D \to \mathbb{R}, \qquad (x, y) \mapsto f(x, y)$$

Image generated from a physical process: f(x, y) – proportional to the energy radiated by the physical source.

f(x, y) – characterized by two components:

i(x, y) = illumination component: the amount of source illumination incident on the scene being viewed;

r(x, y) = reflectance component: the amount of illumination reflected by the objects in the scene.

$$f(x, y) = i(x, y)\, r(x, y), \qquad 0 < i(x, y) < \infty, \quad 0 < r(x, y) < 1$$

r(x, y) = 0 – total absorption; r(x, y) = 1 – total reflectance.

i(x, y) – determined by the illumination source;

r(x, y) – determined by the characteristics of the imaged objects.

In practice:

$$L_{\min} \le l = f(x_0, y_0) \le L_{\max}, \qquad L_{\min} = i_{\min} r_{\min}, \quad L_{\max} = i_{\max} r_{\max}$$

Typical indoor values, without additional illumination: $L_{\min} \approx 10$, $L_{\max} \approx 1000$.

The interval $[L_{\min}, L_{\max}]$ is called the gray (or intensity) scale. It is common to shift this interval to $[0, L-1]$, where $l = 0$ is considered black and $l = L-1$ is considered white; all intermediate values are shades of gray.


Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed scene

Converting a continuous image f to digital form:

- digitizing the coordinate values (x, y) is called sampling

- digitizing the amplitude values f(x, y) is called quantization

Continuous image projected onto a sensor array; result of image sampling and quantization.


Representing Digital Images

(x, y), x = 0, 1, …, M-1, y = 0, 1, …, N-1 – spatial variables or spatial coordinates

$$f(x, y) = \begin{bmatrix} f(0,0) & f(0,1) & \cdots & f(0,N-1) \\ f(1,0) & f(1,1) & \cdots & f(1,N-1) \\ \vdots & \vdots & & \vdots \\ f(M-1,0) & f(M-1,1) & \cdots & f(M-1,N-1) \end{bmatrix}$$

Each element of this matrix is called an image element or pixel. In matrix notation:

$$A = \begin{bmatrix} a_{00} & a_{01} & \cdots & a_{0,N-1} \\ a_{10} & a_{11} & \cdots & a_{1,N-1} \\ \vdots & \vdots & & \vdots \\ a_{M-1,0} & a_{M-1,1} & \cdots & a_{M-1,N-1} \end{bmatrix}, \qquad a_{ij} = f(x = i, y = j) = f(i, j)$$

f(0, 0) – the upper left corner of the image

M, N – positive integers; $L = 2^k$; $a_{ij} \in [0, L-1]$

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system. Upper limit – determined by saturation; lower limit – determined by noise.

Number of bits required to store a digitized image:

$$b = M \times N \times k, \qquad \text{for } M = N: \; b = N^2 k$$

When an image can have $2^k$ intensity levels, the image is referred to as a k-bit image (256 discrete intensity values – an 8-bit image).
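As a quick worked example of the formula (the image size is an illustrative assumption):

```python
M = N = 1024          # illustrative image size
k = 8                 # bits per pixel (2**8 = 256 intensity levels)
b = M * N * k         # total bits required
print(b, "bits =", b // 8, "bytes")   # 8388608 bits = 1048576 bytes = 1 MB
```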


Spatial and Intensity Resolution

Spatial resolution – the smallest discernible detail in an image.

Measures: line pairs per unit distance; dots (pixels) per unit distance.

Image resolution = the largest number of discernible line pairs per unit distance

(e.g. 100 line pairs per mm).

Dots per unit distance is a measure commonly used in printing and publishing.

In the US this measure is expressed in dots per inch (dpi)

(newspapers are printed at 75 dpi, glossy brochures at 175 dpi).

Intensity resolution ndash the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations

$L = 2^k$ – most common: k = 8

Intensity resolution in practice is given by k (number of bits used to quantize intensity)

Fig 1 – Reducing spatial resolution: 1250 dpi (upper left), 300 dpi (upper right), 150 dpi (lower left), 72 dpi (lower right).

Reducing the number of gray levels: 256, 128, 64, 32.

Reducing the number of gray levels: 16, 8, 4, 2.


Image Interpolation – used in zooming, shrinking, rotating, and geometric corrections.

Shrinking and zooming (image resizing) are image resampling methods.

Interpolation is the process of using known data to estimate values at unknown locations.

Suppose we have an image of size 500 × 500 pixels that has to be enlarged 1.5 times, to 750 × 750 pixels. One way to do this is to create an imaginary 750 × 750 grid with the same pixel spacing as the original, and then shrink it so that it fits exactly over the original image. The pixel spacing in the 750 × 750 grid will be less than in the original image.

Problem: assignment of intensity levels in the new 750 × 750 grid.

Nearest neighbor interpolation: assign to every point in the new grid (750 × 750) the intensity of the closest pixel (nearest neighbor) from the old/original grid (500 × 500). This technique has the tendency to produce undesirable effects, like severe distortion of straight edges.

Bilinear interpolation – assign to the new (x, y) location the intensity

$$v(x, y) = a x + b y + c x y + d$$

where the four coefficients are determined from the 4 equations in 4 unknowns that can be written using the 4 nearest neighbors of point (x, y). Bilinear interpolation gives much better results than nearest neighbor interpolation, with a modest increase in computational effort.

Bicubic interpolation – assign to the new (x, y) location an intensity that involves the 16 nearest neighbors of the point:

$$v(x, y) = \sum_{i=0}^{3} \sum_{j=0}^{3} c_{ij}\, x^i y^j$$

The coefficients $c_{ij}$ are obtained by solving a 16 × 16 linear system written using the intensity levels of the 16 nearest neighbors of (x, y).
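For illustration, a minimal numpy sketch of the bilinear scheme applied to resizing (the function name and structure are my own, not from the slides; each output pixel is inverse-mapped into the input grid and interpolated from its 4 nearest neighbors):

```python
import numpy as np

def bilinear_resize(img, new_h, new_w):
    """Resize a 2-D grayscale image using bilinear interpolation."""
    img = img.astype(float)
    h, w = img.shape
    ys = np.linspace(0, h - 1, new_h)          # output rows in input coordinates
    xs = np.linspace(0, w - 1, new_w)          # output columns in input coordinates
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]                    # fractional vertical offsets
    wx = (xs - x0)[None, :]                    # fractional horizontal offsets
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy
```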

Generally, bicubic interpolation does a better job of preserving fine detail than the bilinear technique. Bicubic interpolation is the standard used in commercial image editing programs, such as Adobe Photoshop and Corel Photopaint.

Figure 2(a) is the same as Fig 1(d), which was obtained by reducing the resolution of the 1250 dpi image in Fig 1(a) to 72 dpi (the size shrank from 3692 × 2812 to 213 × 162) and then zooming the reduced image back to its original size. To generate Fig 1(d), nearest neighbor interpolation was used (both for shrinking and zooming).

Figures 2(b) and (c) were generated using the same steps, but using bilinear and bicubic interpolation, respectively. Figures 2(d)+(e)+(f) were obtained by reducing the resolution from 1250 dpi to 150 dpi (instead of 72 dpi).

Fig 2 – Interpolation examples for zooming and shrinking (nearest neighbor, bilinear, bicubic).


Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors:

$$(x+1, y),\ (x-1, y),\ (x, y+1),\ (x, y-1)$$

This set of pixels is called the 4-neighbors of p, denoted by $N_4(p)$.

The 4 diagonal neighbors of p have coordinates

$$(x+1, y+1),\ (x+1, y-1),\ (x-1, y+1),\ (x-1, y-1)$$

and are denoted $N_D(p)$.

The horizontal, vertical, and diagonal neighbors together are called the 8-neighbors of p, denoted $N_8(p)$.

If (x, y) is on the border of the image, some of the neighbor locations in $N_D(p)$ and $N_8(p)$ fall outside the image.


Adjacency, Connectivity, Regions, Boundaries

Denote by V the set of intensity levels used to define adjacency:

- in a binary image, $V \subseteq \{0, 1\}$ (e.g. $V = \{0\}$ or $V = \{1\}$);

- in a gray-scale image with 256 possible gray levels, V can be any subset of $\{0, \ldots, 255\}$.

We consider 3 types of adjacency:

(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if $q \in N_4(p)$;

(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if $q \in N_8(p)$;

(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if

$q \in N_4(p)$, or

$q \in N_D(p)$ and the set $N_4(p) \cap N_4(q)$ has no pixels whose values are from V.

Mixed adjacency is a modification of 8-adjacency. It is introduced to eliminate the ambiguities that often arise when 8-adjacency is used. Consider the example:

Binary image example, with $V = \{1\}$ (the same 3 × 3 arrangement is shown three times in the original figure: as given, with 8-adjacency paths, and with m-adjacency):

0 1 1
0 1 0
0 0 1

The three pixels at the top (first line) in the above example show multiple (ambiguous) 8-adjacency, as indicated by the dashed lines in the original figure. This ambiguity is removed by using m-adjacency.

A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

$$(x_0, y_0), (x_1, y_1), \ldots, (x_n, y_n)$$

where $(x_0, y_0) = (x, y)$, $(x_n, y_n) = (s, t)$, and pixels $(x_i, y_i)$ and $(x_{i-1}, y_{i-1})$ are adjacent for $i = 1, 2, \ldots, n$.

The length of the path is n. If $(x_0, y_0) = (x_n, y_n)$ the path is closed.

Depending on the type of adjacency considered, the paths are 4-, 8-, or m-paths.

Let S denote a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting only of pixels from S.

S is a connected set if there is a path in S between any 2 pixels in S.

Let R be a subset of pixels in an image. R is a region of the image if R is a connected set.

Two regions $R_1$ and $R_2$ are said to be adjacent if $R_1 \cup R_2$ forms a connected set. Regions that are not adjacent are said to be disjoint. When referring to regions, only 4- and 8-adjacency are considered.

Suppose that an image contains K disjoint regions $R_k$, $k = 1, \ldots, K$, none of which touches the image border. Let

$$R_u = \bigcup_{k=1}^{K} R_k$$

and let $(R_u)^c$ denote the complement of $R_u$.

We call all the points in $R_u$ the foreground of the image, and the points in $(R_u)^c$ the background of the image.

The boundary (border or contour) of a region R is the set of points that are adjacent to points in the complement of R, $(R)^c$. Equivalently, the border of a region is the set of pixels in the region that have at least one background neighbor. This definition is referred to as the inner border, to distinguish it from the notion of outer border, which is the corresponding border in the background.

Distance measures

For pixels p, q, and z with coordinates (x, y), (s, t), and (v, w) respectively, D is a distance function, or metric, if:

(a) $D(p, q) \ge 0$, with $D(p, q) = 0$ iff $p = q$;

(b) $D(p, q) = D(q, p)$;

(c) $D(p, z) \le D(p, q) + D(q, z)$.

The Euclidean distance between p and q is defined as

$$D_e(p, q) = \left[ (x - s)^2 + (y - t)^2 \right]^{1/2}$$

The pixels q for which $D_e(p, q) \le r$ are the points contained in a disk of radius r centered at (x, y).


The $D_4$ distance (also called city-block distance) between p and q is defined as

$$D_4(p, q) = |x - s| + |y - t|$$

The pixels q for which $D_4(p, q) \le r$ form a diamond centered at (x, y). For example, the pixels with $D_4 \le 2$:

        2
      2 1 2
    2 1 0 1 2
      2 1 2
        2

The pixels with $D_4 = 1$ are the 4-neighbors of (x, y).

The $D_8$ distance (called the chessboard distance) between p and q is defined as

$$D_8(p, q) = \max(|x - s|, |y - t|)$$

The pixels q for which $D_8(p, q) \le r$ form a square centered at (x, y). For example, the pixels with $D_8 \le 2$:

    2 2 2 2 2
    2 1 1 1 2
    2 1 0 1 2
    2 1 1 1 2
    2 2 2 2 2

The pixels with $D_8 = 1$ are the 8-neighbors of (x, y).

$D_4$ and $D_8$ distances are independent of any paths that might exist between p and q, because these distances involve only the coordinates of the points.
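The three metrics can be computed directly (a minimal Python sketch; the function names are illustrative):

```python
import numpy as np

def d_e(p, q):                       # Euclidean distance
    return np.hypot(p[0] - q[0], p[1] - q[1])

def d_4(p, q):                       # city-block distance
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d_8(p, q):                       # chessboard distance
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

print(d_e((0, 0), (3, 2)), d_4((0, 0), (3, 2)), d_8((0, 0), (3, 2)))  # 3.60..., 5, 3
```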

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis. For two images

$$\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \quad \text{and} \quad \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix}$$

the array product is

$$\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \odot \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} = \begin{bmatrix} a_{11} b_{11} & a_{12} b_{12} \\ a_{21} b_{21} & a_{22} b_{22} \end{bmatrix}$$

while the matrix product is

$$\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} = \begin{bmatrix} a_{11} b_{11} + a_{12} b_{21} & a_{11} b_{12} + a_{12} b_{22} \\ a_{21} b_{11} + a_{22} b_{21} & a_{21} b_{12} + a_{22} b_{22} \end{bmatrix}$$

We assume array operations throughout, unless stated otherwise.
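In numpy, for example, the two products are distinct operators (illustrative sketch):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
print(A * B)   # array (pixel-by-pixel) product: [[ 5 12] [21 32]]
print(A @ B)   # matrix product:                 [[19 22] [43 50]]
```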

Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether the method is linear or nonlinear. Consider an operator H that produces an output image g(x, y) from an input image f(x, y):

$$H[f(x, y)] = g(x, y)$$

H is said to be a linear operator if

$$H[a f_1(x, y) + b f_2(x, y)] = a\, H[f_1(x, y)] + b\, H[f_2(x, y)]$$

for all scalars a, b and all images $f_1$, $f_2$.

Example of a nonlinear operator: the maximum value of the pixels of image f,

$$H[f] = \max_{(x,y)} f(x, y)$$

Consider

$$f_1 = \begin{bmatrix} 0 & 2 \\ 2 & 3 \end{bmatrix}, \quad f_2 = \begin{bmatrix} 6 & 5 \\ 4 & 7 \end{bmatrix}, \quad a = 1, \; b = -1$$

$$\max\left( 1 \cdot \begin{bmatrix} 0 & 2 \\ 2 & 3 \end{bmatrix} + (-1) \begin{bmatrix} 6 & 5 \\ 4 & 7 \end{bmatrix} \right) = \max \begin{bmatrix} -6 & -3 \\ -2 & -4 \end{bmatrix} = -2$$

while

$$1 \cdot \max \begin{bmatrix} 0 & 2 \\ 2 & 3 \end{bmatrix} + (-1) \max \begin{bmatrix} 6 & 5 \\ 4 & 7 \end{bmatrix} = 3 + (-1) \cdot 7 = -4$$

Since the two results differ, the max operator is nonlinear.
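A quick numerical check of this example (a minimal numpy sketch, for illustration only):

```python
import numpy as np

f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 5], [4, 7]])
a, b = 1, -1
print(np.max(a * f1 + b * f2))            # -2: max of the combined image
print(a * np.max(f1) + b * np.max(f2))    # -4: combination of the maxima
```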

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise:

$$g(x, y) = f(x, y) + \eta(x, y)$$

f(x, y) – the noiseless image; η(x, y) – the noise, uncorrelated and with zero average value.

For a random variable z with mean m, $E[(z - m)^2]$ is the variance ($E(\cdot)$ is the expected value). The covariance of two random variables $z_1$ and $z_2$ is defined as $E[(z_1 - m_1)(z_2 - m_2)]$. The two random variables are uncorrelated when their covariance is 0.

Objective: reduce noise by averaging a set of noisy images $g_i(x, y)$ (a technique frequently used in image enhancement):

$$\bar{g}(x, y) = \frac{1}{K} \sum_{i=1}^{K} g_i(x, y)$$

If the noise satisfies the properties stated above, we have

$$E\{\bar{g}(x, y)\} = f(x, y), \qquad \sigma^2_{\bar{g}}(x, y) = \frac{1}{K}\, \sigma^2_{\eta}(x, y)$$

where $E\{\bar{g}(x, y)\}$ is the expected value of $\bar{g}$, and $\sigma^2_{\bar{g}}(x, y)$ and $\sigma^2_{\eta}(x, y)$ are the variances of $\bar{g}$ and η, respectively. The standard deviation (square root of the variance) at any point in the average image is

$$\sigma_{\bar{g}}(x, y) = \frac{1}{\sqrt{K}}\, \sigma_{\eta}(x, y)$$

As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because $E\{\bar{g}(x, y)\} = f(x, y)$, this means that $\bar{g}(x, y)$ approaches f(x, y) as the number of noisy images used in the averaging process increases.

An important application of image averaging is in the field of astronomy, where imaging under very low light levels frequently causes sensor noise to render single images virtually useless for analysis. Figure 3(a) shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels. Figures 3(b)-(f) show the result of averaging 5, 10, 20, 50, and 100 noisy images, respectively.
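A small simulation (an illustrative sketch, not from the slides) confirms the $1/\sqrt{K}$ reduction of the noise standard deviation:

```python
import numpy as np
rng = np.random.default_rng(0)

f = np.full((64, 64), 100.0)                       # noiseless image f(x, y)
K = 100
g = f + rng.normal(0.0, 64.0, size=(K, 64, 64))    # g_i = f + eta_i, sigma_eta = 64

g_bar = g.mean(axis=0)                             # average of the K noisy images
print(g[0].std())                                  # ~64 (single noisy image)
print(g_bar.std())                                 # ~6.4 = 64 / sqrt(100)
```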

Fig 3 – Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (panel (a), upper left); results of averaging 5, 10, 20, 50, and 100 noisy images (panels (b)-(f)).

A frequent application of image subtraction is in the enhancement of differences between images.

Fig 4 – (a) Infrared image of the Washington, DC area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images.

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between images (a) and (b). Black (0) values in Figure 4(c) indicate locations where there is no difference between images (a) and (b).

Mask mode radiography:

$$g(x, y) = f(x, y) - h(x, y)$$

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images, called live images (denoted f(x, y)), of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium.

In g(x, y) we can find the differences between h and f, as enhanced detail.

Because images are captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries in the area being observed.

Fig 5 – Angiography subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c) enhanced.

An important application of image multiplication (and division) is shading correction.

Suppose that an imaging sensor produces images in the form

$$g(x, y) = f(x, y)\, h(x, y)$$

f(x, y) – the "perfect image"; h(x, y) – the shading function.

When the shading function is known:

$$f(x, y) = \frac{g(x, y)}{h(x, y)}$$

When h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.

Fig 6 – Shading correction: (a) shaded image of a tungsten filament, magnified 130 times; (b) shading pattern; (c) corrected image.

Another use of image multiplication is in masking, also called region of interest (ROI) operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of the ROI can be arbitrary, although usually it is rectangular.

Fig 7 – (a) digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b).

In practice, most images are displayed using 8 bits, so image values are expected to be in the range [0, 255] (for TIFF and JPEG images, conversion to this range is automatic; the conversion depends on the system used). The difference of two images can produce an image with values in the range [-255, 255], and the addition of two images can produce values in the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure is to compute

$$f_m = f - \min(f)$$

which creates an image whose minimum value is 0, and then

$$f_s = K \frac{f_m}{\max(f_m)}, \qquad 0 \le f_s \le K \quad (K = 255 \text{ for 8-bit images})$$
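A minimal sketch of this rescaling procedure (assuming a float numpy array that is not constant; names are illustrative):

```python
import numpy as np

def rescale(f, K=255):
    """Shift and scale an arbitrary-range image into [0, K]."""
    fm = f - f.min()              # f_m = f - min(f): minimum becomes 0
    return K * fm / fm.max()      # f_s = K f_m / max(f_m)

diff = np.array([[-255., 0.], [128., 255.]])   # e.g. a difference image
print(rescale(diff))              # [[  0.   127.5] [191.5  255. ]]
```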


Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity of the individual pixels:

$$s = T(z)$$

where z is the intensity of a pixel in the original image and s is the intensity of the corresponding pixel in the processed image.

Neighborhood operations

Let $S_{xy}$ denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y) based on the values of the intensities of the points in $S_{xy}$. For example, if $S_{xy}$ is a rectangular neighborhood of size m × n centered at (x, y), we can assign the new value of intensity by computing the average value of the pixels in $S_{xy}$:

$$g(x, y) = \frac{1}{mn} \sum_{(r, c) \in S_{xy}} f(r, c)$$

The net effect is to perform local blurring of the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
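A direct (unoptimized) sketch of this neighborhood-averaging operation, assuming odd m and n and zero padding at the borders:

```python
import numpy as np

def local_mean(f, m, n):
    """Blur f by averaging over an m x n neighborhood (m, n odd; zero padding)."""
    M, N = f.shape
    padded = np.pad(f.astype(float), ((m // 2,), (n // 2,)))
    g = np.empty((M, N))
    for x in range(M):
        for y in range(N):
            g[x, y] = padded[x:x + m, y:y + n].mean()   # average over S_xy
    return g
```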

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image;

- these transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules).

A geometric transformation consists of 2 basic operations:

1. a spatial transformation of coordinates;

2. an intensity interpolation that assigns intensity values to the spatially transformed pixels.

The coordinate system transformation:

$$(x, y) = T[(v, w)]$$

(v, w) – pixel coordinates in the original image;

(x, y) – pixel coordinates in the transformed image.

For example, $T[(v, w)] = (v/2, w/2)$ shrinks the original image to half its size in both spatial directions.

Affine transform:

$$[x \;\; y \;\; 1] = [v \;\; w \;\; 1]\, T = [v \;\; w \;\; 1] \begin{bmatrix} t_{11} & t_{12} & 0 \\ t_{21} & t_{22} & 0 \\ t_{31} & t_{32} & 1 \end{bmatrix} \tag{AT}$$

that is,

$$x = t_{11} v + t_{21} w + t_{31}, \qquad y = t_{12} v + t_{22} w + t_{32}$$

This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3 × 3 matrix equal to the matrix product of the scaling, rotation, and translation matrices from Table 1.
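A sketch of composing such a matrix under the row-vector convention of (AT) (the particular matrices, values, and rotation sign convention are illustrative assumptions):

```python
import numpy as np

theta = np.deg2rad(30)
S = np.array([[2.0, 0.0, 0.0],                        # scaling by 2
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])
R = np.array([[ np.cos(theta), np.sin(theta), 0.0],   # rotation by theta
              [-np.sin(theta), np.cos(theta), 0.0],
              [ 0.0,           0.0,           1.0]])
Tr = np.array([[1.0, 0.0, 0.0],                       # translation by (10, 20)
               [0.0, 1.0, 0.0],
               [10., 20., 1.0]])

T = S @ R @ Tr                                        # scale, then rotate, then translate
x, y, _ = np.array([5.0, 3.0, 1.0]) @ T               # [x y 1] = [v w 1] T
print(x, y)
```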

Table 1 – Affine transformations.

The preceding transformations relocate pixels of an image to new locations. To complete the process, we have to assign intensity values to those locations. This task is done using intensity interpolation (nearest neighbor, bilinear, or bicubic interpolation).

In practice, we can use equation (AT) in two basic ways:

Forward mapping: scan the pixels of the input image (v, w) and compute the new spatial location (x, y) of the corresponding pixel in the new image using (AT) directly.

Problems:

- intensity assignment when 2 or more pixels in the original image are transformed to the same location in the output image;

- some output locations may have no correspondent in the original image (no intensity assignment).

Inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image as

$$(v, w) = T^{-1}(x, y)$$

Then interpolate among the nearest input pixels to determine the intensity of the output pixel value.

Inverse mappings are more efficient to implement than forward mappings, and they are used in numerous commercial implementations of spatial transformations (MATLAB, for example).
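A minimal sketch of inverse mapping with nearest-neighbor intensity assignment (illustrative only; a real implementation would also handle the change of output size):

```python
import numpy as np

def warp_inverse(f, T):
    """Apply [x y 1] = [v w 1] T by inverse mapping, nearest-neighbor assignment."""
    M, N = f.shape
    T_inv = np.linalg.inv(T)
    g = np.zeros_like(f)
    for x in range(M):
        for y in range(N):
            v, w, _ = np.array([x, y, 1.0]) @ T_inv   # (v, w) = T^{-1}(x, y)
            vi, wi = int(round(v)), int(round(w))     # nearest input pixel
            if 0 <= vi < M and 0 <= wi < N:
                g[x, y] = f[vi, wi]
    return g
```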


Image registration – align two or more images of the same scene.

In image registration, we have available the input and output images, but the specific transformation that produced the output image from the input is generally unknown. The problem is to estimate the transformation function and then use it to register the two images:

- it may be of interest to align (register) two or more images taken at approximately the same time but using different imaging systems (e.g. an MRI scanner and a PET scanner);

- or to align images of a given location taken by the same instrument at different moments of time (e.g. satellite images).

One approach solves the problem using tie points (also called control points), which are corresponding points whose locations are known precisely in the input and reference images.

How to select tie points:

- interactively selecting them;

- use of algorithms that try to detect these points automatically;

- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors; these objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points.

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points both in the input image and in the reference image. A simple model based on a bilinear approximation is given by

$$x = c_1 v + c_2 w + c_3 v w + c_4$$
$$y = c_5 v + c_6 w + c_7 v w + c_8$$

where (v, w) and (x, y) are the coordinates of the tie points (we get an 8 × 8 linear system for the $c_i$).
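Note that the 8 × 8 system decouples into two 4 × 4 systems, one for $c_1, \ldots, c_4$ and one for $c_5, \ldots, c_8$. A sketch with hypothetical tie-point coordinates (the numbers below are made up for illustration):

```python
import numpy as np

# Hypothetical tie points: (v, w) in the input image, (x, y) in the reference image.
vw = np.array([[10., 10.], [10., 200.], [200., 10.], [200., 200.]])
xy = np.array([[12., 14.], [11., 205.], [204., 13.], [206., 208.]])

# x = c1 v + c2 w + c3 v w + c4,  y = c5 v + c6 w + c7 v w + c8
A = np.column_stack([vw[:, 0], vw[:, 1], vw[:, 0] * vw[:, 1], np.ones(4)])
c_x = np.linalg.solve(A, xy[:, 0])   # c1..c4
c_y = np.linalg.solve(A, xy[:, 1])   # c5..c8
```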

When 4 tie points are insufficient to obtain satisfactory registration, an approach used frequently is to select a larger number of tie points and, using this new set of tie points, subdivide the image into rectangular regions marked by groups of 4 tie points. On each subregion marked by 4 tie points, the transformation model described above is applied.

The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometric distortion.

(a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c).

Probabilistic Methods

Let $z_i$, $i = 0, 1, \ldots, L-1$, denote the values of all possible intensities in an M × N digital image, and let $p(z_k)$ denote the probability that intensity level $z_k$ occurs in the given image:

$$p(z_k) = \frac{n_k}{M N}$$

where $n_k$ is the number of times that intensity $z_k$ occurs in the image (MN is the total number of pixels in the image). Clearly,

$$\sum_{k=0}^{L-1} p(z_k) = 1$$

The mean (average) intensity of an image is given by

$$m = \sum_{k=0}^{L-1} z_k\, p(z_k)$$

The variance of the intensities is

$$\sigma^2 = \sum_{k=0}^{L-1} (z_k - m)^2\, p(z_k)$$

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation ($\sigma$) is used.

The n-th moment of a random variable z about the mean is defined as

$$\mu_n(z) = \sum_{k=0}^{L-1} (z_k - m)^n\, p(z_k)$$

(note that $\mu_0(z) = 1$, $\mu_1(z) = 0$, and $\mu_2(z) = \sigma^2$).

$\mu_3(z) > 0$: the intensities are biased to values higher than the mean;

$\mu_3(z) < 0$: the intensities are biased to values lower than the mean;

$\mu_3(z) = 0$: the intensities are distributed approximately equally on both sides of the mean.
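These statistics can be computed directly from the normalized histogram (a minimal sketch for 8-bit images; the function name is illustrative):

```python
import numpy as np

def intensity_stats(img, L=256):
    """Mean, variance and third central moment from the normalized histogram."""
    z = np.arange(L)
    p = np.bincount(img.ravel(), minlength=L) / img.size   # p(z_k) = n_k / (M N)
    m = (z * p).sum()                                      # mean intensity
    mu2 = ((z - m) ** 2 * p).sum()                         # variance (contrast)
    mu3 = ((z - m) ** 3 * p).sum()                         # sign gives the intensity bias
    return m, mu2, mu3
```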

Fig 1 – (a) Low contrast; (b) medium contrast; (c) high contrast.

Figure 1(a) – standard deviation 14.3 (variance = 204.5)

Figure 1(b) – standard deviation 31.6 (variance = 998.6)

Figure 1(c) – standard deviation 49.2 (variance = 2420.6)

Intensity Transformations and Spatial Filtering

$$g(x, y) = T[f(x, y)]$$

f(x, y) – input image; g(x, y) – output image; T – an operator on f, defined over a neighborhood of (x, y).

- the neighborhood of the point (x, y), $S_{xy}$, usually is rectangular, centered on (x, y), and much smaller in size than the image.

- spatial filtering: the operator T (the neighborhood and the operation applied on it) is called a spatial filter (also: spatial mask, kernel, template, or window).

When $S_{xy}$ has size 1 × 1, T becomes an intensity (gray-level or mapping) transformation function:

$$s = T(r)$$

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2 (left): T produces an output image of higher contrast than the original, by darkening the intensity levels below k and brightening the levels above k – this technique is called contrast stretching.

Fig 2 – Intensity transformation functions: left – contrast stretching; right – thresholding function.

Figure 2 (right): T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L-1] is obtained using the function

$$s = T(r) = L - 1 - r$$

- the equivalent of a photographic negative;

- a technique suited for enhancing white or gray detail embedded in dark regions of an image.

Original image (left); negative image (right).

Log Transformations:

$$s = T(r) = c \log(1 + r), \qquad c \text{ – constant}, \; r \ge 0$$

Some basic intensity transformation functions

This transformation maps a narrow range of low intensity values in the input into a wider range of output values. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values. The opposite is true of the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a) – intensity values in the range 0 to $1.5 \times 10^6$.

Figure 4(b) – log transformation of Figure 4(a) with c = 1 – range 0 to 6.2.

Fig 4 – (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.

Power-Law (Gamma) Transformations

$$s = T(r) = c\, r^{\gamma}, \qquad c, \gamma \text{ – positive constants}$$

Plots of gamma transformation for different values of γ (c=1)

Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. Curves with γ > 1 have the opposite effect of those generated with γ < 1.

c = γ = 1: identity transformation.

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
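Minimal sketches of the three point transformations discussed so far (negative, log, gamma), assuming 8-bit data; the scaling conventions here are illustrative choices, not prescribed by the slides:

```python
import numpy as np

def negative(r, L=256):
    return (L - 1) - r                        # s = L - 1 - r

def log_transform(r, c=1.0):
    return c * np.log1p(r.astype(float))      # s = c log(1 + r); rescale for display

def gamma_transform(r, c=1.0, gamma=0.5, L=256):
    rn = r.astype(float) / (L - 1)            # normalize r to [0, 1]
    return (L - 1) * c * rn ** gamma          # s = c r^gamma, mapped back to [0, L-1]
```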

(a) aerial image; (b)-(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively.

Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image, so that it spans the full intensity range of the recording tool or display device.

Fig 5 – (a) transformation function; (b) low-contrast image; (c) result of contrast stretching; (d) result of thresholding.

$$T(r) = \begin{cases} \dfrac{s_1}{r_1}\, r, & r \in [0, r_1] \\[2mm] s_1 + \dfrac{s_2 - s_1}{r_2 - r_1}\,(r - r_1), & r \in [r_1, r_2] \\[2mm] s_2 + \dfrac{L - 1 - s_2}{L - 1 - r_2}\,(r - r_2), & r \in [r_2, L - 1] \end{cases}$$

$r_1 = s_1$ and $r_2 = s_2$: identity transformation (no change).

$r_1 = r_2$, $s_1 = 0$, $s_2 = L - 1$: thresholding function.

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c) – contrast stretching, obtained by setting the parameters $(r_1, s_1) = (r_{\min}, 0)$ and $(r_2, s_2) = (r_{\max}, L-1)$, where $r_{\min}$ and $r_{\max}$ denote the minimum and maximum gray levels in the image, respectively. The transformation function stretched the levels linearly from their original range to the full range [0, L-1].

Figure 5(d) – the thresholding function was used, with $(r_1, s_1) = (m, 0)$ and $(r_2, s_2) = (m, L-1)$, where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.
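A sketch of the piecewise-linear stretching function, assuming $0 < r_1 < r_2 < L-1$ (the control points are the caller's choice; the function name is illustrative):

```python
import numpy as np

def contrast_stretch(r, r1, s1, r2, s2, L=256):
    """Piecewise-linear stretching through (r1, s1) and (r2, s2); needs 0 < r1 < r2 < L-1."""
    r = r.astype(float)
    low  = (s1 / r1) * r
    mid  = s1 + (s2 - s1) / (r2 - r1) * (r - r1)
    high = s2 + (L - 1 - s2) / (L - 1 - r2) * (r - r2)
    return np.where(r <= r1, low, np.where(r <= r2, mid, high))

# Full-range stretch: contrast_stretch(img, img.min(), 0, img.max(), 255)
```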

Intensity-level slicing

- highlighting a specific range of intensities in an image.

There are two approaches to intensity-level slicing:

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities (Figure 3.11(a));

2. brighten (or darken) the desired range of intensities, but leave unchanged all other intensities in the image (Figure 3.11(b)).

Figure 6 (left) – aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the scale of intensities. This type of enhancement produces a binary image, which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example).

The two transformation functions used:

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

In Figure 6 (right), the second technique was used: a band of intensities around the mean intensity of the image was set to black, while the other intensities remain unchanged.

Fig 6 – Aortic angiogram and intensity-sliced versions.
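Both slicing approaches are one-liners on a numpy array (illustrative sketch; A and B are the band limits):

```python
import numpy as np

def slice_binary(r, A, B, L=256):
    """Approach 1: white inside [A, B], black elsewhere (binary output)."""
    return np.where((r >= A) & (r <= B), L - 1, 0)

def slice_keep(r, A, B, value=0):
    """Approach 2: set [A, B] to a chosen value, leave other intensities unchanged."""
    return np.where((r >= A) & (r <= B), value, r)
```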

Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2.

This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest order bit; plane 8 – the highest order bit).

The binary image for the 8th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding transformation function that maps all intensities between 0 and 127 to 0, and all levels between 128 and 255 to 1. The bit-slicing technique is useful for analyzing the relative importance of each bit in the image – it helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
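A sketch of bit-plane extraction for an 8-bit numpy image (an unsigned integer dtype is assumed):

```python
import numpy as np

def bit_planes(img):
    """Return the 8 bit planes of an 8-bit image; planes[0] is the lowest-order bit."""
    return [(img >> k) & 1 for k in range(8)]

# The highest-order plane (plane 8) equals thresholding at 128:
# ((img >> 7) & 1) is 1 exactly where img >= 128.
```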

  • DIP 1 2017
  • DIP 02 (2017)
Page 49: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Compression

reducing the storage required to save an image or the bandwidth required to transmit it

Morphological processing

bull tools for extracting image components that are useful in the representation and description of shape

bull a transition from processes that output images to processes that outputimage attributes

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Segmentation

bull partitioning an image into its constituents parts or objects

bull autonomous segmentation is one of the most difficult tasks of DIP

bull the more accurate the segmentation the more likley recognition is to succeed

Representation and description (almost always follows segmentation)

bull segmentation produces either the boundary of a region or all the poits in the region itself

bull converting the data produced by segmentation to a form suitable for computer processing

Digital Image ProcessingDigital Image Processing

Week 1Week 1

bull boundary representation the focus is on external shape characteristics such as corners or inflections

bull complete region the focus is on internal properties such as texture or skeletal shape

bull description is also called feature extraction ndash extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another

Object recognition

the process of assigning a label (eg bdquovehiclerdquo) to an object based on itsdescriptors

Knowledge database

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Simplified diagramof a cross sectionof the human eye

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Three membranes enclose the eye the cornea and sclera outer cover the choroid the retina

The cornea is a tough transparent tissue that covers the anterior surface of the eye

Continuous with the cornea the sclera is an opaque membrane that enclosesthe remainder of the optic globe

The choroid lies directly below the sclera This membrane contains a network of blood vessels (major nutrition of the eye) The choroid is pigmented and help reduce the amount of light entering the eyeThe choroid is divided (at its anterior extreme) into the ciliary body and the irisThe iris contracts and expands to control the amount of light

Digital Image ProcessingDigital Image Processing

Week 1Week 1

The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to ciliary body (60-70 water 6 fat protein) The lens is colored in slightly yellow The lens absorbs approximatively 8 of the visiblelight (infrared and ultraviolet light are absorbed by proteins in lens)

The innermost membrane is the retina When the eye is proper focusedlight from an object outside the eye is imaged on the retinaVision is possible because of the distribution of discrete light receptors on the surface of the retina cones and rods (6-7 milion cones 75-150 milion rods)

Cones located in the central part of the retina (fovea) they are sensitive to colors vision of detail each cone is link to its own nervecone vision = photopic or bright-light vision

Fovea = the place where the image of the object of interest falls on

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Rods distributed over al the retina surface several rods are contected toa single nerve not specialized in detail vision serve to give a general overall picture of the filed of viewnot involved in color visionsensitive to low level of illumination

Blind spot region without receptors

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image formation in the eye

Ordinary photographic camera the lens has fixed focal length focusing atvarious distances is done by modifying the distance between the lens and the image plane (were the film or imaging chip are located)

Human eye the distance between the lens and the retina (the imaging region)is fixed the focal length needed to achieve proper focus is obtained by varyingthe shape of the lens (the fibers in the ciliary body accomplish this flattening or thickening the lens for distant or near objects respectively

distance between lens and retina along visual axix = 17 mm

range of focal length = 14 mm to 17 mm

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Illustration of Mach band effectPercieved intensityis not a simple function of actual intensity

Digital Image ProcessingDigital Image Processing

Week 1Week 1

All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Optical illusions

Digital Image ProcessingDigital Image Processing

Week 1Week 1

ALBASTRU BLUEVERDE GREENGALBEN YELLOW ROSU REDPORTOCALIU ORANGEGALBEN YELLOWVISINIU BURGUNDYALB WHITE

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Lightbullachromatic or monochromatic light - light that is void of color the attribute of such light is its intensity or amount

gray level is used to describe monochromatic intensitybecause it ranges from black to grays and to white bull chromatic light spans the electromagnetic energy spectrumfrom approximately 043 to 079 μm

quantities that describe the quality of a chromatic light source radiance

the total amount of energy that flows from the light source and it isusually measured in watts (W) luminance

measured in lumens (lm) gives a measure of the amount of energy an observer perceives from a light source

Digital Image ProcessingDigital Image Processing

Week 1Week 1

For example light emitted from a source operating in the far infraredregion of the spectrum could have significant energy (radiance) butan observer would hardly perceive it its luminance would be almostzero

brightnessa subjective descriptor of light perception that is practicallyimpossible to measure It embodies the achromatic notion ofintensity and is one of the key factors in describing color sensation

Digital Image ProcessingDigital Image Processing

Week 1Week 1

the physical meaning is determined by the source of the image

( )f D f x y

Image generated from a physical process f(xy) ndash proportional to the energy radiated by the physical source

f(xy) ndash characterized by two components

i(xy) = illumination component the amount of source illumination incident on the scene being viewed

r(xy) = reflectance component the amount of illumination reflected by the objects in the scene

( ) ( ) ( )

0 ( ) 0 ( ) 1

f x y i x y r x y

i x y r x y

Digital Image ProcessingDigital Image Processing

Week 1Week 1

r(xy)=0 - total absorption r(xy)=1 - total reflectance

i(xy) ndash determined by the illumination source

r(xy) ndash determined by the characteristics of the imaged objects

is called gray (or intensity) scale

In practice

min 0 0 max min min min max max max( ) L l f x y L L i r L i r

indoor values without additional illuminationmin max10 1000L L

black whitemin max0 1 0 1 0 1L L L L l l L

min maxL L

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image Processing

Week 1

Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed

scene

converting a continuous image f to digital form

- digitizing (x y) is called sampling

- digitizing f(x y) is called quantization

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Continuous image projected onto a sensor array Result of image sampling and quantization

Digital Image Processing

Week 1

Representing Digital Images (xy) x = 01hellipM-1 y = 01hellipN-1 ndash spatial variables or spatial coordinates

(00) (01) (0 1)(10) (11) (1 1)

( )

( 10) ( 11) ( 1 1)

f f f Nf f f N

f x y

f M f M f M N

image element pixel

00 01 0 1

10 11 1 1

10 11 1 1

( ) ( )

N

i jN M N

i j

M M M N

a a aa f x i y j f i ja a a

Aa

a a a

f(00) ndash the upper left corner of the image

Digital Image Processing

Week 1

M N ge 0 L=2k

[0 1]i j i ja a L

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system Upper limit ndash determined by saturation lower limit - noise

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Number of bits required to store a digitized image

for 2 b M N k M N b N k

When an image can have 2k intensity levels the image is referred as a k-bit image 256 discrete intensity values ndash 8-bit image

Digital Image Processing

Week 1

Spatial and Intensity Resolution Spatial resolution ndash the smallest discernible detail in an image

Measures line pairs per unit distance dots (pixels) per unit distance

Image resolution = the largest number of discernible line pairs per unit distance

(eg 100 line pairs per mm)

Dots per unit distance are commonly used in printing and publishing

In US the measure is expressed in dots per inch (dpi)

(newspapers are printed with 75 dpi glossy brochures at 175 dpi)

Intensity resolution ndash the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations

L=2k ndash most common k = 8

Intensity resolution in practice is given by k (number of bits used to quantize intensity)

Digital Image Processing

Week 1

Fig1 Reducing spatial resolution 1250 dpi(upper left) 300 dpi (upper right)

150 dpi (lower left) 72 dpi (lower right)

Digital Image Processing

Week 1

Reducing the number of gray levels 256 128 64 32

Digital Image Processing

Week 1

Reducing the number of gray levels 16 8 4 2

Digital Image Processing

Week 1

Image Interpolation - used in zooming shrinking rotating and geometric corrections

Shrinking zooming ndash image resizing ndash image resampling methods

Interpolation is the process of using known data to estimate values at unknown locations

Suppose we have an image of size 500 times 500 pixels that has to be enlarged 15 times to

750times750 pixels One way to do this is to create an imaginary 750 times 750 grid with the

same spacing as the original and then shrink it so that it fits exactly over the original

image The pixel spacing in the 750 times 750 grid will be less than in the original image

Problem assignment of intensity-level in the new 750 times 750 grid

Nearest neighbor interpolation assign for every point in the new grid (750 times 750) the

intensity of the closest pixel (nearest neighbor) from the oldoriginal grid (500 times 500)

This technique has the tendency to produce undesirable effects like severe distortion of

straight edges

Digital Image Processing

Week 1

Bilinear interpolation ndash assign for the new (x y) location the following intensity ( )v x y a x b y c x y d

where the four coefficients are determined from the 4 equations in 4 unknowns that can

be written using the 4 nearest neighbors of point (x y)

Bilinear interpolation gives much better results than nearest neighbor interpolation with a

modest increase in computational effort

Bicubic interpolation ndash assign for the new (x y) location an intensity that involves the 16

nearest neighbors of the point 3 3

0 0

( ) i ji j

i jv x y c x y

The coefficients cij are obtained solving a 16x16 linear system

intensity levels of the 16 nearest neighbors of 3 3

0 0

( )i ji j

i jc x y x y

Digital Image Processing

Week 1

Generally bicubic interpolation does a better job of preserving fine detail than the

bilinear technique Bicubic interpolation is the standard used in commercial image editing

programs such as Adobe Photoshop and Corel Photopaint

Figure 2 (a) is the same as Fig 1 (d) which was obtained by reducing the resolution of

the 1250 dpi in Fig 1(a) to 72 dpi (the size shrank from 3692 times 2812 to 213 times 162) and

then zooming the reduced image back to its original size To generate Fig 1(d) nearest

neighbor interpolation was used (both for shrinking and zooming)

Figures 2(b) and (c) were generated using the same steps but using bilinear and bicubic

interpolation respectively Figures 2(d)+(e)+(f) were obtained by reducing the resolution

from 1250 dpi to 150 dpi (instead of 72 dpi)

Digital Image Processing

Week 1

Fig 2 ndash Interpolation examples for zooming and shrinking (nearest neighbor linear bicubic)

Digital Image Processing

Week 1

Neighbors of a Pixel

A pixel p at coordinates (x y) has 4 horizontal and vertical neighbors horizontal vertical( 1 ) ( 1 ) ( 1) ( 1)x y x y x y x y

This set of pixels called the 4-neighbors of p denoted by N4 (p)

The 4 diagonal neighbors of p have coordinates ( 1 1) ( 1 1) ( 1 1) ( 1 1)x y x y x y x y

and are denoted ND(p)

The horizontal vertical and diagonal neighbors are called the 8-neighbors of p denoted

N8 (p)

If (xy) is on the border of the image some of the neighbor locations in ND(p) and N8(p)

fall outside the image

Digital Image Processing

Week 1

Adjacency Connectivity Regions Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image V 01 (V=0 V=1)

- in a gray-scale image with 256 possible gray-levels V can be any subset of 0255

We consider 3 types of adjacency

(a) 4-adjacency two pixels p and q with values from V are 4-adjacent if 4( )q N p

(b) 8-adjacency two pixels p and q with values from V are 8-adjacent if 8( )q N p

(c) m-adjacency (mixed adjacency) two pixels p and q with values from V are

m-adjacent if

4( )q N p or

( )Dq N p and the set 4 4( ) ( )N p N q has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency It is introduced to eliminate the

ambiguities that often arise when 8-adjacency is used Consider the example

Digital Image Processing

Week 1

binary image

0 1 1 0 1 1 0 1 1

1 0 1 0 0 1 0 0 1 0

0 0 1 0 0 1 0 0 1

V

The three pixels at the top (first line) in the above example show multiple (ambiguous)

8-adjacency as indicated by the dashed lines This ambiguity is removed by using

m-adjacency

Digital Image Processing

Week 1

A (digital) path (or curve) from pixel p with coordinates (xy) to q with coordinates (st)

is a sequence of distinct pixels with coordinates

and are adjacent 0 0 1 1

1 1

( ) ( ) ( ) ( ) ( )( ) ( ) 12

n n

i i i i

x y x y x y x y s tx y x y i n

The length of the path is n If 0 0( ) ( )n nx y x y the path is closed

Depending on the type of adjacency considered the paths are 4- 8- or m-paths

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image R is a region of the image if R is a connected set

Two regions R1 and R2 are said to be adjacent if 1 2R R form a connected set Regions

that are not adjacent are said to be disjoint When referring to regions only 4- and

8-adjacency are considered

Digital Image Processing

Week 1

Suppose that an image contains K disjoint regions 1 kR k K none of which

touches the image border

the complement of 1

( )K

cu k u u

k

R R R R

We call all the points in Ru the foreground of the image and the points in ( )cuR the

background of the image

The boundary (border or contour) of a region R is the set of points that are adjacent to

points in the complement of R (R)c The border of an image is the set of pixels in the

region that have at least one background neighbor This definition is referred to as the

inner border to distinguish it from the notion of outer border which is the corresponding

border in the background

Digital Image Processing

Week 1

Distance measures

For pixels p q and z with coordinates (xy) (st) and (vw) respectively D is a distance

function or metric if

(a) D(p q) ge 0 D(p q) = 0 iff p=q

(b) D(p q) = D(q p)

(c) D(p z) le D(p q) + D(q z)

The Euclidean distance between p and q is defined as 1

2 2 2 22( ) ( ) ( ) ( ) ( )eD p q x s y t x s y t

The pixels q for which ( )eD p q r are the points contained in a disk of radius r

centered at (x y)

Digital Image Processing

Week 1

The D4 distance (also called city-block distance) between p and q is defined as

4( ) | | | |D p q x s y t

The pixels q for which 4( )D p q r form a diamond centered at (xy)

4

22 1 2

2 2 1 0 1 22 1 2

2

D

The pixels with D4 = 1 are the 4-neighbors of (x y)

The D8 distance (called the chessboard distance) between p and q is defined as

8( ) max| | | |D p q x s y t

The pixels q for which 8( )D p q r form a square centered at (x y)

Digital Image Processing

Week 1

8

2 2 2 2 22 1 1 1 2

2 2 1 0 1 22 1 1 1 22 2 2 2 2

D

The pixels with D8 = 1 are the 8-neighbors of (x y)

D4 and D8 distances are independent of any paths that might exist between p and q

because these distances involve only the coordinates of the point

Digital Image Processing

Week 1

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

11 12 11 12

21 22 21 22

a a b ba a b b

Array product

11 12 11 12 11 11 12 12

21 22 21 22 21 21 22 21

a a b b a b a ba a b b a b a b

Matrix product

11 12 11 12 11 11 12 21 11 12 12 21

21 22 21 22 21 11 22 21 21 12 22 22

a a b b a b a b a b a ba a b b a b a b a b a b

We assume array operations unless stated otherwise

Digital Image Processing

Week 1

Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether it is

linear or nonlinear

( ) ( )H f x y g x y

H is said to be a linear operator if

images1 2 1 2

1 2

( ) ( ) ( ) ( )

H a f x y b f x y a H f x y b H f x y

a b f f

Example of nonlinear operator

the maximum value of the pixels of image max ( )H f f x y f

1 2

0 2 6 5 1 1

2 3 4 7f f a b

Digital Image Processing

Week 1

1 2

0 2 6 5 6 3max max 1 ( 1) max 2

2 3 4 7 2 4a f b f

0 2 6 51 max ( 1) max 3 ( 1)7 4

2 3 4 7

Arithmetic Operations in Image Processing

Let g(xy) denote a corrupted image formed by the addition of noise ( ) ( ) ( )g x y f x y x y

f(xy) ndash the noiseless image η(xy) the noise uncorrelated and has 0 average value

For a random variable z with mean m E[(z-m)2] is the variance (E( ∙ ) is the expected

value) The covariance of two random variables z1 and z2 is defined as E[(z1-m1) (z2-m2)]

The two random variables are uncorrelated when their covariance is 0

Digital Image Processing

Week 1

Objective reduce noise by adding a set of noisy images ( )ig x y (technique frequently

used in image enhancement)

1

1( ) ( )K

ii

g x y g x yK

If the noise satisfies the properties stated above we have

2 2( ) ( )

1( ) ( ) g x y x yE g x y f x yK

( ( ))E g x y is the expected value of g and and 2 2( ) ( )g x y x y are the variances of

and g respectively The standard deviation (square root of the variance) at any point in

the average image is

( ) ( )1

g x y x yK

Digital Image Processing

Week 1

As K increases the variability (as measured by the variance or the standard deviation) of

the pixel values at each location (x y) decreases Because ( ) ( )E g x y f x y this

means that ( )g x y approaches f(x y) as the number of noisy images used in the

averaging process increases

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis Figure 226(a) shows an 8-bit image in which corruption

was simulated by adding to it Gaussian noise with zero mean and a standard deviation of

64 intensity levels Figures 226(b)-(f) show the result of averaging 5 10 20 50 and 100

images respectively

Digital Image Processing

Week 1

Fig 3 Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner) Results of averaging 5 10 20 50

100 noisy images

a b c d e f

Digital Image Processing

Week 1

A frequent application of image subtraction is in the enhancement of differences between

images

(a) (b) (c)

Fig 4 (a) Infrared image of Washington DC area (b) Image obtained from (a) by setting to zero the least

significant bit of each pixel (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a) The two images seem almost the same Figure 4(c) is the difference between

images (a) and (b) Black (0) values in Figure (c) indicate locations where there is no

difference between images (a) and (b)

Digital Image Processing

Week 1

Mask mode radiography ( ) ( ) ( )g x y f x y h x y

h(x y) the mask is an X-ray image of a region of a patientrsquos body captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source The procedure consists of injecting an X-ray contrast medium into the patientrsquos

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed

Digital Image Processing

Week 1

a b c d Fig 5 ndash Angiography ndash subtraction example (a) ndash mask image (b) ndash live image (c) ndash difference between (a) and (b) (d) - image (c) enhanced

Digital Image Processing

Week 1

An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form ( ) ( ) ( )g x y f x y h x y

f(x y) ndash the ldquoperfect imagerdquo h(x y) ndash the shading function

When the shading function is known

( )( )( )

g x yf x yh x y

h(x y) is unknown but we have access to the imaging system we can obtain an

approximation to the shading function by imaging a target of constant intensity When the

sensor is not available often the shading pattern can be estimated from the image

Digital Image Processing

Week 1

(a) (b) (c)

Fig 6 Shading correction (a) ndash Shaded image of a tungsten filament magnified 130 times (b) - shading pattern (c) corrected image

Digital Image Processing

Week 1

Another use of image multiplication is in masking, also called region of interest (ROI), operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of the ROI can be arbitrary, although usually it is rectangular.

Fig 7 – (a) digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b).
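Both operations reduce to elementwise array products or quotients. A minimal numpy sketch (the function names here are illustrative, not from any toolbox), assuming g and h are grayscale float arrays and h is known and nonzero:

    import numpy as np

    def shading_correct(g, h):
        # f(x, y) = g(x, y) / h(x, y); h assumed known and nonzero
        return g / h

    def apply_roi(f, mask):
        # mask holds 1 inside the region(s) of interest, 0 elsewhere
        return f * mask

    # toy example: correct shading, then keep only the left column as ROI
    g = np.array([[100., 120.], [90., 110.]])
    h = np.array([[0.8, 1.0], [0.9, 1.1]])
    mask = np.array([[1., 0.], [1., 0.]])
    print(apply_roi(shading_correct(g, h), mask))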


In practice, most images are displayed using 8 bits, so the image values are in the range [0, 255]. For TIFF and JPEG images, conversion to this range is automatic, but the conversion depends on the system used. The difference of two images can produce an image with values in the range [-255, 255]; the addition of two images gives the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure is to compute:

f_m = f − min(f)

f_s = K · f_m / max(f_m)

which yields values in the full range [0, K] (K = 255 for 8-bit displays).
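A small numpy sketch of this rescaling, assuming f is a float array (for example, the difference of two images):

    import numpy as np

    def rescale(f, K=255):
        # shift so the minimum becomes 0, then scale so the maximum becomes K
        fm = f - f.min()
        return K * fm / fm.max()   # assumes f is not constant (fm.max() > 0)

    d = np.array([[-255., 0.], [128., 255.]])   # e.g., an image difference
    print(rescale(d))                           # values now span [0, 255]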


Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations:

- single-pixel operations
- neighborhood operations
- geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels: s = T(z), where z is the intensity of a pixel in the original image and s is the intensity of the corresponding pixel in the processed image.


Neighborhood operations

Let S_xy denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y) based on the values of the intensities of the points in S_xy. For example, if S_xy is a rectangular neighborhood of size m × n centered at (x, y), we can assign the new value of intensity by computing the average value of the pixels in S_xy:

g(x, y) = (1 / (m·n)) · Σ_{(r,c) ∈ S_xy} f(r, c)

The net effect is to perform local blurring of the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
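A direct (unoptimized) sketch of this m × n local averaging, assuming a grayscale float image; borders are left unfiltered for brevity:

    import numpy as np

    def local_average(f, m, n):
        # g(x, y) = mean of the m x n neighborhood of (x, y)
        M, N = f.shape
        g = f.copy()
        hm, hn = m // 2, n // 2
        for x in range(hm, M - hm):
            for y in range(hn, N - hn):
                g[x, y] = f[x - hm:x + hm + 1, y - hn:y + hn + 1].mean()
        return g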


Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image;
- these transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules).

A geometric transformation consists of 2 basic operations:

1. a spatial transformation of coordinates;
2. intensity interpolation, which assigns intensity values to the spatially transformed pixels.

The coordinate system transformation: (x, y) = T[(v, w)]

(v, w) – pixel coordinates in the original image
(x, y) – pixel coordinates in the transformed image


T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform:

[x y 1] = [v w 1] · T = [v w 1] · [ t11 t12 0 ; t21 t22 0 ; t31 t32 1 ]

that is,

x = t11·v + t21·w + t31
y = t12·v + t22·w + t32        (AT)

This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3×3 matrix equal to the matrix product of the scaling, rotation, and translation matrices from Table 1.


Table 1 – Affine transformations (identity, scaling, rotation, translation, shear matrices).


The preceding transformations relocate pixels of an image to new locations. To complete the process, we have to assign intensity values to those locations. This task is done using intensity interpolation (nearest neighbor, bilinear, or bicubic interpolation).

In practice, we can use equation (AT) in two basic ways.

Forward mapping: scan the pixels of the input image and, for each (v, w), compute the spatial location (x, y) of the corresponding pixel in the new image using (AT) directly.

Problems:

- intensity assignment when 2 or more pixels in the original image are transformed to the same location in the output image;
- some output locations have no correspondent in the original image (no intensity assignment).


Inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image as (v, w) = T^(-1)[(x, y)]. Then interpolate among the nearest input pixels to determine the intensity of the output pixel value.

Inverse mappings are more efficient to implement than forward mappings and are used in numerous commercial implementations of spatial transformations (MATLAB, for example).
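A minimal sketch of inverse mapping for an affine transform, using the row-vector convention [x y 1] = [v w 1]·T from (AT) and nearest-neighbor interpolation; out-of-range pixels are set to 0:

    import numpy as np

    def affine_inverse_map(f, T):
        # scan output locations (x, y); pull intensities from the input via T^(-1)
        Tinv = np.linalg.inv(T)
        M, N = f.shape
        g = np.zeros_like(f)
        for x in range(M):
            for y in range(N):
                v, w, _ = np.array([x, y, 1.0]) @ Tinv
                vi, wi = int(round(v)), int(round(w))   # nearest neighbor
                if 0 <= vi < M and 0 <= wi < N:
                    g[x, y] = f[vi, wi]
        return g

    # example: scaling matrix (factor 2 in both directions)
    T = np.array([[2.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 1.0]])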


Image registration – aligning two or more images of the same scene. In image registration we have available the input and output images, but the specific transformation that produced the output image from the input is generally unknown. The problem is to estimate the transformation function and then use it to register the two images.

- it may be of interest to align (register) two or more images taken at approximately the same time but using different imaging systems (an MRI scanner and a PET scanner, for example);
- or to align images of a given location taken by the same instrument at different moments of time (satellite images).

One approach is to use tie points (also called control points), which are corresponding points whose locations are known precisely in the input and reference images.


How to select tie points:

- interactively selecting them;
- using algorithms that try to detect these points automatically;
- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors; these objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points.

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points both on the input image and on the reference image. A simple model based on a bilinear approximation is given by:

x = c1·v + c2·w + c3·v·w + c4
y = c5·v + c6·w + c7·v·w + c8

where (v, w) and (x, y) are the coordinates of corresponding tie points (the 4 pairs yield an 8×8 linear system for the coefficients c_i).
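A sketch of estimating the eight coefficients with numpy; vw holds the 4 input-image tie points and xy the reference-image ones (both assumed given). The 8×8 system decouples into two 4×4 systems, one for x and one for y:

    import numpy as np

    def fit_bilinear(vw, xy):
        # vw, xy: arrays of shape (4, 2) with corresponding tie points
        A = np.array([[v, w, v * w, 1.0] for v, w in vw])
        cx = np.linalg.solve(A, xy[:, 0])   # c1..c4
        cy = np.linalg.solve(A, xy[:, 1])   # c5..c8
        return cx, cy

    vw = np.array([[0, 0], [0, 10], [10, 0], [10, 10]], float)
    xy = np.array([[1, 2], [1, 13], [12, 2], [12, 14]], float)
    cx, cy = fit_bilinear(vw, xy)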


When 4 tie points are insufficient to obtain satisfactory registration, an approach used frequently is to select a larger number of tie points and, using this new set, subdivide the image into rectangular subregions marked by groups of 4 tie points. On each subregion we apply the transformation model described above. The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometric distortion.


(a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c).


Probabilistic Methods

z_i = the values of all possible intensities in an M×N digital image, i = 0, 1, …, L−1.

p(z_k) = the probability that the intensity level z_k occurs in the given image:

p(z_k) = n_k / (M·N)

where n_k is the number of times that intensity z_k occurs in the image (M·N is the total number of pixels in the image). Note that

Σ_{k=0}^{L−1} p(z_k) = 1

The mean (average) intensity of an image is given by

m = Σ_{k=0}^{L−1} z_k · p(z_k)


The variance of the intensities is

σ² = Σ_{k=0}^{L−1} (z_k − m)² · p(z_k)

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

μ_n(z) = Σ_{k=0}^{L−1} (z_k − m)^n · p(z_k)

(so μ_0(z) = 1, μ_1(z) = 0, μ_2(z) = σ²)

μ_3(z) > 0: the intensities are biased to values higher than the mean;
μ_3(z) < 0: the intensities are biased to values lower than the mean;
μ_3(z) = 0: the intensities are distributed approximately equally on both sides of the mean.

Fig 1 – (a) Low contrast; (b) medium contrast; (c) high contrast.

Figure 1(a) – standard deviation 14.3 (variance = 204.5)
Figure 1(b) – standard deviation 31.6 (variance = 998.6)
Figure 1(c) – standard deviation 49.2 (variance = 2420.6)
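These quantities are straightforward to compute from the image histogram; a numpy sketch for an 8-bit image (L = 256), assuming img is a uint8 array:

    import numpy as np

    def intensity_stats(img, L=256):
        # p(z_k) = n_k / (M*N); mean and variance follow the formulas above
        p = np.bincount(img.ravel(), minlength=L) / img.size
        z = np.arange(L)
        m = (z * p).sum()
        var = ((z - m) ** 2 * p).sum()
        return m, var, np.sqrt(var)

    img = (np.random.rand(64, 64) * 256).astype(np.uint8)
    print(intensity_stats(img))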


Intensity Transformations and Spatial Filtering

g(x, y) = T[f(x, y)]

f(x, y) – input image; g(x, y) – output image; T – an operator on f, defined over a neighborhood of (x, y).

The neighborhood of the point (x, y), S_xy, usually is rectangular, centered on (x, y), and much smaller in size than the image.


Spatial filtering: the operator T (the neighborhood and the operation applied on it) is called a spatial filter (spatial mask, kernel, template, or window).

When S_xy has size 1×1 (a single pixel), T becomes an intensity (gray-level or mapping) transformation function:

s = T(r)

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2 (left): T produces an output image of higher contrast than the original by darkening the intensity levels below k and brightening the levels above k – this technique is called contrast stretching.

Fig 2 – Intensity transformation functions: left – contrast stretching; right – thresholding function.


Figure 2 (right): T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L−1] is obtained using the function

s = T(r) = L − 1 − r

- the equivalent of a photographic negative;
- a technique suited for enhancing white or gray detail embedded in dark regions of an image.


Left: original image; right: negative image.


Log Transformations: s = T(r) = c · log(1 + r), where c is a constant and r ≥ 0.

Some basic intensity transformation functions


This transformation maps a narrow range of low intensity values in the input into a wider range of output levels. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values; the opposite is true of the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a) – intensity values in the range 0 to 1.5 × 10^6.
Figure 4(b) – log transformation of Figure 4(a) with c = 1 – range 0 to 6.2.

Fig 4 – (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.


Power-Law (Gamma) Transformations

s = T(r) = c · r^γ (c, γ – positive constants; sometimes written s = c · (r + ε)^γ to account for an offset)

Plots of the gamma transformation for different values of γ (c = 1).


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. Curves with γ > 1 have the opposite effect of those generated with γ < 1. For c = γ = 1 we get the identity transformation.

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
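A compact sketch of the three point transformations above (negative, log, gamma) for an 8-bit image; the normalization of r to [0, 1] before the power law is a convention assumed here:

    import numpy as np

    L = 256

    def negative(r):
        return (L - 1) - r.astype(np.int32)

    def log_transform(r, c=1.0):
        return c * np.log1p(r.astype(np.float64))    # s = c * log(1 + r)

    def gamma_transform(r, gamma, c=1.0):
        rn = r.astype(np.float64) / (L - 1)          # normalize to [0, 1]
        return (L - 1) * c * rn ** gamma

    r = np.arange(0, 256, 64, dtype=np.uint8)
    print(negative(r), gamma_transform(r, 0.4))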


(a) aerial image; (b)-(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively.


Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so that it spans the full intensity range of the recording tool or display device.

Fig 5 – (a) form of the transformation function; (b) low-contrast image; (c) result of contrast stretching; (d) result of thresholding.


T(r) =
  (s1 / r1) · r,                                   r ∈ [0, r1]
  s1 + ((s2 − s1) / (r2 − r1)) · (r − r1),         r ∈ (r1, r2]
  s2 + ((L − 1 − s2) / (L − 1 − r2)) · (r − r2),   r ∈ (r2, L − 1]


r1 = s1 and r2 = s2: identity transformation (no change);
r1 = r2, s1 = 0, s2 = L − 1: thresholding function.

Figure 5(b) shows an 8-bit image with low contrast. Figure 5(c) shows the result of contrast stretching, obtained by setting the parameters (r1, s1) = (r_min, 0) and (r2, s2) = (r_max, L − 1), where r_min and r_max denote the minimum and maximum gray levels in the image, respectively. Thus, the transformation function stretched the levels linearly from their original range to the full range [0, L−1]. Figure 5(d) shows the result of using the thresholding function, with (r1, s1) = (m, 0) and (r2, s2) = (m, L − 1), where m is the mean gray level in the image. The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.
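A sketch of the piecewise-linear stretch using np.interp, which interpolates linearly between the control points (0, 0), (r1, s1), (r2, s2), (L−1, L−1); full-range stretching is the special case (r1, s1) = (r_min, 0), (r2, s2) = (r_max, L−1):

    import numpy as np

    def contrast_stretch(r, r1, s1, r2, s2, L=256):
        # piecewise-linear mapping through (0,0), (r1,s1), (r2,s2), (L-1,L-1)
        return np.interp(r, [0, r1, r2, L - 1], [0, s1, s2, L - 1])

    img = np.array([[60, 100], [140, 180]], dtype=np.uint8)
    out = contrast_stretch(img, img.min(), 0, img.max(), 255)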


Intensity-level slicing

- highlighting a specific range of intensities in an image.

There are two approaches to intensity-level slicing:

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities (the first transformation function shown with Fig 6 below);
2. brighten (or darken) the desired range of intensities but leave unchanged all other intensities in the image (the second transformation function shown with Fig 6 below).


Figure 6 (left) – aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the scale of intensities; this type of enhancement produces a binary image, which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example). In Figure 6 (right) the second technique was used: a band of intensities in the mid-gray range around the mean intensity was set to black, while the other intensities remain unchanged.

(The corresponding transformation functions: one highlights intensity range [A, B] and reduces all other intensities to a lower level; the other highlights range [A, B] and preserves all other intensities.)

Fig 6 – Aortic angiogram and intensity-sliced versions.
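Both approaches are a few lines of boolean indexing; a sketch assuming an 8-bit image and a band [A, B]:

    import numpy as np

    def slice_binary(img, A, B, hi=255, lo=0):
        # approach 1: white inside [A, B], black elsewhere
        return np.where((img >= A) & (img <= B), hi, lo).astype(np.uint8)

    def slice_preserve(img, A, B, hi=255):
        # approach 2: brighten [A, B], leave the rest unchanged
        out = img.copy()
        out[(img >= A) & (img <= B)] = hi
        return out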


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2. This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit; plane 8 – the highest-order bit).


The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation that maps all intensities between 0 and 127 to 0 and all levels between 128 and 255 to 1.

Bit-plane slicing is useful for analyzing the relative importance of each bit in the image, which helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
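Extracting plane k (k = 1 lowest, …, 8 highest) is a shift and a mask; a numpy sketch:

    import numpy as np

    def bit_plane(img, k):
        # k-th bit plane of an 8-bit image: k = 1 (lowest) .. 8 (highest)
        return (img >> (k - 1)) & 1

    img = np.array([[130, 7], [255, 64]], dtype=np.uint8)
    print(bit_plane(img, 8))   # 1 exactly where intensity >= 128, matching the threshold view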



Segmentation

• partitioning an image into its constituent parts or objects;
• autonomous segmentation is one of the most difficult tasks of DIP;
• the more accurate the segmentation, the more likely recognition is to succeed.

Representation and description (almost always follows segmentation)

• segmentation produces either the boundary of a region or all the points in the region itself;
• representation and description convert the data produced by segmentation to a form suitable for computer processing.


• boundary representation: the focus is on external shape characteristics, such as corners or inflections;
• complete region representation: the focus is on internal properties, such as texture or skeletal shape;
• description is also called feature extraction – extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another.

Object recognition: the process of assigning a label (e.g., "vehicle") to an object based on its descriptors.

Knowledge database


Simplified diagram of a cross section of the human eye.


Three membranes enclose the eye: the cornea and sclera (outer cover), the choroid, and the retina.

The cornea is a tough, transparent tissue that covers the anterior surface of the eye. Continuous with the cornea, the sclera is an opaque membrane that encloses the remainder of the optic globe.

The choroid lies directly below the sclera. This membrane contains a network of blood vessels (the major source of nutrition of the eye). The choroid is pigmented and helps reduce the amount of light entering the eye. At its anterior extreme, the choroid is divided into the ciliary body and the iris. The iris contracts and expands to control the amount of light.


The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to the ciliary body (60-70% water, 6% fat, and protein). The lens is slightly yellow and absorbs approximately 8% of the visible light (infrared and ultraviolet light are absorbed by proteins in the lens).

The innermost membrane is the retina. When the eye is properly focused, light from an object outside the eye is imaged on the retina. Vision is possible because of the distribution of discrete light receptors on the surface of the retina: cones and rods (6-7 million cones, 75-150 million rods).

Cones: located in the central part of the retina (the fovea); sensitive to colors; provide vision of detail; each cone is linked to its own nerve. Cone vision = photopic or bright-light vision.

Fovea = the place where the image of the object of interest falls.


Rods: distributed over all the retina surface; several rods are connected to a single nerve; not specialized in detail vision; serve to give a general, overall picture of the field of view; not involved in color vision; sensitive to low levels of illumination.

Blind spot: region without receptors.


Image formation in the eye

In an ordinary photographic camera, the lens has a fixed focal length; focusing at various distances is done by modifying the distance between the lens and the image plane (where the film or imaging chip is located).

In the human eye, the distance between the lens and the retina (the imaging region) is fixed; the focal length needed to achieve proper focus is obtained by varying the shape of the lens (the fibers in the ciliary body accomplish this, flattening or thickening the lens for distant or near objects, respectively).

Distance between lens and retina along the visual axis: 17 mm. Range of focal lengths: 14 mm to 17 mm.


Illustration of the Mach band effect: perceived intensity is not a simple function of actual intensity.


All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter


Optical illusions


(Color-word slide, Romanian with English equivalents: ALBASTRU = BLUE, VERDE = GREEN, GALBEN = YELLOW, ROSU = RED, PORTOCALIU = ORANGE, VISINIU = BURGUNDY, ALB = WHITE.)


Light

• achromatic or monochromatic light – light that is void of color; the only attribute of such light is its intensity, or amount. The term gray level is used to describe monochromatic intensity, because it ranges from black, to grays, and finally to white.
• chromatic light spans the electromagnetic energy spectrum from approximately 0.43 to 0.79 μm.

Quantities that describe the quality of a chromatic light source:
• radiance: the total amount of energy that flows from the light source, usually measured in watts (W);
• luminance: measured in lumens (lm), gives a measure of the amount of energy an observer perceives from a light source.


For example, light emitted from a source operating in the far-infrared region of the spectrum could have significant energy (radiance), but an observer would hardly perceive it: its luminance would be almost zero.

• brightness: a subjective descriptor of light perception that is practically impossible to measure. It embodies the achromatic notion of intensity and is one of the key factors in describing color sensation.


The physical meaning of the values f(x, y) is determined by the source of the image. For an image generated from a physical process, f(x, y) is proportional to the energy radiated by the physical source.

f(x, y) is characterized by two components:

i(x, y) = the illumination component: the amount of source illumination incident on the scene being viewed;
r(x, y) = the reflectance component: the amount of illumination reflected by the objects in the scene;

f(x, y) = i(x, y) · r(x, y), with 0 < i(x, y) < ∞ and 0 < r(x, y) < 1.


r(x, y) = 0 corresponds to total absorption and r(x, y) = 1 to total reflectance. i(x, y) is determined by the illumination source; r(x, y) is determined by the characteristics of the imaged objects.

In practice, the intensity l = f(x0, y0) of a monochrome image at (x0, y0) satisfies

L_min ≤ l ≤ L_max, with L_min = i_min · r_min and L_max = i_max · r_max

(typical indoor values without additional illumination: L_min ≈ 10, L_max ≈ 1000). The interval [L_min, L_max] is called the gray (or intensity) scale. In practice it is shifted to [0, L−1], where l = 0 is considered black and l = L−1 white; all intermediate values are shades of gray.


Image Sampling and Quantization

The output of most sensors is a continuous voltage waveform related to the sensed scene. Converting a continuous image f to digital form requires:

- digitizing the coordinate values (x, y) – called sampling;
- digitizing the amplitude values f(x, y) – called quantization.


(Left) Continuous image projected onto a sensor array; (right) result of image sampling and quantization.


Representing Digital Images

(x, y), x = 0, 1, …, M−1, y = 0, 1, …, N−1 – spatial variables or spatial coordinates.

The image can be written as an M × N matrix:

f(x, y) = [ f(0,0)     f(0,1)     …  f(0,N−1)
            f(1,0)     f(1,1)     …  f(1,N−1)
            …
            f(M−1,0)   f(M−1,1)   …  f(M−1,N−1) ]

Each element of this matrix is called an image element, or pixel. Equivalently, in matrix notation:

A = [a_ij], where a_ij = f(x = i, y = j) = f(i, j)

f(0,0) – the upper left corner of the image.


M, N > 0; L = 2^k; a_ij ∈ [0, L−1].

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system. The upper limit is determined by saturation, the lower limit by noise.


Number of bits required to store a digitized image:

b = M × N × k; for M = N, b = N² · k.

When an image can have 2^k intensity levels, the image is referred to as a k-bit image; an image with 256 discrete intensity values is an 8-bit image.


Spatial and Intensity Resolution

Spatial resolution – the smallest discernible detail in an image.

Measures: line pairs per unit distance; dots (pixels) per unit distance.

Image resolution = the largest number of discernible line pairs per unit distance (e.g., 100 line pairs per mm). Dots per unit distance is the measure commonly used in printing and publishing; in the US this measure is expressed in dots per inch (dpi). (Newspapers are printed with 75 dpi, glossy brochures at 175 dpi.)

Intensity resolution – the smallest discernible change in intensity level. The number of intensity levels (L) is determined by hardware considerations: L = 2^k, most commonly with k = 8. In practice, intensity resolution is given by k, the number of bits used to quantize intensity.


Fig 1 – Reducing spatial resolution: 1250 dpi (upper left), 300 dpi (upper right), 150 dpi (lower left), 72 dpi (lower right).


Reducing the number of gray levels: 256, 128, 64, 32.


Reducing the number of gray levels: 16, 8, 4, 2.


Image Interpolation

Interpolation is used in zooming, shrinking, rotating, and geometric corrections. Shrinking and zooming are image resizing operations, i.e., image resampling methods. Interpolation is the process of using known data to estimate values at unknown locations.

Suppose an image of size 500 × 500 pixels has to be enlarged 1.5 times, to 750 × 750 pixels. One way to do this is to create an imaginary 750 × 750 grid with the same spacing as the original and then shrink it so that it fits exactly over the original image. The pixel spacing in the 750 × 750 grid will be less than in the original image. The problem is the assignment of intensity levels in the new 750 × 750 grid.

Nearest neighbor interpolation: assign to every point in the new grid (750 × 750) the intensity of the closest pixel (nearest neighbor) in the original grid (500 × 500). This technique has the tendency to produce undesirable artifacts, such as severe distortion of straight edges.


Bilinear interpolation: assign to the new (x, y) location the intensity

v(x, y) = a·x + b·y + c·x·y + d

where the four coefficients are determined from the 4 equations in 4 unknowns that can be written using the 4 nearest neighbors of point (x, y). Bilinear interpolation gives much better results than nearest neighbor interpolation, with a modest increase in computational effort.
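A sketch of bilinear interpolation at a fractional location, written directly as the weighted average of the 4 nearest neighbors (equivalent to fitting v(x, y) = a·x + b·y + c·x·y + d on the surrounding unit cell):

    import numpy as np

    def bilinear(f, x, y):
        # f: 2-D array; (x, y): fractional coordinates inside the image
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        x1 = min(x0 + 1, f.shape[0] - 1)
        y1 = min(y0 + 1, f.shape[1] - 1)
        dx, dy = x - x0, y - y0
        return ((1 - dx) * (1 - dy) * f[x0, y0] + dx * (1 - dy) * f[x1, y0]
                + (1 - dx) * dy * f[x0, y1] + dx * dy * f[x1, y1])

    f = np.array([[10.0, 20.0], [30.0, 40.0]])
    print(bilinear(f, 0.5, 0.5))   # 25.0, the average of the 4 neighbors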

Bicubic interpolation: assign to the new (x, y) location an intensity that involves the 16 nearest neighbors of the point:

v(x, y) = Σ_{i=0}^{3} Σ_{j=0}^{3} c_ij · x^i · y^j

The 16 coefficients c_ij are obtained by solving a 16×16 linear system written using the intensity levels of the 16 nearest neighbors of (x, y).


Generally, bicubic interpolation does a better job of preserving fine detail than the bilinear technique. Bicubic interpolation is the standard used in commercial image editing programs, such as Adobe Photoshop and Corel Photopaint.

Figure 2(a) is the same as Fig 1(d), which was obtained by reducing the resolution of the 1250 dpi image in Fig 1(a) to 72 dpi (the size shrank from 3692 × 2812 to 213 × 162 pixels) and then zooming the reduced image back to its original size, using nearest neighbor interpolation both for shrinking and zooming. Figures 2(b) and (c) were generated using the same steps but with bilinear and bicubic interpolation, respectively. Figures 2(d)-(f) were obtained by reducing the resolution from 1250 dpi to 150 dpi (instead of 72 dpi).


Fig 2 – Interpolation examples for zooming and shrinking (nearest neighbor, bilinear, bicubic).


Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors:

(x+1, y), (x−1, y), (x, y+1), (x, y−1)

This set of pixels, called the 4-neighbors of p, is denoted by N4(p).

The 4 diagonal neighbors of p have coordinates

(x+1, y+1), (x+1, y−1), (x−1, y+1), (x−1, y−1)

and are denoted ND(p). The horizontal, vertical, and diagonal neighbors together are called the 8-neighbors of p, denoted N8(p). If (x, y) is on the border of the image, some of the neighbor locations in ND(p) and N8(p) fall outside the image.


Adjacency, Connectivity, Regions, Boundaries

Denote by V the set of intensity values used to define adjacency:

- in a binary image, V ⊆ {0, 1} (e.g., V = {1});
- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, 1, …, 255}.

We consider 3 types of adjacency:

(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if q ∈ N4(p);
(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if q ∈ N8(p);
(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if q ∈ N4(p), or q ∈ ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.

Mixed adjacency is a modification of 8-adjacency. It is introduced to eliminate the ambiguities that often arise when 8-adjacency is used. Consider the example below.


(binary image, V = {1}; the same 3×3 arrangement is shown three times: the pixel values, the 8-adjacency paths, and the m-adjacency path)

0 1 1
0 1 0
0 0 1

The three pixels at the top (first line) in the above example show multiple (ambiguous) 8-adjacency, as indicated by the dashed lines in the original figure. This ambiguity is removed by using m-adjacency.


A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

(x0, y0), (x1, y1), …, (xn, yn)

where (x0, y0) = (x, y), (xn, yn) = (s, t), and the pixels (xi, yi) and (xi−1, yi−1) are adjacent for i = 1, 2, …, n.

The length of the path is n. If (x0, y0) = (xn, yn), the path is closed. Depending on the type of adjacency considered, the paths are 4-, 8-, or m-paths.

Depending on the type of adjacency considered the paths are 4- 8- or m-paths

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image R is a region of the image if R is a connected set

Two regions R1 and R2 are said to be adjacent if 1 2R R form a connected set Regions

that are not adjacent are said to be disjoint When referring to regions only 4- and

8-adjacency are considered


Suppose that an image contains K disjoint regions R_k, k = 1, …, K, none of which touches the image border. Let R_u = ∪_{k=1}^{K} R_k, and denote by (R_u)^c its complement. We call all the points in R_u the foreground of the image, and all the points in (R_u)^c the background of the image.

The boundary (border, or contour) of a region R is the set of points that are adjacent to points in the complement of R, (R)^c. In other words, the border of a region is the set of pixels in the region that have at least one background neighbor. This definition is referred to as the inner border, to distinguish it from the notion of outer border, which is the corresponding border in the background.


Distance measures

For pixels p, q, and z, with coordinates (x, y), (s, t), and (v, w), respectively, D is a distance function, or metric, if

(a) D(p, q) ≥ 0, and D(p, q) = 0 iff p = q;
(b) D(p, q) = D(q, p);
(c) D(p, z) ≤ D(p, q) + D(q, z).

The Euclidean distance between p and q is defined as

D_e(p, q) = [(x − s)² + (y − t)²]^(1/2)

The pixels q for which D_e(p, q) ≤ r are the points contained in a disk of radius r centered at (x, y).


The D4 distance (also called city-block distance) between p and q is defined as

D4(p, q) = |x − s| + |y − t|

The pixels q for which D4(p, q) ≤ r form a diamond centered at (x, y); for example, the pixels with D4 ≤ 2:

        2
    2   1   2
2   1   0   1   2
    2   1   2
        2

The pixels with D4 = 1 are the 4-neighbors of (x, y).

The D8 distance (called the chessboard distance) between p and q is defined as

D8(p, q) = max(|x − s|, |y − t|)

The pixels q for which D8(p, q) ≤ r form a square centered at (x, y); for example, the pixels with D8 ≤ 2:

2   2   2   2   2
2   1   1   1   2
2   1   0   1   2
2   1   1   1   2
2   2   2   2   2

The pixels with D8 = 1 are the 8-neighbors of (x, y).

The D4 and D8 distances are independent of any paths that might exist between p and q, because these distances involve only the coordinates of the points.
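A tiny sketch of the three metrics for pixel coordinates:

    import numpy as np

    def d_euclid(p, q):
        return np.hypot(p[0] - q[0], p[1] - q[1])

    def d4(p, q):   # city block
        return abs(p[0] - q[0]) + abs(p[1] - q[1])

    def d8(p, q):   # chessboard
        return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

    p, q = (0, 0), (3, 4)
    print(d_euclid(p, q), d4(p, q), d8(p, q))   # 5.0 7 4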


Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis. Consider the 2×2 images

A = [ a11 a12 ; a21 a22 ],  B = [ b11 b12 ; b21 b22 ]

Array product:

[ a11·b11  a12·b12 ; a21·b21  a22·b22 ]

Matrix product:

[ a11·b11 + a12·b21   a11·b12 + a12·b22 ; a21·b11 + a22·b21   a21·b12 + a22·b22 ]

We assume array operations throughout, unless stated otherwise.
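In numpy the distinction is between the elementwise * operator and the matrix product @; a quick sketch:

    import numpy as np

    A = np.array([[1, 2], [3, 4]])
    B = np.array([[5, 6], [7, 8]])

    print(A * B)   # array (elementwise) product: [[5, 12], [21, 32]]
    print(A @ B)   # matrix product: [[19, 22], [43, 50]]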


Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether a method is linear or nonlinear. Consider an operator H that produces an output image g from an input image f:

H[f(x, y)] = g(x, y)

H is said to be a linear operator if

H[a·f1(x, y) + b·f2(x, y)] = a·H[f1(x, y)] + b·H[f2(x, y)]

for all scalars a, b and all images f1, f2.

Example of a nonlinear operator: the maximum value of the pixels of an image, H[f] = max f(x, y). Take

f1 = [ 0 2 ; 2 3 ],  f2 = [ 6 5 ; 4 7 ],  a = 1,  b = −1

max(a·f1 + b·f2) = max [ −6 −3 ; −2 −4 ] = −2

while

a·max(f1) + b·max(f2) = 1·3 + (−1)·7 = −4 ≠ −2

so the max operator is nonlinear.
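The same check in numpy, for this example:

    import numpy as np

    f1 = np.array([[0, 2], [2, 3]])
    f2 = np.array([[6, 5], [4, 7]])
    a, b = 1, -1

    lhs = (a * f1 + b * f2).max()        # H applied to the combination: -2
    rhs = a * f1.max() + b * f2.max()    # combination of H outputs: -4
    print(lhs, rhs, lhs == rhs)          # they differ, so max is nonlinear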

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise to a noiseless image f(x, y):

g(x, y) = f(x, y) + η(x, y)

where the noise η(x, y) is uncorrelated and has zero average value. For a random variable z with mean m, E[(z − m)²] is the variance (E(·) is the expected value). The covariance of two random variables z1 and z2 is defined as E[(z1 − m1)·(z2 − m2)]; the two random variables are uncorrelated when their covariance is 0.

Objective: reduce the noise by averaging a set of K noisy images g_i(x, y) (a technique frequently used in image enhancement):

ḡ(x, y) = (1/K) · Σ_{i=1}^{K} g_i(x, y)

If the noise satisfies the properties stated above, we have

E{ḡ(x, y)} = f(x, y) and σ²_ḡ(x, y) = (1/K) · σ²_η(x, y)

where E{ḡ(x, y)} is the expected value of ḡ, and σ²_ḡ(x, y) and σ²_η(x, y) are the variances of ḡ and η, respectively. The standard deviation (square root of the variance) at any point in the average image is

σ_ḡ(x, y) = (1/√K) · σ_η(x, y)
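A quick numpy simulation of this effect, assuming a constant "true" image and Gaussian noise (as in the astronomy example at the beginning of this part):

    import numpy as np

    rng = np.random.default_rng(0)
    f = np.full((64, 64), 100.0)            # noiseless image
    sigma = 64.0

    for K in (1, 5, 10, 20, 50, 100):
        noisy = f + rng.normal(0.0, sigma, size=(K,) + f.shape)
        g_bar = noisy.mean(axis=0)          # average of K noisy images
        print(K, g_bar.std())               # approximately sigma / sqrt(K)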

Digital Image Processing

Week 1

As K increases the variability (as measured by the variance or the standard deviation) of

the pixel values at each location (x y) decreases Because ( ) ( )E g x y f x y this

means that ( )g x y approaches f(x y) as the number of noisy images used in the

averaging process increases

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis Figure 226(a) shows an 8-bit image in which corruption

was simulated by adding to it Gaussian noise with zero mean and a standard deviation of

64 intensity levels Figures 226(b)-(f) show the result of averaging 5 10 20 50 and 100

images respectively

Digital Image Processing

Week 1

Fig 3 Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner) Results of averaging 5 10 20 50

100 noisy images

a b c d e f

Digital Image Processing

Week 1

A frequent application of image subtraction is in the enhancement of differences between

images

(a) (b) (c)

Fig 4 (a) Infrared image of Washington DC area (b) Image obtained from (a) by setting to zero the least

significant bit of each pixel (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a) The two images seem almost the same Figure 4(c) is the difference between

images (a) and (b) Black (0) values in Figure (c) indicate locations where there is no

difference between images (a) and (b)

Digital Image Processing

Week 1

Mask mode radiography ( ) ( ) ( )g x y f x y h x y

h(x y) the mask is an X-ray image of a region of a patientrsquos body captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source The procedure consists of injecting an X-ray contrast medium into the patientrsquos

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed

Digital Image Processing

Week 1

a b c d Fig 5 ndash Angiography ndash subtraction example (a) ndash mask image (b) ndash live image (c) ndash difference between (a) and (b) (d) - image (c) enhanced

Digital Image Processing

Week 1

An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form ( ) ( ) ( )g x y f x y h x y

f(x y) ndash the ldquoperfect imagerdquo h(x y) ndash the shading function

When the shading function is known

( )( )( )

g x yf x yh x y

h(x y) is unknown but we have access to the imaging system we can obtain an

approximation to the shading function by imaging a target of constant intensity When the

sensor is not available often the shading pattern can be estimated from the image

Digital Image Processing

Week 1

(a) (b) (c)

Fig 6 Shading correction (a) ndash Shaded image of a tungsten filament magnified 130 times (b) - shading pattern (c) corrected image

Digital Image Processing

Week 1

Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1rsquos (white) in the ROI and 0rsquos elsewhere There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

(a) (b) (c)

Fig 7 (a) ndash digital dental X-ray image (b) - ROI mask for teeth with fillings (c) product of (a) and (b)

Digital Image Processing

Week 1

In practice most images are displayed using 8 bits the image values are in the range [0255] TIFF JPEG images ndash conversion to this range is automatic The conversion depends on the system used Difference of two images can produce image with values in the range [-255255] Addition of two images ndash range [0510] Many software packages simply set the negative values to 0 and set to 255 all values greater than 255 A more appropriate procedure compute

min( )mf f f

0 ( 255)max( )

ms

m

ff K K K

f

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Digital Image Processing

Week 1

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

( )

1( ) ( )xyr c S

g x y f r cm n

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation ( ) [( )]x y T v w

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image

Digital Image Processing

Week 1

[( )] ( )2 2v wT v w shrinks the original image half its size in both spatial directions

Affine transform

11 1211 21 31

21 2212 22 33

31 32

0[ 1] [ 1] [ 1] 0

1

t tx t v t w t

x y v w T v w t ty t v t w t

t t

(AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

Digital Image Processing

Week 1

inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image (v w) 1( ) ( )v w T x y

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 51: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image ProcessingDigital Image Processing

Week 1Week 1

bull boundary representation the focus is on external shape characteristics such as corners or inflections

bull complete region the focus is on internal properties such as texture or skeletal shape

bull description is also called feature extraction ndash extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another

Object recognition

the process of assigning a label (eg bdquovehiclerdquo) to an object based on itsdescriptors

Knowledge database

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Simplified diagramof a cross sectionof the human eye

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Three membranes enclose the eye the cornea and sclera outer cover the choroid the retina

The cornea is a tough transparent tissue that covers the anterior surface of the eye

Continuous with the cornea the sclera is an opaque membrane that enclosesthe remainder of the optic globe

The choroid lies directly below the sclera This membrane contains a network of blood vessels (major nutrition of the eye) The choroid is pigmented and help reduce the amount of light entering the eyeThe choroid is divided (at its anterior extreme) into the ciliary body and the irisThe iris contracts and expands to control the amount of light

Digital Image ProcessingDigital Image Processing

Week 1Week 1

The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to ciliary body (60-70 water 6 fat protein) The lens is colored in slightly yellow The lens absorbs approximatively 8 of the visiblelight (infrared and ultraviolet light are absorbed by proteins in lens)

The innermost membrane is the retina When the eye is proper focusedlight from an object outside the eye is imaged on the retinaVision is possible because of the distribution of discrete light receptors on the surface of the retina cones and rods (6-7 milion cones 75-150 milion rods)

Cones located in the central part of the retina (fovea) they are sensitive to colors vision of detail each cone is link to its own nervecone vision = photopic or bright-light vision

Fovea = the place where the image of the object of interest falls on

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Rods distributed over al the retina surface several rods are contected toa single nerve not specialized in detail vision serve to give a general overall picture of the filed of viewnot involved in color visionsensitive to low level of illumination

Blind spot region without receptors

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image formation in the eye

Ordinary photographic camera the lens has fixed focal length focusing atvarious distances is done by modifying the distance between the lens and the image plane (were the film or imaging chip are located)

Human eye the distance between the lens and the retina (the imaging region)is fixed the focal length needed to achieve proper focus is obtained by varyingthe shape of the lens (the fibers in the ciliary body accomplish this flattening or thickening the lens for distant or near objects respectively

distance between lens and retina along visual axix = 17 mm

range of focal length = 14 mm to 17 mm

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Illustration of Mach band effectPercieved intensityis not a simple function of actual intensity

Digital Image ProcessingDigital Image Processing

Week 1Week 1

All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Optical illusions

Digital Image ProcessingDigital Image Processing

Week 1Week 1

ALBASTRU BLUEVERDE GREENGALBEN YELLOW ROSU REDPORTOCALIU ORANGEGALBEN YELLOWVISINIU BURGUNDYALB WHITE

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Light
- achromatic or monochromatic light – light that is void of color; the only attribute of such light is its intensity, or amount. The term gray level is used to describe monochromatic intensity, because it ranges from black, to grays, and finally to white.
- chromatic light spans the electromagnetic energy spectrum from approximately 0.43 to 0.79 μm.

Quantities that describe the quality of a chromatic light source:
- radiance – the total amount of energy that flows from the light source; it is usually measured in watts (W).
- luminance – measured in lumens (lm); gives a measure of the amount of energy an observer perceives from a light source.


For example, light emitted from a source operating in the far infrared region of the spectrum could have significant energy (radiance), but an observer would hardly perceive it; its luminance would be almost zero.

- brightness – a subjective descriptor of light perception that is practically impossible to measure. It embodies the achromatic notion of intensity and is one of the key factors in describing color sensation.


The physical meaning of the values of f is determined by the source of the image:

f : D → ℝ,  (x, y) ↦ f(x, y)

For an image generated from a physical process, f(x, y) is proportional to the energy radiated by the physical source.

f(x, y) is characterized by two components:

i(x, y) = the illumination component: the amount of source illumination incident on the scene being viewed

r(x, y) = the reflectance component: the amount of illumination reflected by the objects in the scene

f(x, y) = i(x, y) · r(x, y)

0 < i(x, y) < ∞,  0 < r(x, y) < 1


r(x, y) = 0 – total absorption; r(x, y) = 1 – total reflectance

i(x, y) – determined by the illumination source

r(x, y) – determined by the characteristics of the imaged objects

The interval [L_min, L_max] is called the gray (or intensity) scale.

In practice:

L_min ≤ l = f(x0, y0) ≤ L_max,  with L_min = i_min · r_min and L_max = i_max · r_max

Typical indoor values, without additional illumination: L_min ≈ 10, L_max ≈ 1000.

The interval [L_min, L_max] is commonly shifted to [0, L−1], where l = 0 is considered black and l = L−1 is considered white; all intermediate values are shades of gray.


Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed scene

Converting a continuous image f to digital form:

- digitizing the coordinate values (x, y) is called sampling

- digitizing the amplitude values f(x, y) is called quantization


Continuous image projected onto a sensor array; result of image sampling and quantization.


Representing Digital Images: (x, y), x = 0, 1, …, M−1, y = 0, 1, …, N−1 – spatial variables or spatial coordinates

The image in equation form:

f(x, y) = [ f(0,0)     f(0,1)     …  f(0,N−1)
            f(1,0)     f(1,1)     …  f(1,N−1)
            …
            f(M−1,0)   f(M−1,1)   …  f(M−1,N−1) ]

Each element of this array is an image element, or pixel. In matrix notation:

A = [ a_{0,0}     a_{0,1}     …  a_{0,N−1}
      a_{1,0}     a_{1,1}     …  a_{1,N−1}
      …
      a_{M−1,0}   a_{M−1,1}   …  a_{M−1,N−1} ]

where a_{i,j} = f(x = i, y = j) = f(i, j).

f(0,0) – the upper left corner of the image


M, N must be positive integers. The number of intensity levels is typically an integer power of 2, L = 2^k, and the intensity values satisfy a_{i,j} ∈ [0, L−1].

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system Upper limit ndash determined by saturation lower limit - noise


Number of bits required to store a digitized image: b = M · N · k; for M = N, b = N² · k.

When an image can have 2^k intensity levels, the image is referred to as a "k-bit image"; an image with 256 discrete intensity values is an 8-bit image.


Spatial and Intensity Resolution Spatial resolution ndash the smallest discernible detail in an image

Measures line pairs per unit distance dots (pixels) per unit distance

Image resolution = the largest number of discernible line pairs per unit distance

(eg 100 line pairs per mm)

Dots per unit distance are commonly used in printing and publishing

In the US, the measure is expressed in dots per inch (dpi)

(newspapers are printed at 75 dpi, glossy brochures at 175 dpi)

Intensity resolution ndash the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations

L=2k ndash most common k = 8

Intensity resolution in practice is given by k (number of bits used to quantize intensity)


Fig. 1 – Reducing spatial resolution: 1250 dpi (upper left), 300 dpi (upper right), 150 dpi (lower left), 72 dpi (lower right).


Reducing the number of gray levels: 256, 128, 64, 32.


Reducing the number of gray levels: 16, 8, 4, 2.


Image Interpolation – used in zooming, shrinking, rotating, and geometric corrections.

Shrinking and zooming – image resizing – are image resampling methods.

Interpolation is the process of using known data to estimate values at unknown locations

Suppose we have an image of size 500 × 500 pixels that has to be enlarged 1.5 times, to

750 × 750 pixels. One way to do this is to create an imaginary 750 × 750 grid with the same pixel spacing as the original, and then shrink it so that it fits exactly over the original image. The pixel spacing in the 750 × 750 grid will be less than in the original image.

Problem: assignment of intensity levels in the new 750 × 750 grid.

Nearest neighbor interpolation: assign to every point in the new grid (750 × 750) the intensity of the closest pixel (nearest neighbor) from the old/original grid (500 × 500).

This technique has the tendency to produce undesirable effects, like severe distortion of straight edges.


Bilinear interpolation – assigns to the new (x, y) location the intensity v(x, y) = a·x + b·y + c·x·y + d,

where the four coefficients are determined from the 4 equations in 4 unknowns that can

be written using the 4 nearest neighbors of point (x y)

Bilinear interpolation gives much better results than nearest neighbor interpolation with a

modest increase in computational effort

Bicubic interpolation – assigns to the new (x, y) location an intensity that involves the 16 nearest neighbors of the point:

v(x, y) = Σ_{i=0}^{3} Σ_{j=0}^{3} c_{ij} · x^i · y^j

The sixteen coefficients c_{ij} are obtained by solving a 16×16 linear system written using the intensity levels of the 16 nearest neighbors of (x, y).
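As a rough illustration, here is a minimal Python/numpy sketch of zooming by inverse mapping with nearest-neighbor and bilinear interpolation (the function name, the output-grid mapping, and the edge handling are illustrative assumptions, not taken from the slides):

import numpy as np

def zoom(img, new_h, new_w, method="nearest"):
    # Resize a 2-D grayscale image; assumes new_h, new_w >= 2.
    h, w = img.shape
    out = np.zeros((new_h, new_w), dtype=np.float64)
    for y in range(new_h):
        for x in range(new_w):
            # location of the output pixel back in input coordinates
            v = y * (h - 1) / (new_h - 1)
            u = x * (w - 1) / (new_w - 1)
            if method == "nearest":
                out[y, x] = img[int(round(v)), int(round(u))]
            else:  # bilinear: weighted average of the 4 nearest pixels
                v0, u0 = int(v), int(u)
                v1, u1 = min(v0 + 1, h - 1), min(u0 + 1, w - 1)
                dv, du = v - v0, u - u0
                out[y, x] = (img[v0, u0] * (1 - dv) * (1 - du)
                             + img[v0, u1] * (1 - dv) * du
                             + img[v1, u0] * dv * (1 - du)
                             + img[v1, u1] * dv * du)
    return out

The nested loops make the idea explicit; production code would vectorize them or call a library resampler.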


Generally bicubic interpolation does a better job of preserving fine detail than the

bilinear technique Bicubic interpolation is the standard used in commercial image editing

programs such as Adobe Photoshop and Corel Photopaint

Figure 2(a) is the same as Fig. 1(d), which was obtained by reducing the resolution of the 1250 dpi image in Fig. 1(a) to 72 dpi (the size shrank from 3692 × 2812 to 213 × 162 pixels) and then zooming the reduced image back to its original size. To generate Fig. 1(d), nearest neighbor interpolation was used (both for shrinking and zooming).

Figures 2(b) and (c) were generated using the same steps, but with bilinear and bicubic interpolation, respectively. Figures 2(d)–(f) were obtained by reducing the resolution from 1250 dpi to 150 dpi (instead of 72 dpi).


Fig. 2 – Interpolation examples for zooming and shrinking (nearest neighbor, bilinear, bicubic).


Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors:

(x + 1, y), (x − 1, y), (x, y + 1), (x, y − 1)

This set of pixels, called the 4-neighbors of p, is denoted by N4(p).

The 4 diagonal neighbors of p have coordinates

(x + 1, y + 1), (x + 1, y − 1), (x − 1, y + 1), (x − 1, y − 1)

and are denoted by ND(p).

The horizontal, vertical, and diagonal neighbors together are called the 8-neighbors of p, denoted by N8(p).

If (x, y) is on the border of the image, some of the neighbor locations in ND(p) and N8(p) fall outside the image.
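A small Python sketch of these neighborhoods (the function and variable names are illustrative; the image is assumed M × N with 0-based indices, and out-of-image locations are discarded):

def neighbors(x, y, M, N):
    # N4: horizontal and vertical neighbors; ND: diagonal neighbors
    n4 = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    nd = [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]
    inside = lambda p: 0 <= p[0] < M and 0 <= p[1] < N
    n4 = [p for p in n4 if inside(p)]
    nd = [p for p in nd if inside(p)]
    return n4, nd, n4 + nd  # N8 = N4 together with ND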


Adjacency Connectivity Regions Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image, V ⊆ {0, 1} (e.g., V = {1} or V = {0})

- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, 1, …, 255}

We consider 3 types of adjacency

(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if q ∈ N4(p).

(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if q ∈ N8(p).

(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if

q ∈ N4(p), or

q ∈ ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.

Mixed adjacency is a modification of 8-adjacency It is introduced to eliminate the

ambiguities that often arise when 8-adjacency is used Consider the example


binary image, with V = {1}:

0 1 1
0 1 0
0 0 1

The three pixels at the top (first line) in the above example show multiple (ambiguous) 8-adjacency, as indicated by the dashed lines in the original figure. This ambiguity is removed by using m-adjacency.


A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

(x_0, y_0), (x_1, y_1), …, (x_n, y_n)

where (x_0, y_0) = (x, y), (x_n, y_n) = (s, t), and the pixels (x_i, y_i) and (x_{i−1}, y_{i−1}) are adjacent for i = 1, 2, …, n.

The length of the path is n. If (x_0, y_0) = (x_n, y_n), the path is closed.

Depending on the type of adjacency considered, the paths are 4-, 8-, or m-paths.

Let S denote a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting only of pixels from S. S is a connected set if there is a path in S between any 2 pixels of S.

Let R be a subset of pixels in an image. R is a region of the image if R is a connected set. Two regions R1 and R2 are said to be adjacent if R1 ∪ R2 forms a connected set; regions that are not adjacent are said to be disjoint. When referring to regions, only 4- and 8-adjacency are considered.


Suppose that an image contains K disjoint regions R_k, k = 1, 2, …, K, none of which touches the image border. Let R_u denote the union of all the K regions,

R_u = ∪_{k=1}^{K} R_k,

and let (R_u)^c denote its complement. We call all the points in R_u the foreground of the image, and all the points in (R_u)^c the background of the image.

The boundary (border or contour) of a region R is the set of points that are adjacent to points in the complement of R, (R)^c. Equivalently, the border of a region is the set of pixels in the region that have at least one background neighbor. This definition is referred to as the inner border, to distinguish it from the notion of outer border, which is the corresponding border in the background.


Distance measures

For pixels p, q, and z, with coordinates (x, y), (s, t), and (v, w), respectively, D is a distance function or metric if

(a) D(p, q) ≥ 0, with D(p, q) = 0 iff p = q

(b) D(p, q) = D(q, p)

(c) D(p, z) ≤ D(p, q) + D(q, z)

The Euclidean distance between p and q is defined as

D_e(p, q) = [ (x − s)² + (y − t)² ]^(1/2)

The pixels q for which D_e(p, q) ≤ r are the points contained in a disk of radius r centered at (x, y).


The D4 distance (also called city-block distance) between p and q is defined as

D_4(p, q) = |x − s| + |y − t|

The pixels q for which D_4(p, q) ≤ r form a diamond centered at (x, y). For example, the pixels with D_4 ≤ 2:

        2
    2   1   2
2   1   0   1   2
    2   1   2
        2

The pixels with D_4 = 1 are the 4-neighbors of (x, y).

The D8 distance (called the chessboard distance) between p and q is defined as

D_8(p, q) = max(|x − s|, |y − t|)

The pixels q for which D_8(p, q) ≤ r form a square centered at (x, y).


For example, the pixels with D_8 ≤ 2:

2   2   2   2   2
2   1   1   1   2
2   1   0   1   2
2   1   1   1   2
2   2   2   2   2

The pixels with D_8 = 1 are the 8-neighbors of (x, y).

The D_4 and D_8 distances are independent of any paths that might exist between p and q, because these distances involve only the coordinates of the points.
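The three metrics in a few lines of Python (a sketch; pixels are passed as (x, y) tuples):

import numpy as np

def d_e(p, q):   # Euclidean distance
    return np.hypot(p[0] - q[0], p[1] - q[1])

def d_4(p, q):   # city-block distance
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d_8(p, q):   # chessboard distance
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))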


Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

Consider the 2×2 images

A = [ a11  a12 ]      B = [ b11  b12 ]
    [ a21  a22 ]          [ b21  b22 ]

Array product:

A .* B = [ a11·b11   a12·b12 ]
         [ a21·b21   a22·b22 ]

Matrix product:

A·B = [ a11·b11 + a12·b21    a11·b12 + a12·b22 ]
      [ a21·b11 + a22·b21    a21·b12 + a22·b22 ]

We assume array operations unless stated otherwise
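In numpy, for example, the two products are written with different operators (* is the array product, @ the matrix product):

import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A * B)   # array (element-wise) product: [[ 5 12] [21 32]]
print(A @ B)   # matrix product:               [[19 22] [43 50]]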


Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether it is

linear or nonlinear

H[f(x, y)] = g(x, y)

H is said to be a linear operator if

H[a·f1(x, y) + b·f2(x, y)] = a·H[f1(x, y)] + b·H[f2(x, y)]

for any scalars a, b and any two images f1, f2.

Example of a nonlinear operator: H[f] = max{f(x, y)}, the maximum value of the pixels of image f.

Take

f1 = [ 0  2 ]     f2 = [ 6  5 ]     a = 1,  b = −1
     [ 2  3 ]          [ 4  7 ]

Then

max{ a·f1 + b·f2 } = max{ [ −6  −3 ]  } = −2
                          [ −2  −4 ]

while

a·max{f1} + b·max{f2} = 1·3 + (−1)·7 = −4

Since −2 ≠ −4, the max operator is nonlinear.
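Checking this numerically with numpy:

import numpy as np

f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 5], [4, 7]])
a, b = 1, -1

lhs = np.max(a * f1 + b * f2)            # H[a f1 + b f2] = -2
rhs = a * np.max(f1) + b * np.max(f2)    # a H[f1] + b H[f2] = -4
print(lhs, rhs)  # -2 -4: not equal, so max is nonlinear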

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise: g(x, y) = f(x, y) + η(x, y), where f(x, y) is the noiseless image and η(x, y) is the noise, assumed uncorrelated and with zero average value.

For a random variable z with mean m E[(z-m)2] is the variance (E( ∙ ) is the expected

value) The covariance of two random variables z1 and z2 is defined as E[(z1-m1) (z2-m2)]

The two random variables are uncorrelated when their covariance is 0


Objective: reduce noise by adding a set of noisy images g_i(x, y) (a technique frequently used in image enhancement):

ḡ(x, y) = (1/K) · Σ_{i=1}^{K} g_i(x, y)

If the noise satisfies the properties stated above we have

E{ḡ(x, y)} = f(x, y),   σ²_ḡ(x, y) = (1/K) · σ²_η(x, y)

where E{ḡ(x, y)} is the expected value of ḡ, and σ²_ḡ(x, y) and σ²_η(x, y) are the variances of ḡ and η, respectively. The standard deviation (square root of the variance) at any point of the average image is

σ_ḡ(x, y) = (1/√K) · σ_η(x, y)


As K increases the variability (as measured by the variance or the standard deviation) of

the pixel values at each location (x y) decreases Because ( ) ( )E g x y f x y this

means that ( )g x y approaches f(x y) as the number of noisy images used in the

averaging process increases
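A quick numpy experiment illustrating the 1/√K reduction (the constant test image and the noise parameters are illustrative choices):

import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 100.0)             # stand-in for the noiseless image

K = 100                                  # g_i = f + eta, eta ~ N(0, 64^2)
noisy = [f + rng.normal(0.0, 64.0, f.shape) for _ in range(K)]
g_bar = np.mean(noisy, axis=0)

print(np.std(noisy[0] - f))   # about 64: noise level of a single image
print(np.std(g_bar - f))      # about 64 / sqrt(K) = 6.4 after averaging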

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis. Figure 3(a) shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels. Figures 3(b)-(f) show the result of averaging 5, 10, 20, 50, and 100 noisy images, respectively.


Fig. 3 – Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (panel (a), upper left corner); results of averaging 5, 10, 20, 50, and 100 noisy images (panels (b)–(f)).


A frequent application of image subtraction is in the enhancement of differences between

images

Fig. 4 – (a) Infrared image of the Washington, DC area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images.

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between images (a) and (b); black (0) values in Figure 4(c) indicate locations where there is no difference between images (a) and (b).


Mask mode radiography: g(x, y) = f(x, y) − h(x, y)

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images, called live images (denoted f(x, y)), of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium.

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed


Fig. 5 – Angiography subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c), enhanced.


An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form g(x, y) = f(x, y) · h(x, y), where f(x, y) is the "perfect image" and h(x, y) is the shading function. When the shading function is known,

f(x, y) = g(x, y) / h(x, y)

When h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.
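A sketch of the correction in numpy (the small eps guard against division by zero is an added safety assumption, not part of the formula above):

import numpy as np

def correct_shading(g, h, eps=1e-6):
    # f = g / h, with h clipped away from zero in dark shading areas
    return g / np.maximum(h.astype(np.float64), eps)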


Fig. 6 – Shading correction: (a) shaded image of a tungsten filament, magnified approximately 130 times; (b) shading pattern; (c) corrected image.


Another use of image multiplication is in masking, also called region of interest (ROI) operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of an ROI can be arbitrary, although a rectangular shape is the most common.

Fig. 7 – (a) Digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b).
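A rectangular-ROI sketch in numpy (names and the rectangle parameters are illustrative):

import numpy as np

def apply_roi(img, top, left, height, width):
    # Multiply img by a binary mask: 1 inside the rectangle, 0 elsewhere
    mask = np.zeros_like(img)
    mask[top:top + height, left:left + width] = 1
    return img * mask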


In practice, most images are displayed using 8 bits, so the image values are expected in the range [0, 255]. For TIFF or JPEG images, conversion to this range is automatic, but the conversion depends on the system used. The difference of two images can produce values in the range [−255, 255], and the addition of two images values in the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure: compute

f_m = f − min(f)

f_s = K · [ f_m / max(f_m) ]

which yields values f_s in the full range [0, K] (for 8-bit displays, K = 255).
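A direct numpy translation (a sketch, assuming f is not a constant image, so max(f_m) > 0):

import numpy as np

def rescale_to_8bit(f, K=255):
    fm = f.astype(np.float64) - f.min()   # f_m = f - min(f)
    fs = K * fm / fm.max()                # f_s in [0, K]
    return fs.astype(np.uint8)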


Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity of the individual pixels: s = T(z),

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image


Neighborhood operations

Let S_xy denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y) based on the values of the intensities of the points in S_xy. For example, if S_xy is a rectangular neighborhood of size m × n centered at (x, y), we can assign the new value of intensity by computing the average value of the pixels in S_xy:

g(x, y) = (1 / (m·n)) · Σ_{(r,c) ∈ S_xy} f(r, c)

The net effect is to perform local blurring in the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
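A direct (unoptimized) sketch of this neighborhood-averaging operation; handling the borders by edge replication is one of several possible choices and is an assumption here:

import numpy as np

def local_mean(f, m=3, n=3):
    # Replace each pixel by the average of an m x n neighborhood
    pad_m, pad_n = m // 2, n // 2
    fp = np.pad(f.astype(np.float64), ((pad_m, pad_m), (pad_n, pad_n)),
                mode="edge")
    g = np.zeros(f.shape, dtype=np.float64)
    for i in range(f.shape[0]):
        for j in range(f.shape[1]):
            g[i, j] = fp[i:i + m, j:j + n].mean()
    return g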


Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules)

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation, which assigns intensity values to the spatially transformed pixels

The coordinate system transformation: (x, y) = T[(v, w)]

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image


T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform

[x  y  1] = [v  w  1] · T = [v  w  1] · [ t11  t12  0 ]
                                        [ t21  t22  0 ]
                                        [ t31  t32  1 ]        (AT)

that is, x = t11·v + t21·w + t31 and y = t12·v + t22·w + t32.

This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3×3 matrix equal to the matrix product of the scaling, rotation, and translation matrices from Table 1.


Table 1 – Affine transformations.


The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (nearest neighbor, bilinear, or bicubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)


inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image: (v, w) = T^(-1)[(x, y)].

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in numerous commercial implementations of spatial transformations (e.g., MATLAB).
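A nearest-neighbor inverse-mapping sketch using the row-vector convention of (AT). Coordinate conventions vary between libraries; this is an illustrative assumption, not any particular library's implementation:

import numpy as np

def warp_affine(img, T):
    # For each output pixel (x, y), find the input location
    # [v w 1] = [x y 1] @ T^{-1} and copy its nearest neighbor.
    M, N = img.shape
    Tinv = np.linalg.inv(T)
    out = np.zeros_like(img)
    for x in range(M):
        for y in range(N):
            v, w, _ = np.array([x, y, 1.0]) @ Tinv
            vi, wi = int(round(v)), int(round(w))
            if 0 <= vi < M and 0 <= wi < N:
                out[x, y] = img[vi, wi]
    return out

# Example: a scaling matrix that shrinks the image to half its size
T_half = np.diag([0.5, 0.5, 1.0])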


Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more images taken at approximately the same time, but using different imaging systems (e.g., an MRI scanner and a PET scanner)

- align images of a given location, taken by the same instrument at different moments in time (satellite images)

The problem can be solved using tie points (also called control points): corresponding points whose locations are known precisely in the input and reference images.


How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points, both in the input image and in the reference image. A simple model based on a bilinear approximation is given by

x = c1·v + c2·w + c3·v·w + c4
y = c5·v + c6·w + c7·v·w + c8

where (v, w) and (x, y) are the coordinates of corresponding tie points (the 4 tie point pairs yield an 8×8 linear system for the coefficients c_i).
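Because the x and y equations do not share coefficients, the 8×8 system decouples into two 4×4 systems. A numpy sketch (names are illustrative; the 4 tie points are assumed to be in general position so the system is solvable):

import numpy as np

def fit_bilinear_model(vw, xy):
    # vw, xy: 4x2 arrays of corresponding (v, w) and (x, y) tie points
    v, w = vw[:, 0], vw[:, 1]
    A = np.column_stack([v, w, v * w, np.ones(4)])  # 4x4 design matrix
    cx = np.linalg.solve(A, xy[:, 0])               # c1..c4
    cy = np.linalg.solve(A, xy[:, 1])               # c5..c8
    return cx, cy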


When 4 tie points are insufficient to obtain a satisfactory registration, an approach used frequently is to select a larger number of tie points and, using this new set, subdivide the image into rectangular subregions marked by groups of 4 tie points. On each subregion marked by 4 tie points, the transformation model described above is applied. The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometric distortion.


(a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c).


Probabilistic Methods

z_i, i = 0, 1, …, L−1 – the values of all possible intensities in an M × N digital image

p(zk) = the probability that the intensity level zk occurs in the given image

p(z_k) = n_k / (M·N)

n_k = the number of times that intensity z_k occurs in the image (MN is the total number of pixels in the image). Note that

Σ_{k=0}^{L−1} p(z_k) = 1

The mean (average) intensity of an image is given by

m = Σ_{k=0}^{L−1} z_k · p(z_k)


The variance of the intensities is

σ² = Σ_{k=0}^{L−1} (z_k − m)² · p(z_k)

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

μ_n(z) = Σ_{k=0}^{L−1} (z_k − m)^n · p(z_k)

(so μ_0(z) = 1, μ_1(z) = 0, and μ_2(z) = σ²)

μ_3(z) > 0: the intensities are biased to values higher than the mean;

μ_3(z) < 0: the intensities are biased to values lower than the mean;


μ_3(z) = 0: the intensities are distributed approximately equally on both sides of the mean.

Fig. 1 – (a) Low contrast; (b) medium contrast; (c) high contrast.

Figure 1(a) – standard deviation 14.3 (variance ≈ 204.5)

Figure 1(b) – standard deviation 31.6 (variance ≈ 998.6)

Figure 1(c) – standard deviation 49.2 (variance ≈ 2420.6)
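These statistics computed from the normalized histogram, as a numpy sketch (assuming an 8-bit image stored as a uint8 array):

import numpy as np

def intensity_stats(img, L=256):
    z = np.arange(L)
    p = np.bincount(img.ravel(), minlength=L) / img.size  # p(z_k) = n_k / MN
    m = np.sum(z * p)                    # mean intensity
    mu2 = np.sum((z - m) ** 2 * p)       # variance (contrast measure)
    mu3 = np.sum((z - m) ** 3 * p)       # sign: bias above / below the mean
    return m, mu2, mu3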


Intensity Transformations and Spatial Filtering

g(x, y) = T[f(x, y)]

f(x, y) – input image; g(x, y) – output image; T – an operator on f, defined over a neighborhood of (x, y).

The neighborhood of the point (x, y), S_xy, is usually rectangular, centered on (x, y), and much smaller in size than the image.


- spatial filtering: the operator T (the neighborhood together with the operation applied on it) is called a spatial filter (spatial mask, kernel, template, or window).

When S_xy has size 1 × 1 (a single pixel), T becomes an intensity (gray-level or mapping) transformation function

s = T(r)

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2 (left): T produces an output image of higher contrast than the original, by darkening the intensity levels below k and brightening the levels above k – this technique is called contrast stretching.

Fig. 2 – Intensity transformation functions: left – contrast stretching; right – thresholding function.


Figure 2 (right): T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L−1] is obtained using the function s = T(r) = L − 1 − r.

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an image
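For an 8-bit image stored as a numpy uint8 array, the negative is one line (a sketch, assuming L = 256):

def negative(img):
    # s = 255 - r, applied to every pixel
    return 255 - img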


Original image; negative image.


Log Transformations: s = T(r) = c · log(1 + r), where c is a positive constant and r ≥ 0.

Some basic intensity transformation functions


This transformation maps a narrow range of low intensity values in the input into a wider range of output values. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values; the opposite is true of the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.
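A short numpy sketch; choosing c = 255 / log(1 + max(r)) so the output fills [0, 255] is a common convention assumed here (and the image is assumed not to be all zeros):

import numpy as np

def log_transform(f):
    # s = c * log(1 + r), scaled onto the 8-bit range
    r = f.astype(np.float64)
    c = 255.0 / np.log1p(r.max())
    return (c * np.log1p(r)).astype(np.uint8)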

Figure 4(a) – intensity values in the range 0 to 1.5 × 10^6.

Figure 4(b) – log transformation of Figure 4(a) with c = 1 – range 0 to 6.2.

Fig. 4 – (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.


Power-Law (Gamma) Transformations

s = T(r) = c · r^γ, where c and γ are positive constants (sometimes written s = c · (r + ε)^γ, to account for an offset when the input is zero).

Plots of gamma transformation for different values of γ (c=1)


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. The curves with γ > 1 have the opposite effect of those generated with values of γ < 1.

c = γ = 1: identity transformation.

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction
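A hedged numpy sketch of the gamma transformation; normalizing intensities to [0, 1] before applying the power law is an assumed convention:

import numpy as np

def gamma_transform(img, gamma, c=1.0):
    # s = c * r**gamma on intensities normalized to [0, 1]
    r = img.astype(np.float64) / 255.0
    s = c * np.power(r, gamma)
    return np.clip(255.0 * s, 0, 255).astype(np.uint8)

For gamma correction of a device with response r^γ, one would apply the inverse exponent, gamma_transform(img, 1.0 / gamma).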


(a) aerial image; (b)–(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively.


Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

Fig. 5 – Contrast stretching: (a) form of the transformation function; (b) low-contrast image; (c) result of contrast stretching; (d) result of thresholding.


The transformation is defined by the points (r1, s1) and (r2, s2):

T(r) = (s1/r1) · r,                                      r ∈ [0, r1]
       s1 + ((s2 − s1)/(r2 − r1)) · (r − r1),            r ∈ (r1, r2]
       s2 + ((L − 1 − s2)/(L − 1 − r2)) · (r − r2),      r ∈ (r2, L − 1]


r1 = s1 and r2 = s2: identity transformation (no change).

r1 = r2, s1 = 0, s2 = L − 1: thresholding function.

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c) – contrast stretching, obtained by setting (r1, s1) = (r_min, 0) and (r2, s2) = (r_max, L − 1), where r_min and r_max denote the minimum and maximum gray levels in the image, respectively. The transformation function thus stretched the levels linearly from their original range to the full range [0, L − 1].

Figure 5(d) – the thresholding function was used, with (r1, s1) = (m, 0) and (r2, s2) = (m, L − 1), where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times
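A compact sketch using numpy's piecewise-linear interpolation through the points (0, 0), (r1, s1), (r2, s2), (L−1, L−1); it assumes 0 < r1 < r2 < L−1 so the breakpoints are strictly increasing:

import numpy as np

def contrast_stretch(img, r1, s1, r2, s2, L=256):
    out = np.interp(img, [0, r1, r2, L - 1], [0, s1, s2, L - 1])
    return out.astype(np.uint8)

# Full-range stretch: r1, s1 = img.min(), 0 and r2, s2 = img.max(), L - 1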


Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities (Figure 3.11(a)); a code sketch for both approaches follows the list

2 brighten (or darken) the desired range of intensities but leave unchanged all other intensities in the image (Figure 3.11(b))
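Both approaches in a short numpy sketch (the brightening value 220 in the second function is an arbitrary illustrative choice):

import numpy as np

def slice_binary(img, a, b, L=256):
    # First approach: L-1 (white) inside [a, b], 0 (black) elsewhere
    return np.where((img >= a) & (img <= b), L - 1, 0).astype(np.uint8)

def slice_preserve(img, a, b, value=220):
    # Second approach: brighten [a, b], keep the rest unchanged
    out = img.copy()
    out[(img >= a) & (img <= b)] = value
    return out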


Figure 6 (left) – aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the scale of intensities; this type of enhancement produces a binary image,

Highlights intensity range [A, B] and reduces all other intensities to a lower level.

Highlights range [A, B] and preserves all other intensities.


which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for instance).

In Figure 6 (right), the second technique was used: a band of intensities in the mid-gray range around the mean intensity was set to black; the other intensities remain unchanged.

Fig 6 - Aortic angiogram and intensity sliced versions


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2.

This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit; plane 8 – the highest-order bit).


The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1. The bit-slicing technique is useful for analyzing the relative importance of each bit in the image; it helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
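A numpy sketch of bit-plane extraction (assuming a uint8 image; k = 7 reproduces exactly the 0-127 → 0, 128-255 → 1 thresholding just described):

import numpy as np

def bit_plane(img, k):
    # Extract bit plane k (k = 0 is the least significant bit)
    # as a binary {0, 1} array
    return (img >> k) & 1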

  • DIP 1 2017
  • DIP 02 (2017)
Page 52: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Simplified diagramof a cross sectionof the human eye

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Three membranes enclose the eye the cornea and sclera outer cover the choroid the retina

The cornea is a tough transparent tissue that covers the anterior surface of the eye

Continuous with the cornea the sclera is an opaque membrane that enclosesthe remainder of the optic globe

The choroid lies directly below the sclera This membrane contains a network of blood vessels (major nutrition of the eye) The choroid is pigmented and help reduce the amount of light entering the eyeThe choroid is divided (at its anterior extreme) into the ciliary body and the irisThe iris contracts and expands to control the amount of light

Digital Image ProcessingDigital Image Processing

Week 1Week 1

The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to ciliary body (60-70 water 6 fat protein) The lens is colored in slightly yellow The lens absorbs approximatively 8 of the visiblelight (infrared and ultraviolet light are absorbed by proteins in lens)

The innermost membrane is the retina When the eye is proper focusedlight from an object outside the eye is imaged on the retinaVision is possible because of the distribution of discrete light receptors on the surface of the retina cones and rods (6-7 milion cones 75-150 milion rods)

Cones located in the central part of the retina (fovea) they are sensitive to colors vision of detail each cone is link to its own nervecone vision = photopic or bright-light vision

Fovea = the place where the image of the object of interest falls on

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Rods distributed over al the retina surface several rods are contected toa single nerve not specialized in detail vision serve to give a general overall picture of the filed of viewnot involved in color visionsensitive to low level of illumination

Blind spot region without receptors

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image formation in the eye

Ordinary photographic camera the lens has fixed focal length focusing atvarious distances is done by modifying the distance between the lens and the image plane (were the film or imaging chip are located)

Human eye the distance between the lens and the retina (the imaging region)is fixed the focal length needed to achieve proper focus is obtained by varyingthe shape of the lens (the fibers in the ciliary body accomplish this flattening or thickening the lens for distant or near objects respectively

distance between lens and retina along visual axix = 17 mm

range of focal length = 14 mm to 17 mm

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Illustration of Mach band effectPercieved intensityis not a simple function of actual intensity

Digital Image ProcessingDigital Image Processing

Week 1Week 1

All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Optical illusions

Digital Image ProcessingDigital Image Processing

Week 1Week 1

ALBASTRU BLUEVERDE GREENGALBEN YELLOW ROSU REDPORTOCALIU ORANGEGALBEN YELLOWVISINIU BURGUNDYALB WHITE

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Lightbullachromatic or monochromatic light - light that is void of color the attribute of such light is its intensity or amount

gray level is used to describe monochromatic intensitybecause it ranges from black to grays and to white bull chromatic light spans the electromagnetic energy spectrumfrom approximately 043 to 079 μm

quantities that describe the quality of a chromatic light source radiance

the total amount of energy that flows from the light source and it isusually measured in watts (W) luminance

measured in lumens (lm) gives a measure of the amount of energy an observer perceives from a light source

Digital Image ProcessingDigital Image Processing

Week 1Week 1

For example light emitted from a source operating in the far infraredregion of the spectrum could have significant energy (radiance) butan observer would hardly perceive it its luminance would be almostzero

brightnessa subjective descriptor of light perception that is practicallyimpossible to measure It embodies the achromatic notion ofintensity and is one of the key factors in describing color sensation

Digital Image ProcessingDigital Image Processing

Week 1Week 1

the physical meaning is determined by the source of the image

( )f D f x y

Image generated from a physical process f(xy) ndash proportional to the energy radiated by the physical source

f(xy) ndash characterized by two components

i(xy) = illumination component the amount of source illumination incident on the scene being viewed

r(xy) = reflectance component the amount of illumination reflected by the objects in the scene

( ) ( ) ( )

0 ( ) 0 ( ) 1

f x y i x y r x y

i x y r x y

Digital Image ProcessingDigital Image Processing

Week 1Week 1

r(xy)=0 - total absorption r(xy)=1 - total reflectance

i(xy) ndash determined by the illumination source

r(xy) ndash determined by the characteristics of the imaged objects

is called gray (or intensity) scale

In practice

min 0 0 max min min min max max max( ) L l f x y L L i r L i r

indoor values without additional illuminationmin max10 1000L L

black whitemin max0 1 0 1 0 1L L L L l l L

min maxL L

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image Processing

Week 1

Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed

scene

converting a continuous image f to digital form

- digitizing (x y) is called sampling

- digitizing f(x y) is called quantization

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Continuous image projected onto a sensor array Result of image sampling and quantization

Digital Image Processing

Week 1

Representing Digital Images (xy) x = 01hellipM-1 y = 01hellipN-1 ndash spatial variables or spatial coordinates

(00) (01) (0 1)(10) (11) (1 1)

( )

( 10) ( 11) ( 1 1)

f f f Nf f f N

f x y

f M f M f M N

image element pixel

00 01 0 1

10 11 1 1

10 11 1 1

( ) ( )

N

i jN M N

i j

M M M N

a a aa f x i y j f i ja a a

Aa

a a a

f(00) ndash the upper left corner of the image

Digital Image Processing

Week 1

M N ge 0 L=2k

[0 1]i j i ja a L

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system Upper limit ndash determined by saturation lower limit - noise

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Number of bits required to store a digitized image

for 2 b M N k M N b N k

When an image can have 2k intensity levels the image is referred as a k-bit image 256 discrete intensity values ndash 8-bit image

Digital Image Processing

Week 1

Spatial and Intensity Resolution Spatial resolution ndash the smallest discernible detail in an image

Measures line pairs per unit distance dots (pixels) per unit distance

Image resolution = the largest number of discernible line pairs per unit distance

(eg 100 line pairs per mm)

Dots per unit distance are commonly used in printing and publishing

In US the measure is expressed in dots per inch (dpi)

(newspapers are printed with 75 dpi glossy brochures at 175 dpi)

Intensity resolution ndash the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations

L=2k ndash most common k = 8

Intensity resolution in practice is given by k (number of bits used to quantize intensity)

Digital Image Processing

Week 1

Fig1 Reducing spatial resolution 1250 dpi(upper left) 300 dpi (upper right)

150 dpi (lower left) 72 dpi (lower right)

Digital Image Processing

Week 1

Reducing the number of gray levels 256 128 64 32

Digital Image Processing

Week 1

Reducing the number of gray levels 16 8 4 2

Digital Image Processing

Week 1

Image Interpolation - used in zooming shrinking rotating and geometric corrections

Shrinking zooming ndash image resizing ndash image resampling methods

Interpolation is the process of using known data to estimate values at unknown locations

Suppose we have an image of size 500 times 500 pixels that has to be enlarged 15 times to

750times750 pixels One way to do this is to create an imaginary 750 times 750 grid with the

same spacing as the original and then shrink it so that it fits exactly over the original

image The pixel spacing in the 750 times 750 grid will be less than in the original image

Problem assignment of intensity-level in the new 750 times 750 grid

Nearest neighbor interpolation assign for every point in the new grid (750 times 750) the

intensity of the closest pixel (nearest neighbor) from the oldoriginal grid (500 times 500)

This technique has the tendency to produce undesirable effects like severe distortion of

straight edges

Digital Image Processing

Week 1

Bilinear interpolation ndash assign for the new (x y) location the following intensity ( )v x y a x b y c x y d

where the four coefficients are determined from the 4 equations in 4 unknowns that can

be written using the 4 nearest neighbors of point (x y)

Bilinear interpolation gives much better results than nearest neighbor interpolation with a

modest increase in computational effort

Bicubic interpolation ndash assign for the new (x y) location an intensity that involves the 16

nearest neighbors of the point 3 3

0 0

( ) i ji j

i jv x y c x y

The coefficients cij are obtained solving a 16x16 linear system

intensity levels of the 16 nearest neighbors of 3 3

0 0

( )i ji j

i jc x y x y

Digital Image Processing

Week 1

Generally bicubic interpolation does a better job of preserving fine detail than the

bilinear technique Bicubic interpolation is the standard used in commercial image editing

programs such as Adobe Photoshop and Corel Photopaint

Figure 2 (a) is the same as Fig 1 (d) which was obtained by reducing the resolution of

the 1250 dpi in Fig 1(a) to 72 dpi (the size shrank from 3692 times 2812 to 213 times 162) and

then zooming the reduced image back to its original size To generate Fig 1(d) nearest

neighbor interpolation was used (both for shrinking and zooming)

Figures 2(b) and (c) were generated using the same steps but using bilinear and bicubic

interpolation respectively Figures 2(d)+(e)+(f) were obtained by reducing the resolution

from 1250 dpi to 150 dpi (instead of 72 dpi)

Digital Image Processing

Week 1

Fig 2 ndash Interpolation examples for zooming and shrinking (nearest neighbor linear bicubic)

Digital Image Processing

Week 1

Neighbors of a Pixel

A pixel p at coordinates (x y) has 4 horizontal and vertical neighbors horizontal vertical( 1 ) ( 1 ) ( 1) ( 1)x y x y x y x y

This set of pixels called the 4-neighbors of p denoted by N4 (p)

The 4 diagonal neighbors of p have coordinates ( 1 1) ( 1 1) ( 1 1) ( 1 1)x y x y x y x y

and are denoted ND(p)

The horizontal vertical and diagonal neighbors are called the 8-neighbors of p denoted

N8 (p)

If (xy) is on the border of the image some of the neighbor locations in ND(p) and N8(p)

fall outside the image

Digital Image Processing

Week 1

Adjacency Connectivity Regions Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image V 01 (V=0 V=1)

- in a gray-scale image with 256 possible gray-levels V can be any subset of 0255

We consider 3 types of adjacency

(a) 4-adjacency two pixels p and q with values from V are 4-adjacent if 4( )q N p

(b) 8-adjacency two pixels p and q with values from V are 8-adjacent if 8( )q N p

(c) m-adjacency (mixed adjacency) two pixels p and q with values from V are

m-adjacent if

4( )q N p or

( )Dq N p and the set 4 4( ) ( )N p N q has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency It is introduced to eliminate the

ambiguities that often arise when 8-adjacency is used Consider the example

Digital Image Processing

Week 1

binary image

0 1 1 0 1 1 0 1 1

1 0 1 0 0 1 0 0 1 0

0 0 1 0 0 1 0 0 1

V

The three pixels at the top (first line) in the above example show multiple (ambiguous)

8-adjacency as indicated by the dashed lines This ambiguity is removed by using

m-adjacency

Digital Image Processing

Week 1

A (digital) path (or curve) from pixel p with coordinates (xy) to q with coordinates (st)

is a sequence of distinct pixels with coordinates

and are adjacent 0 0 1 1

1 1

( ) ( ) ( ) ( ) ( )( ) ( ) 12

n n

i i i i

x y x y x y x y s tx y x y i n

The length of the path is n If 0 0( ) ( )n nx y x y the path is closed

Depending on the type of adjacency considered the paths are 4- 8- or m-paths

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image R is a region of the image if R is a connected set

Two regions R1 and R2 are said to be adjacent if 1 2R R form a connected set Regions

that are not adjacent are said to be disjoint When referring to regions only 4- and

8-adjacency are considered

Digital Image Processing

Week 1

Suppose that an image contains K disjoint regions 1 kR k K none of which

touches the image border

the complement of 1

( )K

cu k u u

k

R R R R

We call all the points in Ru the foreground of the image and the points in ( )cuR the

background of the image

The boundary (border or contour) of a region R is the set of points that are adjacent to

points in the complement of R (R)c The border of an image is the set of pixels in the

region that have at least one background neighbor This definition is referred to as the

inner border to distinguish it from the notion of outer border which is the corresponding

border in the background

Digital Image Processing

Week 1

Distance measures

For pixels p q and z with coordinates (xy) (st) and (vw) respectively D is a distance

function or metric if

(a) D(p q) ge 0 D(p q) = 0 iff p=q

(b) D(p q) = D(q p)

(c) D(p z) le D(p q) + D(q z)

The Euclidean distance between p and q is defined as 1

2 2 2 22( ) ( ) ( ) ( ) ( )eD p q x s y t x s y t

The pixels q for which ( )eD p q r are the points contained in a disk of radius r

centered at (x y)

Digital Image Processing

Week 1

The D4 distance (also called city-block distance) between p and q is defined as

4( ) | | | |D p q x s y t

The pixels q for which 4( )D p q r form a diamond centered at (xy)

4

22 1 2

2 2 1 0 1 22 1 2

2

D

The pixels with D4 = 1 are the 4-neighbors of (x y)

The D8 distance (called the chessboard distance) between p and q is defined as

8( ) max| | | |D p q x s y t

The pixels q for which 8( )D p q r form a square centered at (x y)

Digital Image Processing

Week 1

8

2 2 2 2 22 1 1 1 2

2 2 1 0 1 22 1 1 1 22 2 2 2 2

D

The pixels with D8 = 1 are the 8-neighbors of (x y)

D4 and D8 distances are independent of any paths that might exist between p and q

because these distances involve only the coordinates of the point

Digital Image Processing

Week 1

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

11 12 11 12

21 22 21 22

a a b ba a b b

Array product

11 12 11 12 11 11 12 12

21 22 21 22 21 21 22 21

a a b b a b a ba a b b a b a b

Matrix product

11 12 11 12 11 11 12 21 11 12 12 21

21 22 21 22 21 11 22 21 21 12 22 22

a a b b a b a b a b a ba a b b a b a b a b a b

We assume array operations unless stated otherwise

Digital Image Processing

Week 1

Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether it is

linear or nonlinear

( ) ( )H f x y g x y

H is said to be a linear operator if

images1 2 1 2

1 2

( ) ( ) ( ) ( )

H a f x y b f x y a H f x y b H f x y

a b f f

Example of nonlinear operator

the maximum value of the pixels of image max ( )H f f x y f

1 2

0 2 6 5 1 1

2 3 4 7f f a b

Digital Image Processing

Week 1

1 2

0 2 6 5 6 3max max 1 ( 1) max 2

2 3 4 7 2 4a f b f

0 2 6 51 max ( 1) max 3 ( 1)7 4

2 3 4 7

Arithmetic Operations in Image Processing

Let g(xy) denote a corrupted image formed by the addition of noise ( ) ( ) ( )g x y f x y x y

f(xy) ndash the noiseless image η(xy) the noise uncorrelated and has 0 average value

For a random variable z with mean m E[(z-m)2] is the variance (E( ∙ ) is the expected

value) The covariance of two random variables z1 and z2 is defined as E[(z1-m1) (z2-m2)]

The two random variables are uncorrelated when their covariance is 0

Digital Image Processing

Week 1

Objective reduce noise by adding a set of noisy images ( )ig x y (technique frequently

used in image enhancement)

1

1( ) ( )K

ii

g x y g x yK

If the noise satisfies the properties stated above we have

2 2( ) ( )

1( ) ( ) g x y x yE g x y f x yK

( ( ))E g x y is the expected value of g and and 2 2( ) ( )g x y x y are the variances of

and g respectively The standard deviation (square root of the variance) at any point in

the average image is

( ) ( )1

g x y x yK

Digital Image Processing

Week 1

As K increases the variability (as measured by the variance or the standard deviation) of

the pixel values at each location (x y) decreases Because ( ) ( )E g x y f x y this

means that ( )g x y approaches f(x y) as the number of noisy images used in the

averaging process increases

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis Figure 226(a) shows an 8-bit image in which corruption

was simulated by adding to it Gaussian noise with zero mean and a standard deviation of

64 intensity levels Figures 226(b)-(f) show the result of averaging 5 10 20 50 and 100

images respectively

Digital Image Processing

Week 1

Fig 3 Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner) Results of averaging 5 10 20 50

100 noisy images

a b c d e f

Digital Image Processing

Week 1

A frequent application of image subtraction is in the enhancement of differences between

images

(a) (b) (c)

Fig 4 (a) Infrared image of Washington DC area (b) Image obtained from (a) by setting to zero the least

significant bit of each pixel (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a) The two images seem almost the same Figure 4(c) is the difference between

images (a) and (b) Black (0) values in Figure (c) indicate locations where there is no

difference between images (a) and (b)

Digital Image Processing

Week 1

Mask mode radiography ( ) ( ) ( )g x y f x y h x y

h(x y) the mask is an X-ray image of a region of a patientrsquos body captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source The procedure consists of injecting an X-ray contrast medium into the patientrsquos

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed

Digital Image Processing

Week 1

a b c d Fig 5 ndash Angiography ndash subtraction example (a) ndash mask image (b) ndash live image (c) ndash difference between (a) and (b) (d) - image (c) enhanced

Digital Image Processing

Week 1

An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form ( ) ( ) ( )g x y f x y h x y

f(x y) ndash the ldquoperfect imagerdquo h(x y) ndash the shading function

When the shading function is known

( )( )( )

g x yf x yh x y

h(x y) is unknown but we have access to the imaging system we can obtain an

approximation to the shading function by imaging a target of constant intensity When the

sensor is not available often the shading pattern can be estimated from the image

Digital Image Processing

Week 1

(a) (b) (c)

Fig 6 Shading correction (a) ndash Shaded image of a tungsten filament magnified 130 times (b) - shading pattern (c) corrected image

Digital Image Processing

Week 1

Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1rsquos (white) in the ROI and 0rsquos elsewhere There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

(a) (b) (c)

Fig 7 (a) ndash digital dental X-ray image (b) - ROI mask for teeth with fillings (c) product of (a) and (b)

Digital Image Processing

Week 1

In practice most images are displayed using 8 bits the image values are in the range [0255] TIFF JPEG images ndash conversion to this range is automatic The conversion depends on the system used Difference of two images can produce image with values in the range [-255255] Addition of two images ndash range [0510] Many software packages simply set the negative values to 0 and set to 255 all values greater than 255 A more appropriate procedure compute

min( )mf f f

0 ( 255)max( )

ms

m

ff K K K

f

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Digital Image Processing

Week 1

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

( )

1( ) ( )xyr c S

g x y f r cm n

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation ( ) [( )]x y T v w

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image

Digital Image Processing

Week 1

[( )] ( )2 2v wT v w shrinks the original image half its size in both spatial directions

Affine transform

11 1211 21 31

21 2212 22 33

31 32

0[ 1] [ 1] [ 1] 0

1

t tx t v t w t

x y v w T v w t ty t v t w t

t t

(AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1


Affine transformations


The preceding transformations relocate pixels on an image to new locations. To complete the process, we have to assign intensity values to those locations. This task is done using intensity interpolation (nearest neighbor, bilinear, or bicubic interpolation).

In practice we can use equation (AT) in two basic ways

forward mapping: scan the pixels of the input image (v, w) and compute the new spatial location (x, y) of the corresponding pixel in the new image using (AT) directly.

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)


inverse mapping: scans the output pixel locations and, at each location (x, y), computes the corresponding location in the input image: (v, w) = T^(-1)(x, y).

It then interpolates among the nearest input pixels to determine the intensity of the output pixel.

Inverse mappings are more efficient to implement than forward mappings and are used in numerous commercial implementations of spatial transformations (MATLAB, for example).
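A nearest-neighbor sketch of inverse mapping; T is any invertible 3x3 affine matrix such as the one composed above:

```python
import numpy as np

def warp(f, T, out_shape):
    """For each output pixel (x, y), fetch the input pixel nearest to
    (v, w) = [x y 1] T^-1; locations mapping outside f stay 0."""
    Tinv = np.linalg.inv(T)
    g = np.zeros(out_shape, dtype=f.dtype)
    for x in range(out_shape[0]):
        for y in range(out_shape[1]):
            v, w, _ = np.array([x, y, 1.0]) @ Tinv
            vi, wi = int(round(v)), int(round(w))
            if 0 <= vi < f.shape[0] and 0 <= wi < f.shape[1]:
                g[x, y] = f[vi, wi]
    return g
```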


Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more images taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image


How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

x = c1 v + c2 w + c3 v w + c4
y = c5 v + c6 w + c7 v w + c8

(v, w) and (x, y) are the coordinates of corresponding tie points (writing these equations for all 4 pairs gives an 8x8 linear system for the c_i).
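A sketch of estimating the eight coefficients; the tie-point coordinates below are invented, and the 8x8 system is solved as two independent 4x4 systems, one for x and one for y:

```python
import numpy as np

vw = np.array([[10, 10], [10, 200], [200, 10], [200, 200]], dtype=float)  # input image
xy = np.array([[12, 15], [14, 208], [205, 9], [210, 204]], dtype=float)   # reference image

A = np.column_stack([vw[:, 0], vw[:, 1], vw[:, 0] * vw[:, 1], np.ones(4)])
c1234 = np.linalg.solve(A, xy[:, 0])    # x = c1 v + c2 w + c3 v w + c4
c5678 = np.linalg.solve(A, xy[:, 1])    # y = c5 v + c6 w + c7 v w + c8
```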


When 4 tie points are insufficient to obtain a satisfactory registration, a frequently used approach is to select a larger number of tie points and use them to subdivide the image into rectangular subregions, each marked by a group of 4 tie points. On each such subregion we apply the transformation model described above. The number of tie points and the sophistication of the model required depend on the severity of the geometrical distortion.


(a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c)


Probabilistic Methods

z_i = the values of all possible intensities in an M x N digital image, i = 0, 1, ..., L-1

p(zk) = the probability that the intensity level zk occurs in the given image

p(z_k) = n_k / (M N)

n_k = the number of times that intensity z_k occurs in the image (M N is the total number of pixels in the image), so that

Σ_{k=0}^{L-1} p(z_k) = 1

The mean (average) intensity of an image is given by

m = Σ_{k=0}^{L-1} z_k p(z_k)


The variance of the intensities is

σ² = Σ_{k=0}^{L-1} (z_k - m)² p(z_k)

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

μ_n(z) = Σ_{k=0}^{L-1} (z_k - m)^n p(z_k)

(note that μ_0(z) = 1, μ_1(z) = 0, and μ_2(z) = σ²)

μ_3(z) > 0 : the intensities are biased to values higher than the mean
μ_3(z) < 0 : the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

μ_3(z) = 0 : the intensities are distributed approximately equally on both sides of the mean

Fig. 1 (a) low contrast; (b) medium contrast; (c) high contrast

Figure 1(a) – standard deviation 14.3 (variance = 204.5)
Figure 1(b) – standard deviation 31.6 (variance = 998.6)
Figure 1(c) – standard deviation 49.2 (variance = 2420.6)
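These quantities follow directly from the normalized histogram; a NumPy sketch on a stand-in image:

```python
import numpy as np

f = np.random.randint(0, 256, (64, 64))      # stand-in 8-bit image
L = 256

p = np.bincount(f.ravel(), minlength=L) / f.size   # p(z_k) = n_k / (M N)
z = np.arange(L)

m = np.sum(z * p)                     # mean intensity
var = np.sum((z - m) ** 2 * p)        # variance = 2nd central moment
std = np.sqrt(var)                    # standard deviation (contrast measure)
mu3 = np.sum((z - m) ** 3 * p)        # 3rd moment: its sign shows the bias
```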


Intensity Transformations and Spatial Filtering

g(x, y) = T[f(x, y)]

f(x, y) – input image; g(x, y) – output image; T – an operator on f defined over a neighborhood of (x, y)

- the neighborhood S_xy of the point (x, y) usually is rectangular, centered on (x, y), and much smaller in size than the image


- spatial filtering: the operator T (the neighborhood and the operation applied on it) is called a spatial filter (also: spatial mask, kernel, template, or window)

- when S_xy consists of the single point (x, y), T becomes an intensity (gray-level or mapping) transformation function s = T(r), where s and r denote, respectively, the intensities of g and f at (x, y)

Figure 2 (left): T produces an output image of higher contrast than the original by darkening the intensity levels below k and brightening the levels above k; this technique is called contrast stretching.

Fig. 2 Intensity transformation functions: left – contrast stretching; right – thresholding function


Figure 2 (right): T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L-1] is obtained using the function s = T(r) = L - 1 - r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image


Original image and its negative


Log Transformations: s = T(r) = c log(1 + r), where c is a constant and r >= 0

Some basic intensity transformation functions


This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) – intensity values in the range 0 to 1.5 x 10^6
Figure 4(b) – log transformation of Figure 4(a) with c = 1; range 0 to 6.2

Fig. 4 (a) Fourier spectrum; (b) log transformation applied to (a), c = 1


Power-Law (Gamma) Transformations

s = T(r) = c r^γ, with c and γ positive constants (sometimes written s = c (r + ε)^γ to account for an offset when the input is zero)

Plots of gamma transformation for different values of γ (c=1)


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. The curves with γ > 1 have the opposite effect of those generated with γ < 1.

c = γ = 1: identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction


(a) aerial image; (b)-(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively
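The three basic transformations above reduce to one line each; a NumPy sketch on a stand-in image, with the constants c chosen here so the outputs fill [0, L-1]:

```python
import numpy as np

L = 256
r = np.random.randint(0, L, (64, 64)).astype(np.float64)   # stand-in image

negative = (L - 1) - r                            # s = L - 1 - r

c = (L - 1) / np.log(1 + r.max())
log_tr = c * np.log(1 + r)                        # s = c log(1 + r)

gamma = 0.4                                       # gamma < 1 expands dark values
power = (L - 1) * (r / (L - 1)) ** gamma          # s = c r^gamma
```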


Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

Fig. 5 (a)-(d)


           (s1 / r1) r                                     r in [0, r1]
T(r) =     s1 + ((s2 - s1) / (r2 - r1)) (r - r1)           r in [r1, r2]
           s2 + ((L - 1 - s2) / (L - 1 - r2)) (r - r2)     r in [r2, L-1]


r1 = s1 and r2 = s2: identity transformation (no change)
r1 = r2, s1 = 0, s2 = L - 1: thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting (r1, s1) = (r_min, 0) and (r2, s2) = (r_max, L - 1), where r_min and r_max denote the minimum and maximum gray levels in the image, respectively. Thus the transformation function stretched the levels linearly from their original range to the full range [0, L-1].

Figure 5(d) - the thresholding function was used, with (r1, s1) = (m, 0) and (r2, s2) = (m, L - 1), where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times
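A sketch of the two settings used in Figures 5(c) and 5(d):

```python
import numpy as np

def stretch(f, L=256):
    """(r1, s1) = (r_min, 0), (r2, s2) = (r_max, L-1): full linear stretch."""
    f = f.astype(np.float64)
    r1, r2 = f.min(), f.max()
    return ((f - r1) * (L - 1) / (r2 - r1)).astype(np.uint8)

def threshold(f, L=256):
    """Limiting case r1 = r2 = m (the mean), s1 = 0, s2 = L-1."""
    return np.where(f > f.mean(), L - 1, 0).astype(np.uint8)
```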


Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two basic approaches for intensity-level slicing (both are sketched in code below):

1 display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities (Figure 3.11(a) in Gonzalez & Woods)

2 brighten (or darken) the desired range of intensities but leave unchanged all other intensities in the image (Figure 3.11(b))
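A sketch of both approaches; A, B, and the replacement value are parameters of the enhancement, not fixed by the method:

```python
import numpy as np

def slice_binary(f, A, B, L=256):
    """Approach 1: white inside [A, B], black elsewhere (binary output)."""
    return np.where((f >= A) & (f <= B), L - 1, 0).astype(np.uint8)

def slice_preserve(f, A, B, value=0):
    """Approach 2: set [A, B] to a chosen value, leave the rest unchanged."""
    g = f.copy()
    g[(f >= A) & (f <= B)] = value
    return g
```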


Figure 6 (left) – aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the intensity scale. This type of enhancement produces a binary image, which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example). In Figure 6 (right) the second technique was used: a band of intensities around the mean intensity of the image was set to black; the other intensities remain unchanged.

(first technique: highlights the intensity range [A, B] and reduces all other intensities to a lower level; second technique: highlights the range [A, B] and preserves all other intensities)

Fig. 6 - Aortic angiogram and intensity-sliced versions


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255] with an 8-bit representation in base 2. This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit, plane 8 – the highest-order bit).


The binary image for the 8th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all intensities between 0 and 127 to 0 and all levels between 128 and 255 to 1. Bit-plane slicing is useful for analyzing the relative importance of each bit in the image, which helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
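A sketch of extracting the planes and verifying the thresholding remark (k = 7 below is the highest-order plane, called plane 8 in the text):

```python
import numpy as np

f = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # stand-in 8-bit image

planes = [(f >> k) & 1 for k in range(8)]     # plane k: the k-th bit of every pixel

# the highest plane equals thresholding: 0..127 -> 0, 128..255 -> 1
assert np.array_equal(planes[7], (f >= 128).astype(np.uint8))

# crude use for compression: rebuild from the two highest planes only
approx = (planes[7] << 7) | (planes[6] << 6)
```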



Three membranes enclose the eye: the cornea and sclera (outer cover), the choroid, and the retina.

The cornea is a tough, transparent tissue that covers the anterior surface of the eye. Continuous with the cornea, the sclera is an opaque membrane that encloses the remainder of the optic globe.

The choroid lies directly below the sclera. This membrane contains a network of blood vessels (the major source of nutrition of the eye). The choroid is pigmented and helps reduce the amount of extraneous light entering the eye. At its anterior extreme, the choroid is divided into the ciliary body and the iris. The iris contracts and expands to control the amount of light admitted.


The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to the ciliary body (60-70% water, about 6% fat, and the rest mostly protein). The lens is colored slightly yellow. It absorbs approximately 8% of the visible light (infrared and ultraviolet light are absorbed by proteins within the lens).

The innermost membrane is the retina. When the eye is properly focused, light from an object outside the eye is imaged on the retina. Vision is possible because of the distribution of discrete light receptors on the surface of the retina: cones and rods (6-7 million cones, 75-150 million rods).

Cones: located in the central part of the retina (the fovea); sensitive to color; vision of detail; each cone is linked to its own nerve. Cone vision = photopic or bright-light vision.

Fovea = the place where the image of the object of interest falls.


Rods: distributed over the entire retinal surface; several rods are connected to a single nerve; not specialized in detail vision; serve to give a general, overall picture of the field of view; not involved in color vision; sensitive to low levels of illumination.

Blind spot: region without receptors.


Image formation in the eye

Ordinary photographic camera: the lens has a fixed focal length; focusing at various distances is done by modifying the distance between the lens and the image plane (where the film or imaging chip is located).

Human eye: the distance between the lens and the retina (the imaging region) is fixed; the focal length needed to achieve proper focus is obtained by varying the shape of the lens (the fibers in the ciliary body accomplish this, flattening or thickening the lens for distant or near objects, respectively).

distance between lens and retina along the visual axis = 17 mm
range of focal lengths = 14 mm to 17 mm


Illustration of the Mach band effect. Perceived intensity is not a simple function of actual intensity.


All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter


Optical illusions


(Color-word slide, Romanian with English translations: ALBASTRU = blue, VERDE = green, GALBEN = yellow, ROSU = red, PORTOCALIU = orange, VISINIU = burgundy, ALB = white)


Light
• achromatic or monochromatic light – light that is void of color; the attribute of such light is its intensity, or amount. Gray level is used to describe monochromatic intensity because it ranges from black, to grays, and finally to white.
• chromatic light spans the electromagnetic energy spectrum from approximately 0.43 to 0.79 μm.

Quantities that describe the quality of a chromatic light source:
• radiance: the total amount of energy that flows from the light source; usually measured in watts (W)
• luminance: measured in lumens (lm); gives a measure of the amount of energy an observer perceives from a light source


For example, light emitted from a source operating in the far infrared region of the spectrum could have significant energy (radiance), but an observer would hardly perceive it; its luminance would be almost zero.

• brightness: a subjective descriptor of light perception that is practically impossible to measure. It embodies the achromatic notion of intensity and is one of the key factors in describing color sensation.


For an image function f defined on a domain D, the physical meaning of the values f(x, y) is determined by the source of the image.

Image generated from a physical process: f(x, y) is proportional to the energy radiated by the physical source.

f(x, y) is characterized by two components:
i(x, y) = the illumination component: the amount of source illumination incident on the scene being viewed
r(x, y) = the reflectance component: the amount of illumination reflected by the objects in the scene

f(x, y) = i(x, y) r(x, y),   with 0 < i(x, y) < ∞ and 0 < r(x, y) < 1


r(x, y) = 0: total absorption; r(x, y) = 1: total reflectance
i(x, y) – determined by the illumination source
r(x, y) – determined by the characteristics of the imaged objects

In practice: L_min <= l = f(x, y) <= L_max, with L_min = i_min r_min and L_max = i_max r_max
(typical indoor values, without additional illumination: L_min ≈ 10, L_max ≈ 1000)

The interval [L_min, L_max] is called the gray (or intensity) scale. In practice it is shifted to [0, L-1], where l = 0 is black and l = L-1 is white.


Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed

scene

Converting a continuous image f to digital form:
- digitizing the spatial coordinates (x, y) is called sampling
- digitizing the amplitude values f(x, y) is called quantization


Continuous image projected onto a sensor array; result of image sampling and quantization


Representing Digital Images

(x, y), x = 0, 1, ..., M-1, y = 0, 1, ..., N-1 – spatial variables or spatial coordinates

The image can be written as an M x N array,

f(x, y) = | f(0,0)     f(0,1)     ...  f(0,N-1)   |
          | f(1,0)     f(1,1)     ...  f(1,N-1)   |
          | ...        ...             ...        |
          | f(M-1,0)   f(M-1,1)   ...  f(M-1,N-1) |

Each element of the array (an image element, or pixel) can also be written a_ij = f(x = i, y = j) = f(i, j), giving the M x N matrix A = [a_ij], i = 0, ..., M-1, j = 0, ..., N-1.

f(0,0) – the upper left corner of the image


M > 0, N > 0, L = 2^k, with a_ij in [0, L-1]

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system. Upper limit – determined by saturation; lower limit – determined by noise.


Number of bits required to store a digitized image

b = M x N x k; for M = N, b = N^2 k

When an image can have 2^k intensity levels, it is referred to as a k-bit image (256 discrete intensity values – an 8-bit image).
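A quick numerical check of the formula:

```python
M, N, k = 1024, 1024, 8               # a 1024 x 1024 image with 256 gray levels
b = M * N * k                         # bits
print(b, "bits =", b // 8, "bytes")   # 8388608 bits = 1048576 bytes (1 MB)
```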


Spatial and Intensity Resolution

Spatial resolution – the smallest discernible detail in an image

Measures: line pairs per unit distance; dots (pixels) per unit distance

Image resolution = the largest number of discernible line pairs per unit distance

(e.g., 100 line pairs per mm)

Dots per unit distance are commonly used in printing and publishing

In the US this measure is expressed in dots per inch (dpi)

(newspapers are printed at 75 dpi, glossy brochures at 175 dpi)

Intensity resolution – the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations

L = 2^k; most commonly k = 8

In practice, intensity resolution is given by k (the number of bits used to quantize intensity)


Fig. 1 Reducing spatial resolution: 1250 dpi (upper left), 300 dpi (upper right), 150 dpi (lower left), 72 dpi (lower right)


Reducing the number of gray levels: 256, 128, 64, 32


Reducing the number of gray levels: 16, 8, 4, 2


Image Interpolation – used in zooming, shrinking, rotating, and geometric corrections

Shrinking and zooming – image resizing – are image resampling methods.

Interpolation is the process of using known data to estimate values at unknown locations

Suppose we have an image of size 500 x 500 pixels that has to be enlarged 1.5 times, to 750 x 750 pixels. One way to do this is to create an imaginary 750 x 750 grid with the same spacing as the original and then shrink it so that it fits exactly over the original image. The pixel spacing in the 750 x 750 grid will be less than in the original image.

Problem: assignment of intensity levels in the new 750 x 750 grid.

Nearest neighbor interpolation: assign to every point in the new grid (750 x 750) the intensity of the closest pixel (nearest neighbor) from the old/original grid (500 x 500).

This technique has the tendency to produce undesirable effects like severe distortion of

straight edges


Bilinear interpolation – assign to the new (x, y) location the intensity v(x, y) = a x + b y + c x y + d, where the four coefficients are determined from the 4 equations in 4 unknowns that can be written using the 4 nearest neighbors of point (x, y).

Bilinear interpolation gives much better results than nearest neighbor interpolation with a

modest increase in computational effort
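A sketch of evaluating one bilinear sample; the 4-coefficient fit over the surrounding 2x2 cell reduces to the familiar weighted average of the 4 neighbors:

```python
import numpy as np

def bilinear(f, x, y):
    """Intensity at a fractional location (x, y) from the 4 nearest pixels."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, f.shape[0] - 1)
    y1 = min(y0 + 1, f.shape[1] - 1)
    dx, dy = x - x0, y - y0
    top = (1 - dy) * f[x0, y0] + dy * f[x0, y1]
    bottom = (1 - dy) * f[x1, y0] + dy * f[x1, y1]
    return (1 - dx) * top + dx * bottom
```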

Bicubic interpolation – assign to the new (x, y) location an intensity that involves the 16 nearest neighbors of the point:

v(x, y) = Σ_{i=0}^{3} Σ_{j=0}^{3} c_ij x^i y^j

The 16 coefficients c_ij are obtained by solving a 16x16 linear system written using the intensity levels of the 16 nearest neighbors of (x, y).


Generally bicubic interpolation does a better job of preserving fine detail than the

bilinear technique Bicubic interpolation is the standard used in commercial image editing

programs such as Adobe Photoshop and Corel Photopaint

Figure 2(a) is the same as Fig. 1(d), which was obtained by reducing the resolution of the 1250 dpi image in Fig. 1(a) to 72 dpi (the size shrank from 3692 x 2812 to 213 x 162 pixels) and then zooming the reduced image back to its original size. To generate Fig. 1(d), nearest neighbor interpolation was used (both for shrinking and zooming).

Figures 2(b) and (c) were generated using the same steps but with bilinear and bicubic interpolation, respectively. Figures 2(d)-(f) were obtained by reducing the resolution from 1250 dpi to 150 dpi (instead of 72 dpi).


Fig. 2 – Interpolation examples for zooming and shrinking (nearest neighbor, bilinear, bicubic)


Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors: (x+1, y), (x-1, y) (horizontal) and (x, y+1), (x, y-1) (vertical).

This set of pixels, called the 4-neighbors of p, is denoted by N4(p).

The 4 diagonal neighbors of p have coordinates (x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)

and are denoted ND(p)

The horizontal vertical and diagonal neighbors are called the 8-neighbors of p denoted

N8 (p)

If (x, y) is on the border of the image, some of the neighbor locations in ND(p) and N8(p) fall outside the image.

Digital Image Processing

Week 1

Adjacency, Connectivity, Regions, Boundaries

Denote by V the set of intensity levels used to define adjacency:
- in a binary image, V is a subset of {0, 1} (V = {0} or V = {1})
- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, ..., 255}

We consider 3 types of adjacency:
(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if q is in N4(p)
(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if q is in N8(p)
(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if
    q is in N4(p), or
    q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency It is introduced to eliminate the

ambiguities that often arise when 8-adjacency is used Consider the example


Binary image example, with V = {1}:

0 1 1
0 1 0
0 0 1

(the same 3x3 arrangement is shown three times in the original figure: plain, with 8-adjacency links drawn as dashed lines, and with m-adjacency links)

The three pixels at the top (first line) in the above example show multiple (ambiguous)

8-adjacency as indicated by the dashed lines This ambiguity is removed by using

m-adjacency
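A small sketch of the three adjacency tests; the image is a plain list of lists, and p and q are (row, column) tuples:

```python
def n4(p):
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(p):
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def m_adjacent(img, V, p, q):
    """q in N4(p), or q in ND(p) with no V-valued pixel in N4(p) & N4(q)."""
    def has_V(pt):
        x, y = pt
        return 0 <= x < len(img) and 0 <= y < len(img[0]) and img[x][y] in V
    if not (has_V(p) and has_V(q)):
        return False
    if q in n4(p):
        return True
    return q in nd(p) and not any(has_V(t) for t in n4(p) & n4(q))
```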


A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

(x_0, y_0), (x_1, y_1), ..., (x_n, y_n),

where (x_0, y_0) = (x, y), (x_n, y_n) = (s, t), and the pixels (x_i, y_i) and (x_{i-1}, y_{i-1}) are adjacent for i = 1, 2, ..., n.

The length of the path is n. If (x_0, y_0) = (x_n, y_n), the path is closed.

Depending on the type of adjacency considered the paths are 4- 8- or m-paths

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image. R is a region of the image if R is a connected set.
Two regions R1 and R2 are said to be adjacent if R1 ∪ R2 forms a connected set. Regions that are not adjacent are said to be disjoint. When referring to regions, only 4- and 8-adjacency are considered.


Suppose that an image contains K disjoint regions R_k, k = 1, ..., K, none of which touches the image border. Let R_u denote the union of all the regions, and let (R_u)^c denote its complement:

R_u = R_1 ∪ R_2 ∪ ... ∪ R_K

We call all the points in R_u the foreground of the image, and the points in (R_u)^c the background of the image.

The boundary (border or contour) of a region R is the set of points that are adjacent to

points in the complement of R (R)c The border of an image is the set of pixels in the

region that have at least one background neighbor This definition is referred to as the

inner border to distinguish it from the notion of outer border which is the corresponding

border in the background


Distance measures

For pixels p, q, and z, with coordinates (x, y), (s, t), and (v, w), respectively, D is a distance function, or metric, if:

(a) D(p, q) >= 0, with D(p, q) = 0 iff p = q
(b) D(p, q) = D(q, p)
(c) D(p, z) <= D(p, q) + D(q, z)

The Euclidean distance between p and q is defined as

D_e(p, q) = [(x - s)^2 + (y - t)^2]^(1/2)

The pixels q for which D_e(p, q) <= r are the points contained in a disk of radius r centered at (x, y).


The D4 distance (also called city-block distance) between p and q is defined as

D4(p, q) = |x - s| + |y - t|

The pixels q for which D4(p, q) <= r form a diamond centered at (x, y). For example, the pixels with D4 <= 2:

        2
      2 1 2
    2 1 0 1 2
      2 1 2
        2

The pixels with D4 = 1 are the 4-neighbors of (x, y).

The D8 distance (called the chessboard distance) between p and q is defined as

D8(p, q) = max(|x - s|, |y - t|)

The pixels q for which D8(p, q) <= r form a square centered at (x, y).


For example, the pixels with D8 <= 2:

    2 2 2 2 2
    2 1 1 1 2
    2 1 0 1 2
    2 1 1 1 2
    2 2 2 2 2

The pixels with D8 = 1 are the 8-neighbors of (x y)

D4 and D8 distances are independent of any paths that might exist between p and q

because these distances involve only the coordinates of the point
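The three metrics in code (p and q are coordinate pairs):

```python
import math

def d_e(p, q):      # Euclidean: disks of constant distance
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d_4(p, q):      # city-block: diamonds of constant distance
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d_8(p, q):      # chessboard: squares of constant distance
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))
```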


Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis. Consider two 2x2 images

A = | a11 a12 |    B = | b11 b12 |
    | a21 a22 |        | b21 b22 |

Array product:

A .* B = | a11 b11   a12 b12 |
         | a21 b21   a22 b22 |

Matrix product:

A B = | a11 b11 + a12 b21    a11 b12 + a12 b22 |
      | a21 b11 + a22 b21    a21 b12 + a22 b22 |

We assume array operations unless stated otherwise
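In NumPy the two products are written differently, which makes the distinction explicit:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A * B)    # array product:  [[ 5 12] [21 32]]
print(A @ B)    # matrix product: [[19 22] [43 50]]
```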


Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether it is

linear or nonlinear

H[f(x, y)] = g(x, y)

H is said to be a linear operator if

H[a f1(x, y) + b f2(x, y)] = a H[f1(x, y)] + b H[f2(x, y)]

for any scalars a, b and any images f1, f2.

Example of a nonlinear operator: H[f] = max{f(x, y)}, the maximum pixel value of the image f.

Take

f1 = | 0 2 |    f2 = | 6 5 |    a = 1, b = -1
     | 2 3 |         | 4 7 |

Then

max{ a f1 + b f2 } = max{ | -6 -3 | } = -2
                          | -2 -4 |

while

a max{f1} + b max{f2} = 1 * 3 + (-1) * 7 = -4 ≠ -2,

so the max operator is nonlinear.
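The same counterexample, checked numerically:

```python
import numpy as np

f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 5], [4, 7]])
a, b = 1, -1

lhs = np.max(a * f1 + b * f2)             # H applied to the combination: -2
rhs = a * np.max(f1) + b * np.max(f2)     # combination of the results:   -4
print(lhs, rhs)                           # -2 -4 -> max is not linear

# the sum operator, by contrast, passes the test (it is linear)
assert np.sum(a * f1 + b * f2) == a * np.sum(f1) + b * np.sum(f2)
```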

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise:

g(x, y) = f(x, y) + η(x, y)

where f(x, y) is the noiseless image and η(x, y) is noise, uncorrelated and with zero average value.

For a random variable z with mean m, E[(z - m)^2] is the variance (E(·) is the expected value). The covariance of two random variables z1 and z2 is defined as E[(z1 - m1)(z2 - m2)]. The two random variables are uncorrelated when their covariance is 0.


Objective: reduce noise by averaging a set of noisy images g_i(x, y) (a technique frequently used in image enhancement):

ḡ(x, y) = (1/K) Σ_{i=1}^{K} g_i(x, y)

If the noise satisfies the properties stated above, we have

E{ḡ(x, y)} = f(x, y)   and   σ²_ḡ(x,y) = (1/K) σ²_η(x,y)

where E{ḡ(x, y)} is the expected value of ḡ, and σ²_ḡ(x,y) and σ²_η(x,y) are the variances of ḡ and η, respectively. The standard deviation (square root of the variance) at any point in the average image is

σ_ḡ(x,y) = (1/√K) σ_η(x,y)


As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because E{ḡ(x, y)} = f(x, y), this means that ḡ(x, y) approaches f(x, y) as the number of noisy images used in the averaging process increases.
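A quick simulation of this behavior, with σ = 64 as in the example below; the "scene" is a flat stand-in image:

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.full((128, 128), 100.0)          # noiseless stand-in image

for K in (5, 10, 20, 50, 100):
    g_bar = np.mean([f + rng.normal(0, 64, f.shape) for _ in range(K)], axis=0)
    # the measured deviation tracks the predicted 64 / sqrt(K)
    print(K, (g_bar - f).std().round(1), (64 / np.sqrt(K)).round(1))
```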

An important application of image averaging is in the field of astronomy, where imaging under very low light levels frequently causes sensor noise to render single images virtually useless for analysis. Figure 3(a) shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels. Figures 3(b)-(f) show the results of averaging 5, 10, 20, 50, and 100 such images, respectively.


Fig. 3 (a) Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise; (b)-(f) results of averaging 5, 10, 20, 50, and 100 noisy images


A frequent application of image subtraction is in the enhancement of differences between

images

Fig. 4 (a) Infrared image of the Washington, D.C. area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between images (a) and (b). Black (0) values in Figure 4(c) indicate locations where there is no difference between images (a) and (b).


Mask mode radiography: g(x, y) = f(x, y) - h(x, y)

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images, called live images (denoted f(x, y)), of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium.

In g(x, y) the differences between h and f appear as enhanced detail. Since images are captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries in the area being observed.


Fig. 5 – Angiography subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c), enhanced


An important application of image multiplication (and division) is shading correction. Suppose that an imaging sensor produces images in the form

g(x, y) = f(x, y) h(x, y)

where f(x, y) is the "perfect image" and h(x, y) is the shading function. When the shading function is known,

f(x, y) = g(x, y) / h(x, y)

When h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.
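A simulation of the known-shading case; both the image and the Gaussian-shaped shading pattern below are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.integers(100, 200, (128, 128)).astype(np.float64)   # "perfect" image

yy, xx = np.mgrid[0:128, 0:128]          # smooth multiplicative shading pattern
h = 0.5 + 0.5 * np.exp(-((xx - 64) ** 2 + (yy - 64) ** 2) / 4000.0)

g = f * h                 # what the sensor delivers
f_hat = g / h             # correction when h is known (e.g. imaged flat target)
assert np.allclose(f_hat, f)
```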


Fig. 6 Shading correction: (a) shaded image of a tungsten filament, magnified approximately 130 times; (b) shading pattern; (c) corrected image

Digital Image Processing

Week 1

Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1rsquos (white) in the ROI and 0rsquos elsewhere There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

(a) (b) (c)

Fig 7 (a) ndash digital dental X-ray image (b) - ROI mask for teeth with fillings (c) product of (a) and (b)

Digital Image Processing

Week 1

In practice most images are displayed using 8 bits the image values are in the range [0255] TIFF JPEG images ndash conversion to this range is automatic The conversion depends on the system used Difference of two images can produce image with values in the range [-255255] Addition of two images ndash range [0510] Many software packages simply set the negative values to 0 and set to 255 all values greater than 255 A more appropriate procedure compute

min( )mf f f

0 ( 255)max( )

ms

m

ff K K K

f

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Digital Image Processing

Week 1

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

( )

1( ) ( )xyr c S

g x y f r cm n

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation ( ) [( )]x y T v w

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image

Digital Image Processing

Week 1

[( )] ( )2 2v wT v w shrinks the original image half its size in both spatial directions

Affine transform

11 1211 21 31

21 2212 22 33

31 32

0[ 1] [ 1] [ 1] 0

1

t tx t v t w t

x y v w T v w t ty t v t w t

t t

(AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

Digital Image Processing

Week 1

inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image (v w) 1( ) ( )v w T x y

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 54: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Three membranes enclose the eye the cornea and sclera outer cover the choroid the retina

The cornea is a tough transparent tissue that covers the anterior surface of the eye

Continuous with the cornea the sclera is an opaque membrane that enclosesthe remainder of the optic globe

The choroid lies directly below the sclera This membrane contains a network of blood vessels (major nutrition of the eye) The choroid is pigmented and help reduce the amount of light entering the eyeThe choroid is divided (at its anterior extreme) into the ciliary body and the irisThe iris contracts and expands to control the amount of light

Digital Image ProcessingDigital Image Processing

Week 1Week 1

The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to ciliary body (60-70 water 6 fat protein) The lens is colored in slightly yellow The lens absorbs approximatively 8 of the visiblelight (infrared and ultraviolet light are absorbed by proteins in lens)

The innermost membrane is the retina When the eye is proper focusedlight from an object outside the eye is imaged on the retinaVision is possible because of the distribution of discrete light receptors on the surface of the retina cones and rods (6-7 milion cones 75-150 milion rods)

Cones located in the central part of the retina (fovea) they are sensitive to colors vision of detail each cone is link to its own nervecone vision = photopic or bright-light vision

Fovea = the place where the image of the object of interest falls on

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Rods distributed over al the retina surface several rods are contected toa single nerve not specialized in detail vision serve to give a general overall picture of the filed of viewnot involved in color visionsensitive to low level of illumination

Blind spot region without receptors

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image formation in the eye

Ordinary photographic camera the lens has fixed focal length focusing atvarious distances is done by modifying the distance between the lens and the image plane (were the film or imaging chip are located)

Human eye the distance between the lens and the retina (the imaging region)is fixed the focal length needed to achieve proper focus is obtained by varyingthe shape of the lens (the fibers in the ciliary body accomplish this flattening or thickening the lens for distant or near objects respectively

distance between lens and retina along visual axix = 17 mm

range of focal length = 14 mm to 17 mm

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Illustration of Mach band effectPercieved intensityis not a simple function of actual intensity

Digital Image ProcessingDigital Image Processing

Week 1Week 1

All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Optical illusions

Digital Image ProcessingDigital Image Processing

Week 1Week 1

ALBASTRU BLUEVERDE GREENGALBEN YELLOW ROSU REDPORTOCALIU ORANGEGALBEN YELLOWVISINIU BURGUNDYALB WHITE

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Lightbullachromatic or monochromatic light - light that is void of color the attribute of such light is its intensity or amount

gray level is used to describe monochromatic intensitybecause it ranges from black to grays and to white bull chromatic light spans the electromagnetic energy spectrumfrom approximately 043 to 079 μm

quantities that describe the quality of a chromatic light source radiance

the total amount of energy that flows from the light source and it isusually measured in watts (W) luminance

measured in lumens (lm) gives a measure of the amount of energy an observer perceives from a light source

Digital Image ProcessingDigital Image Processing

Week 1Week 1

For example light emitted from a source operating in the far infraredregion of the spectrum could have significant energy (radiance) butan observer would hardly perceive it its luminance would be almostzero

brightnessa subjective descriptor of light perception that is practicallyimpossible to measure It embodies the achromatic notion ofintensity and is one of the key factors in describing color sensation

Digital Image ProcessingDigital Image Processing

Week 1Week 1

the physical meaning is determined by the source of the image

( )f D f x y

Image generated from a physical process f(xy) ndash proportional to the energy radiated by the physical source

f(xy) ndash characterized by two components

i(xy) = illumination component the amount of source illumination incident on the scene being viewed

r(xy) = reflectance component the amount of illumination reflected by the objects in the scene

( ) ( ) ( )

0 ( ) 0 ( ) 1

f x y i x y r x y

i x y r x y

Digital Image ProcessingDigital Image Processing

Week 1Week 1

r(xy)=0 - total absorption r(xy)=1 - total reflectance

i(xy) ndash determined by the illumination source

r(xy) ndash determined by the characteristics of the imaged objects

is called gray (or intensity) scale

In practice

min 0 0 max min min min max max max( ) L l f x y L L i r L i r

indoor values without additional illuminationmin max10 1000L L

black whitemin max0 1 0 1 0 1L L L L l l L

min maxL L

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image Processing

Week 1

Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed

scene

converting a continuous image f to digital form

- digitizing (x y) is called sampling

- digitizing f(x y) is called quantization

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Continuous image projected onto a sensor array Result of image sampling and quantization

Digital Image Processing

Week 1

Representing Digital Images (xy) x = 01hellipM-1 y = 01hellipN-1 ndash spatial variables or spatial coordinates

(00) (01) (0 1)(10) (11) (1 1)

( )

( 10) ( 11) ( 1 1)

f f f Nf f f N

f x y

f M f M f M N

image element pixel

00 01 0 1

10 11 1 1

10 11 1 1

( ) ( )

N

i jN M N

i j

M M M N

a a aa f x i y j f i ja a a

Aa

a a a

f(00) ndash the upper left corner of the image

Digital Image Processing

Week 1

M N ge 0 L=2k

[0 1]i j i ja a L

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system Upper limit ndash determined by saturation lower limit - noise

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Number of bits required to store a digitized image

for 2 b M N k M N b N k

When an image can have 2k intensity levels the image is referred as a k-bit image 256 discrete intensity values ndash 8-bit image

Digital Image Processing

Week 1

Spatial and Intensity Resolution Spatial resolution ndash the smallest discernible detail in an image

Measures line pairs per unit distance dots (pixels) per unit distance

Image resolution = the largest number of discernible line pairs per unit distance

(eg 100 line pairs per mm)

Dots per unit distance are commonly used in printing and publishing

In US the measure is expressed in dots per inch (dpi)

(newspapers are printed with 75 dpi glossy brochures at 175 dpi)

Intensity resolution ndash the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations

L=2k ndash most common k = 8

Intensity resolution in practice is given by k (number of bits used to quantize intensity)

Digital Image Processing

Week 1

Fig1 Reducing spatial resolution 1250 dpi(upper left) 300 dpi (upper right)

150 dpi (lower left) 72 dpi (lower right)

Digital Image Processing

Week 1

Reducing the number of gray levels 256 128 64 32

Digital Image Processing

Week 1

Reducing the number of gray levels 16 8 4 2

Digital Image Processing

Week 1

Image Interpolation: used in zooming, shrinking, rotating, and geometric corrections. Shrinking and zooming (image resizing) are image resampling methods.

Interpolation is the process of using known data to estimate values at unknown locations.

Suppose we have an image of size 500 × 500 pixels that has to be enlarged 1.5 times, to 750 × 750 pixels. One way to do this is to create an imaginary 750 × 750 grid with the same pixel spacing as the original, and then shrink it so that it fits exactly over the original image. The pixel spacing in the 750 × 750 grid will be less than in the original image. Problem: assignment of intensity levels in the new 750 × 750 grid.

Nearest-neighbor interpolation: assign to every point in the new grid (750 × 750) the intensity of the closest pixel (nearest neighbor) from the old/original grid (500 × 500). This technique has the tendency to produce undesirable effects, like severe distortion of straight edges.

Bilinear interpolation: assign to the new (x, y) location the intensity
$$v(x, y) = a x + b y + c x y + d$$
where the four coefficients are determined from the 4 equations in 4 unknowns that can be written using the 4 nearest neighbors of point (x, y). Bilinear interpolation gives much better results than nearest-neighbor interpolation, with a modest increase in computational effort.

Bicubic interpolation: assign to the new (x, y) location an intensity that involves the 16 nearest neighbors of the point:
$$v(x, y) = \sum_{i=0}^{3} \sum_{j=0}^{3} c_{ij}\, x^i y^j$$
The coefficients $c_{ij}$ are obtained by solving a 16×16 linear system written using the intensity levels of the 16 nearest neighbors of (x, y).
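For a concrete (hedged) experiment, SciPy's ndimage.zoom exposes all three schemes through its spline order parameter; note that order 3 is cubic-spline interpolation, close in spirit to, though not identical with, the bicubic convolution described above. The random test image is a stand-in:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(500, 500)).astype(float)  # stand-in 500 x 500 image

nearest  = ndimage.zoom(img, 1.5, order=0)  # order 0: nearest neighbor
bilinear = ndimage.zoom(img, 1.5, order=1)  # order 1: bilinear
cubic    = ndimage.zoom(img, 1.5, order=3)  # order 3: cubic spline (bicubic-like)
print(nearest.shape, bilinear.shape, cubic.shape)  # (750, 750) for all three
```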


Generally, bicubic interpolation does a better job of preserving fine detail than the bilinear technique. Bicubic interpolation is the standard used in commercial image-editing programs, such as Adobe Photoshop and Corel Photopaint.

Figure 2(a) is the same as Fig. 1(d), which was obtained by reducing the resolution of the 1250 dpi image in Fig. 1(a) to 72 dpi (the size shrank from 3692 × 2812 to 213 × 162) and then zooming the reduced image back to its original size; nearest-neighbor interpolation was used both for shrinking and zooming. Figures 2(b) and (c) were generated using the same steps, but with bilinear and bicubic interpolation, respectively. Figures 2(d)-(f) were obtained by reducing the resolution from 1250 dpi to 150 dpi (instead of 72 dpi).

Fig. 2 – Interpolation examples for zooming and shrinking (nearest-neighbor, bilinear, bicubic).

Neighbors of a Pixel

Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors:
$$(x+1, y),\; (x-1, y),\; (x, y+1),\; (x, y-1)$$
This set of pixels, called the 4-neighbors of p, is denoted $N_4(p)$.

The 4 diagonal neighbors of p have coordinates
$$(x+1, y+1),\; (x+1, y-1),\; (x-1, y+1),\; (x-1, y-1)$$
and are denoted $N_D(p)$.

The horizontal, vertical, and diagonal neighbors together are called the 8-neighbors of p, denoted $N_8(p)$.

If (x, y) is on the border of the image, some of the neighbor locations in $N_D(p)$ and $N_8(p)$ fall outside the image.
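A minimal sketch of these neighborhoods in Python (the function names n4, nd, n8 are ad hoc; the optional image size arguments implement the border remark above):

```python
def n4(x, y):
    # 4-neighbors of p = (x, y)
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def nd(x, y):
    # diagonal neighbors of p = (x, y)
    return [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]

def n8(x, y, M=None, N=None):
    # 8-neighbors; if an M x N image size is given, drop locations outside it
    nb = n4(x, y) + nd(x, y)
    if M is not None and N is not None:
        nb = [(i, j) for (i, j) in nb if 0 <= i < M and 0 <= j < N]
    return nb

print(n8(0, 0, M=4, N=4))   # border pixel: only 3 of the 8 neighbors survive
```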


Adjacency, Connectivity, Regions, Boundaries

Denote by V the set of intensity levels used to define adjacency:
- in a binary image, V ⊆ {0, 1} (V = {0} or V = {1});
- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, …, 255}.

We consider 3 types of adjacency:

(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if $q \in N_4(p)$;

(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if $q \in N_8(p)$;

(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if
$q \in N_4(p)$, or
$q \in N_D(p)$ and the set $N_4(p) \cap N_4(q)$ has no pixels whose values are from V.

Mixed adjacency is a modification of 8-adjacency. It is introduced to eliminate the ambiguities that often arise when 8-adjacency is used. Consider the example:


Binary image, V = {1}:

0 1 1
0 1 0
0 0 1

(the same arrangement is shown three times: the pixel values, the 8-adjacency links, and the m-adjacency links)

The three pixels at the top (first line) in the above example show multiple (ambiguous) 8-adjacency, as indicated by the dashed lines. This ambiguity is removed by using m-adjacency.


A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates
$$(x_0, y_0), (x_1, y_1), \ldots, (x_n, y_n)$$
where $(x_0, y_0) = (x, y)$, $(x_n, y_n) = (s, t)$, and the pixels $(x_i, y_i)$ and $(x_{i-1}, y_{i-1})$ are adjacent for $i = 1, 2, \ldots, n$.

The length of the path is n. If $(x_0, y_0) = (x_n, y_n)$, the path is closed.

Depending on the type of adjacency considered, the paths are 4-, 8-, or m-paths.

Let S denote a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting only of pixels from S.

S is a connected set if there is a path in S between any 2 pixels in S.

Let R be a subset of pixels in an image. R is a region of the image if R is a connected set.

Two regions $R_1$ and $R_2$ are said to be adjacent if $R_1 \cup R_2$ forms a connected set. Regions that are not adjacent are said to be disjoint. When referring to regions, only 4- and 8-adjacency are considered.


Suppose that an image contains K disjoint regions $R_k$, $k = 1, \ldots, K$, none of which touches the image border. Let $R_u = \bigcup_{k=1}^{K} R_k$, and let $(R_u)^c$ denote its complement.

We call all the points in $R_u$ the foreground of the image, and the points in $(R_u)^c$ the background of the image.

The boundary (border or contour) of a region R is the set of points that are adjacent to points in the complement of R, $(R)^c$. Equivalently, the border of a region is the set of pixels in the region that have at least one background neighbor. This definition is referred to as the inner border, to distinguish it from the notion of outer border, which is the corresponding border in the background.


Distance measures

For pixels p, q, and z with coordinates (x, y), (s, t), and (v, w) respectively, D is a distance function (or metric) if:

(a) D(p, q) ≥ 0, and D(p, q) = 0 iff p = q;
(b) D(p, q) = D(q, p);
(c) D(p, z) ≤ D(p, q) + D(q, z).

The Euclidean distance between p and q is defined as
$$D_e(p, q) = \left[ (x - s)^2 + (y - t)^2 \right]^{1/2}$$

The pixels q for which $D_e(p, q) \le r$ are the points contained in a disk of radius r centered at (x, y).


The D4 distance (also called city-block distance) between p and q is defined as
$$D_4(p, q) = |x - s| + |y - t|$$

The pixels q for which $D_4(p, q) \le r$ form a diamond centered at (x, y); for example, the pixels with $D_4 \le 2$:

        2
      2 1 2
    2 1 0 1 2
      2 1 2
        2

The pixels with $D_4 = 1$ are the 4-neighbors of (x, y).

The D8 distance (also called chessboard distance) between p and q is defined as
$$D_8(p, q) = \max(|x - s|, |y - t|)$$

The pixels q for which $D_8(p, q) \le r$ form a square centered at (x, y).


For example, the pixels with $D_8 \le 2$:

    2 2 2 2 2
    2 1 1 1 2
    2 1 0 1 2
    2 1 1 1 2
    2 2 2 2 2

The pixels with $D_8 = 1$ are the 8-neighbors of (x, y).

The D4 and D8 distances are independent of any paths that might exist between p and q, because these distances involve only the coordinates of the points.
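All three metrics are one-liners; here is a small sketch, with a worked check for p = (2, 3) and q = (5, 1), for which $D_e = \sqrt{13} \approx 3.61$, $D_4 = 5$, $D_8 = 3$:

```python
import numpy as np

def d_e(p, q):   # Euclidean distance
    return np.hypot(p[0] - q[0], p[1] - q[1])

def d_4(p, q):   # city-block distance
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d_8(p, q):   # chessboard distance
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (2, 3), (5, 1)
print(d_e(p, q), d_4(p, q), d_8(p, q))   # 3.605..., 5, 3
```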


Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis. Consider the 2×2 images
$$\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \quad\text{and}\quad \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix}$$

Array product:
$$\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} = \begin{bmatrix} a_{11}b_{11} & a_{12}b_{12} \\ a_{21}b_{21} & a_{22}b_{22} \end{bmatrix}$$

Matrix product:
$$\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} = \begin{bmatrix} a_{11}b_{11} + a_{12}b_{21} & a_{11}b_{12} + a_{12}b_{22} \\ a_{21}b_{11} + a_{22}b_{21} & a_{21}b_{12} + a_{22}b_{22} \end{bmatrix}$$

We assume array operations throughout, unless stated otherwise.
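In NumPy this is exactly the distinction between the elementwise * operator and the matrix-product @ operator; a quick check with arbitrary values:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A * B)   # array (elementwise) product: [[ 5 12], [21 32]]
print(A @ B)   # matrix product:              [[19 22], [43 50]]
```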


Linear versus Nonlinear Operations

One of the most important classifications of an image-processing method is whether it is linear or nonlinear. Consider an operator H with
$$H[f(x, y)] = g(x, y)$$

H is said to be a linear operator if
$$H[a f_1(x, y) + b f_2(x, y)] = a\,H[f_1(x, y)] + b\,H[f_2(x, y)]$$
for all scalars a, b and all images $f_1$, $f_2$.

Example of a nonlinear operator: the maximum value of the pixels of an image, $H[f] = \max\{f(x, y)\}$. Take
$$f_1 = \begin{bmatrix} 0 & 2 \\ 2 & 3 \end{bmatrix}, \quad f_2 = \begin{bmatrix} 6 & 5 \\ 4 & 7 \end{bmatrix}, \quad a = 1, \quad b = -1.$$


$$\max\left( 1 \cdot \begin{bmatrix} 0 & 2 \\ 2 & 3 \end{bmatrix} + (-1) \cdot \begin{bmatrix} 6 & 5 \\ 4 & 7 \end{bmatrix} \right) = \max \begin{bmatrix} -6 & -3 \\ -2 & -4 \end{bmatrix} = -2$$

while

$$1 \cdot \max \begin{bmatrix} 0 & 2 \\ 2 & 3 \end{bmatrix} + (-1) \cdot \max \begin{bmatrix} 6 & 5 \\ 4 & 7 \end{bmatrix} = 3 - 7 = -4$$

Since $-2 \ne -4$, the max operator is nonlinear.
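A quick numerical verification of this counterexample, using the values above:

```python
import numpy as np

f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 5], [4, 7]])
a, b = 1, -1

lhs = np.max(a * f1 + b * f2)            # H(a f1 + b f2) = -2
rhs = a * np.max(f1) + b * np.max(f2)    # a H(f1) + b H(f2) = 3 - 7 = -4
print(lhs, rhs, lhs == rhs)              # -2 -4 False -> max is nonlinear
```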

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise:
$$g(x, y) = f(x, y) + \eta(x, y)$$
where f(x, y) is the noiseless image, and the noise η(x, y) is uncorrelated and has zero average value.

For a random variable z with mean m, $E[(z - m)^2]$ is the variance ($E(\cdot)$ is the expected value). The covariance of two random variables $z_1$ and $z_2$ is defined as $E[(z_1 - m_1)(z_2 - m_2)]$. The two random variables are uncorrelated when their covariance is 0.


Objective: reduce noise by averaging a set of noisy images $g_i(x, y)$ (a technique frequently used in image enhancement):
$$\bar{g}(x, y) = \frac{1}{K} \sum_{i=1}^{K} g_i(x, y)$$

If the noise satisfies the properties stated above, we have
$$E\{\bar{g}(x, y)\} = f(x, y), \qquad \sigma^2_{\bar{g}(x, y)} = \frac{1}{K}\,\sigma^2_{\eta(x, y)}$$
where $E\{\bar{g}(x, y)\}$ is the expected value of $\bar{g}$, and $\sigma^2_{\bar{g}(x, y)}$ and $\sigma^2_{\eta(x, y)}$ are the variances of $\bar{g}$ and $\eta$, respectively. The standard deviation (square root of the variance) at any point in the average image is
$$\sigma_{\bar{g}(x, y)} = \frac{1}{\sqrt{K}}\,\sigma_{\eta(x, y)}$$


As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because $E\{\bar{g}(x, y)\} = f(x, y)$, this means that $\bar{g}(x, y)$ approaches f(x, y) as the number of noisy images used in the averaging process increases.

An important application of image averaging is in the field of astronomy, where imaging under very low light levels frequently causes sensor noise to render single images virtually useless for analysis. Figure 3(a) shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels. Figures 3(b)-(f) show the results of averaging 5, 10, 20, 50, and 100 images, respectively.
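A small simulation of this behavior under the idealized noise model above (a constant image plus zero-mean Gaussian noise with σ = 64; all values synthetic): the empirical standard deviation of the average shrinks roughly like σ/√K.

```python
import numpy as np

rng = np.random.default_rng(1)
f = np.full((64, 64), 100.0)     # noiseless "image"
sigma = 64.0                     # noise standard deviation

for K in (1, 5, 10, 20, 50, 100):
    noisy = f + rng.normal(0.0, sigma, size=(K, 64, 64))  # K noisy observations
    g_bar = noisy.mean(axis=0)   # average of the K noisy images
    print(K, g_bar.std())        # empirical std is approximately sigma / sqrt(K)
```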


Fig. 3. Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (panel (a), top left); panels (b)-(f): results of averaging 5, 10, 20, 50, and 100 noisy images.


A frequent application of image subtraction is in the enhancement of differences between images.

Fig. 4. (a) Infrared image of the Washington, D.C. area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images.

Figure 4(b) was obtained by setting to zero the least significant bit of every pixel in Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between images (a) and (b); black (0) values in Figure 4(c) indicate locations where there is no difference between images (a) and (b).


Mask mode radiography:
$$g(x, y) = f(x, y) - h(x, y)$$
h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images, called live images (denoted f(x, y)), of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium.

In g(x, y) we find the differences between h and f, as enhanced detail. With images captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries in the area being observed.

Fig. 5 – Angiography subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c), enhanced.

An important application of image multiplication (and division) is shading correction. Suppose that an imaging sensor produces images in the form
$$g(x, y) = f(x, y)\, h(x, y)$$
where f(x, y) is the "perfect image" and h(x, y) is the shading function. When the shading function is known,
$$f(x, y) = \frac{g(x, y)}{h(x, y)}$$
When h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.
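A minimal sketch of shading correction by division, with a made-up linear shading pattern standing in for h(x, y):

```python
import numpy as np

M, N = 128, 128
y, x = np.mgrid[0:M, 0:N]
h = 0.3 + 0.7 * (x / (N - 1))    # synthetic shading pattern, values in (0, 1]
f = np.full((M, N), 200.0)       # the "perfect" image
g = f * h                        # what the sensor would deliver

f_hat = g / h                    # shading correction by division
print(np.allclose(f_hat, f))     # True: the shading is removed exactly here
```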

Fig. 6. Shading correction: (a) shaded image of a tungsten filament, magnified approximately 130 times; (b) shading pattern; (c) corrected image.

Another use of image multiplication is in masking, also called region-of-interest (ROI), operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of the ROI can be arbitrary, though it is usually rectangular.

Fig. 7. (a) Digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b).


In practice, most images are displayed using 8 bits, so image values are expected to lie in the range [0, 255]; for TIFF and JPEG images the conversion to this range is automatic, and depends on the system used. The difference of two images can produce values in the range [-255, 255]; the addition of two images, values in the range [0, 510]. Many software packages simply set negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure: compute
$$f_m = f - \min(f)$$
and then
$$f_s = K \left[ \frac{f_m}{\max(f_m)} \right]$$
which yields values in the full range [0, K] (K = 255 for an 8-bit display).
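A sketch of this rescaling procedure (the function name rescale is ad hoc):

```python
import numpy as np

def rescale(f, K=255):
    # shift so the minimum becomes 0, then scale so values span [0, K]
    fm = f - f.min()
    return K * fm / fm.max()

diff = np.array([[-255.0, 0.0], [100.0, 255.0]])  # e.g. a difference image
print(rescale(diff))   # [[  0.   127.5], [177.5 255. ]]
```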


Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations:

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for individual pixels: s = T(z),

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image


Neighborhood operations

Let $S_{xy}$ denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y) based on the values of the intensities of the points in $S_{xy}$. For example, if $S_{xy}$ is a rectangular neighborhood of size m × n centered at (x, y), we can assign the new value of intensity by computing the average value of the pixels in $S_{xy}$:
$$g(x, y) = \frac{1}{mn} \sum_{(r, c) \in S_{xy}} f(r, c)$$

The net effect is to perform local blurring in the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
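A direct, unoptimized sketch of this neighborhood average; the slides do not specify border handling, so edge replication is assumed here, and m and n are assumed odd:

```python
import numpy as np

def local_mean(f, m, n):
    # Average over an m x n rectangular neighborhood S_xy,
    # replicating edge pixels at the image borders (an assumption).
    M, N = f.shape
    pm, pn = m // 2, n // 2
    padded = np.pad(f, ((pm, pm), (pn, pn)), mode="edge")
    g = np.empty_like(f, dtype=float)
    for x in range(M):
        for y in range(N):
            g[x, y] = padded[x:x + m, y:y + n].mean()
    return g

img = np.random.default_rng(5).integers(0, 256, size=(32, 32)).astype(float)
blurred = local_mean(img, 3, 3)   # 3 x 3 local blurring
```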


Geometric spatial transformations and image registration

Geometric transformations modify the spatial relationship between pixels in an image. These transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules).

A geometric transformation consists of 2 basic operations:

1. a spatial transformation of coordinates;

2. intensity interpolation, which assigns intensity values to the spatially transformed pixels.

The coordinate system transformation:
$$(x, y) = T[(v, w)]$$
(v, w) – pixel coordinates in the original image;
(x, y) – pixel coordinates in the transformed image.


For example, $T[(v, w)] = (v/2, w/2)$ shrinks the original image to half its size in both spatial directions.

Affine transform:
$$[x\ \ y\ \ 1] = [v\ \ w\ \ 1]\, T = [v\ \ w\ \ 1] \begin{bmatrix} t_{11} & t_{12} & 0 \\ t_{21} & t_{22} & 0 \\ t_{31} & t_{32} & 1 \end{bmatrix} \qquad \text{(AT)}$$
that is, $x = t_{11} v + t_{21} w + t_{31}$ and $y = t_{12} v + t_{22} w + t_{32}$.

This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3×3 matrix equal to the matrix product of the scaling, rotation, and translation matrices from Table 1.

[Table 1 – Affine transformations: matrices for identity, scaling, rotation, translation, and shear.]

The preceding transformations relocate pixels of an image to new locations. To complete the process, we have to assign intensity values to those locations. This task is done using intensity interpolation (nearest-neighbor, bilinear, or bicubic interpolation).

In practice, we can use equation (AT) in two basic ways.

Forward mapping: scan the pixels of the input image (v, w) and compute the new spatial location (x, y) of the corresponding pixel in the new image, using (AT) directly. Problems:

- intensity assignment when 2 or more pixels in the original image are transformed to the same location in the output image;

- some output locations have no correspondent in the original image (no intensity assignment).


Inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image:
$$(v, w) = T^{-1}[(x, y)]$$
Then interpolate among the nearest input pixels to determine the intensity of the output pixel value. Inverse mappings are more efficient to implement than forward mappings, and are used in numerous commercial implementations of spatial transformations (MATLAB, for example).
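A sketch of inverse mapping for the affine model (AT), keeping the row-vector convention [x y 1] = [v w 1] T and using nearest-neighbor interpolation; the scaling matrix at the end is just an example:

```python
import numpy as np

def warp_inverse(f, T):
    # Scan output pixels (x, y) and pull intensities from the input image
    # via (v, w) = [x y 1] T^{-1}, rounding to the nearest input pixel.
    Tinv = np.linalg.inv(T)
    M, N = f.shape
    g = np.zeros_like(f)
    for x in range(M):
        for y in range(N):
            v, w, _ = np.array([x, y, 1.0]) @ Tinv
            vi, wi = int(round(v)), int(round(w))
            if 0 <= vi < M and 0 <= wi < N:
                g[x, y] = f[vi, wi]
    return g

# Example: uniform scaling by 1.5, in the form of equation (AT)
T = np.array([[1.5, 0.0, 0.0],
              [0.0, 1.5, 0.0],
              [0.0, 0.0, 1.0]])
img = np.arange(100.0).reshape(10, 10)
print(warp_inverse(img, T).shape)   # (10, 10): zoomed content, same canvas
```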


Image registration: align two or more images of the same scene.

In image registration we have available the input and output images, but the specific transformation that produced the output image from the input is generally unknown. The problem is to estimate the transformation function and then use it to register the two images. For example:

- it may be of interest to align (register) two or more images taken at approximately the same time, but using different imaging systems (an MRI scanner and a PET scanner);

- or to align images of a given location taken by the same instrument at different moments in time (satellite images).

The problem is solved using tie points (also called control points), which are corresponding points whose locations are known precisely in the input and reference images.


How to select tie points:

- interactively selecting them;

- using algorithms that try to detect these points automatically;

- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors; these objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points.

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points both on the input image and on the reference image. A simple model based on a bilinear approximation is given by
$$x = c_1 v + c_2 w + c_3 v w + c_4$$
$$y = c_5 v + c_6 w + c_7 v w + c_8$$
where (v, w) and (x, y) are the coordinates of the tie points (we get an 8×8 linear system for the $c_i$).
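A sketch of estimating the eight coefficients. Since x depends only on c1–c4 and y only on c5–c8, the 8×8 system decouples into two 4×4 systems, solved here with NumPy; the tie-point coordinates below are made up for illustration:

```python
import numpy as np

# 4 tie points in the input image (v, w) and in the reference image (x, y)
vw = np.array([[10.0, 10.0], [10.0, 90.0], [90.0, 10.0], [90.0, 90.0]])
xy = np.array([[12.0, 11.0], [11.0, 92.0], [93.0, 13.0], [95.0, 96.0]])

# Rows of the design matrix: [v, w, v*w, 1]  ->  x = c1 v + c2 w + c3 v w + c4
A = np.column_stack([vw[:, 0], vw[:, 1], vw[:, 0] * vw[:, 1], np.ones(4)])
c14 = np.linalg.solve(A, xy[:, 0])   # c1..c4 (x model)
c58 = np.linalg.solve(A, xy[:, 1])   # c5..c8 (y model)
print(c14, c58)
```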


When 4 tie points are insufficient to obtain satisfactory registration, a frequently used approach is to select a larger number of tie points and, using this new set, subdivide the image into rectangular subregions marked by groups of 4 tie points; the transformation model described above is then applied on each subregion. The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometric distortion.

[Figure: (a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c).]

Probabilistic Methods

$z_i$, i = 0, 1, …, L-1: the values of all possible intensities in an M×N digital image.

$p(z_k)$: the probability that the intensity level $z_k$ occurs in the given image,
$$p(z_k) = \frac{n_k}{MN}$$
where $n_k$ is the number of times that intensity $z_k$ occurs in the image (MN is the total number of pixels in the image), so that
$$\sum_{k=0}^{L-1} p(z_k) = 1$$

The mean (average) intensity of an image is given by
$$m = \sum_{k=0}^{L-1} z_k\, p(z_k)$$


The variance of the intensities is
$$\sigma^2 = \sum_{k=0}^{L-1} (z_k - m)^2\, p(z_k)$$

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation ($\sigma$) is used.

The n-th moment of a random variable z about the mean is defined as
$$\mu_n(z) = \sum_{k=0}^{L-1} (z_k - m)^n\, p(z_k)$$
(note that $\mu_0(z) = 1$, $\mu_1(z) = 0$, and $\mu_2(z) = \sigma^2$).

$\mu_3(z) > 0$: the intensities are biased to values higher than the mean;
$\mu_3(z) < 0$: the intensities are biased to values lower than the mean;

$\mu_3(z) = 0$: the intensities are distributed approximately equally on both sides of the mean.
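A sketch computing these statistics from the normalized histogram of a (random, stand-in) image:

```python
import numpy as np

img = np.random.default_rng(2).integers(0, 256, size=(100, 100))
L = 256
z = np.arange(L)
p = np.bincount(img.ravel(), minlength=L) / img.size  # p(z_k) = n_k / (M N)

m = np.sum(z * p)                   # mean intensity
var = np.sum((z - m) ** 2 * p)      # variance (mu_2)
mu3 = np.sum((z - m) ** 3 * p)      # third moment: its sign indicates the bias
print(m, np.sqrt(var), mu3)
```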

Fig. 1. (a) Low contrast; (b) medium contrast; (c) high contrast.

Figure 1(a): standard deviation 14.3 (variance = 204.5).
Figure 1(b): standard deviation 31.6 (variance = 998.6).
Figure 1(c): standard deviation 49.2 (variance = 2420.6).


Intensity Transformations and Spatial Filtering

$$g(x, y) = T[f(x, y)]$$

f(x, y) – input image; g(x, y) – output image; T – an operator on f, defined over a neighborhood of (x, y).

The neighborhood $S_{xy}$ of the point (x, y) usually is rectangular, centered on (x, y), and much smaller in size than the image.


- spatial filtering: the operator T (the neighborhood together with the operation applied on it) is called a spatial filter (also: spatial mask, kernel, template, or window).

When $S_{xy} = \{(x, y)\}$ (a 1×1 neighborhood), T becomes an intensity (gray-level, or mapping) transformation function
$$s = T(r)$$
where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2 (left): T produces an output image of higher contrast than the original, by darkening the intensity levels below k and brightening the levels above k. This technique is called contrast stretching.

Fig. 2. Intensity transformation functions: left, contrast stretching; right, thresholding function.


Figure 2 (right): T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L-1] is obtained using the function
$$s = T(r) = L - 1 - r$$

- the equivalent of a photographic negative;
- a technique suited for enhancing white or gray detail embedded in dark regions of an image.

[Figure: original image (left) and its negative (right).]

Log Transformations
$$s = T(r) = c \log(1 + r), \qquad c \text{ a constant}, \; r \ge 0$$

[Figure: plots of some basic intensity transformation functions.]


This transformation maps a narrow range of low intensity values in the input into a wider range of output values. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values; the opposite is true of the inverse log transformation. The log function compresses the dynamic range of images with large variations in pixel values.

Figure 4(a): intensity values in the range 0 to 1.5 × 10^6.
Figure 4(b): log transformation of Figure 4(a), with c = 1; range 0 to 6.2.

Fig. 4. (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.


Power-Law (Gamma) Transformations
$$s = T(r) = c\, r^{\gamma}, \qquad c, \gamma \text{ positive constants}$$
(sometimes written $s = c\,(r + \varepsilon)^{\gamma}$, with an offset ε accounting for a measurable response when r = 0)

[Figure: plots of the gamma transformation for different values of γ (c = 1).]


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. Curves with γ > 1 have the opposite effect of those generated with γ < 1; c = γ = 1 gives the identity transformation.

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
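Minimal sketches of the negative, log, and gamma transformations for 8-bit data; normalizing r to [0, 1] inside the gamma transform is one common convention, not something the slides prescribe:

```python
import numpy as np

def negative(r, L=256):
    return (L - 1) - r                 # s = L - 1 - r

def log_transform(r, c=1.0):
    return c * np.log1p(r)             # s = c log(1 + r)

def gamma_transform(r, c=1.0, gamma=3.0, L=256):
    r_norm = r / (L - 1)               # work on [0, 1], then rescale to [0, L-1]
    return (L - 1) * c * r_norm ** gamma

r = np.arange(256, dtype=float)
s_neg, s_log, s_gam = negative(r), log_transform(r), gamma_transform(r)
```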

[Figure: (a) aerial image; (b)-(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively.]

Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

[Fig. 5. (a) Piecewise-linear transformation function; (b) low-contrast image; (c) result of contrast stretching; (d) result of thresholding.]

$$s = T(r) = \begin{cases} \dfrac{s_1}{r_1}\, r, & r \in [0, r_1] \\[4pt] s_1 + \dfrac{s_2 - s_1}{r_2 - r_1}\,(r - r_1), & r \in [r_1, r_2] \\[4pt] s_2 + \dfrac{L - 1 - s_2}{L - 1 - r_2}\,(r - r_2), & r \in [r_2, L-1] \end{cases}$$


$(r_1, s_1) = (r_2, s_2)$: identity transformation (no change);
$r_1 = r_2$, $s_1 = 0$, $s_2 = L - 1$: thresholding function.

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c): contrast stretching, obtained by setting the parameters $(r_1, s_1) = (r_{\min}, 0)$ and $(r_2, s_2) = (r_{\max}, L-1)$, where $r_{\min}$ and $r_{\max}$ denote the minimum and maximum gray levels in the image, respectively. The transformation function thus stretched the levels linearly from their original range to the full range [0, L-1].

Figure 5(d): the thresholding function was used, with $(r_1, s_1) = (m, 0)$ and $(r_2, s_2) = (m, L-1)$, where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.
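A sketch of both special cases: np.interp implements the piecewise-linear map through (0, 0), (r1, s1), (r2, s2), (L-1, L-1), assuming 0 < r1 < r2 < L-1 so the breakpoints are strictly increasing; the thresholding case (r1 = r2) is done with np.where instead, since equal breakpoints would break np.interp. The test image is synthetic:

```python
import numpy as np

def stretch(r, r1, s1, r2, s2, L=256):
    # piecewise-linear mapping through (0,0), (r1,s1), (r2,s2), (L-1,L-1)
    return np.interp(r, [0, r1, r2, L - 1], [0, s1, s2, L - 1])

img = np.random.default_rng(3).integers(90, 160, size=(64, 64)).astype(float)
full = stretch(img, img.min(), 0, img.max(), 255)   # full-range stretching
binary = np.where(img >= img.mean(), 255, 0)        # thresholding at the mean
```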


Intensity-level slicing: highlighting a specific range of intensities in an image.

There are two approaches to intensity-level slicing:

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities (left transformation plot below);

2. brighten (or darken) the desired range of intensities, but leave all other intensities in the image unchanged (right transformation plot below).


[Transformation plots: left, highlight intensity range [A, B] and reduce all other intensities to a lower level; right, highlight range [A, B] and preserve all other intensities.]

Figure 6 (left): aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the intensity scale. This type of enhancement produces a binary image,


which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example).

In Figure 6 (right), the second technique was used: a band of intensities in the mid-gray range around the mean intensity was set to black; the other intensities remain unchanged.

Fig. 6. Aortic angiogram and intensity-sliced versions.


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2. This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit; plane 8 – the highest-order bit).


The binary image for the 8th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all intensities between 0 and 127 to 0, and all levels between 128 and 255 to 1. The bit-slicing technique is useful for analyzing the relative importance of each bit in the image, which helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
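A sketch of bit-plane extraction, including a check of the thresholding characterization of the 8th plane just described:

```python
import numpy as np

img = np.random.default_rng(4).integers(0, 256, size=(8, 8), dtype=np.uint8)

# Plane k (k = 0 lowest-order bit, k = 7 highest-order bit) as a binary image:
planes = [(img >> k) & 1 for k in range(8)]

# The highest-order plane equals thresholding at 128, as described above:
assert np.array_equal(planes[7], (img >= 128).astype(img.dtype))

# Crude "compression": keep only the top two bit planes, zeroing the rest.
top2 = img & 0b11000000
```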

  • DIP 1 2017
  • DIP 02 (2017)
Page 55: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image ProcessingDigital Image Processing

Week 1Week 1

The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to ciliary body (60-70 water 6 fat protein) The lens is colored in slightly yellow The lens absorbs approximatively 8 of the visiblelight (infrared and ultraviolet light are absorbed by proteins in lens)

The innermost membrane is the retina When the eye is proper focusedlight from an object outside the eye is imaged on the retinaVision is possible because of the distribution of discrete light receptors on the surface of the retina cones and rods (6-7 milion cones 75-150 milion rods)

Cones located in the central part of the retina (fovea) they are sensitive to colors vision of detail each cone is link to its own nervecone vision = photopic or bright-light vision

Fovea = the place where the image of the object of interest falls on

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Rods distributed over al the retina surface several rods are contected toa single nerve not specialized in detail vision serve to give a general overall picture of the filed of viewnot involved in color visionsensitive to low level of illumination

Blind spot region without receptors

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image formation in the eye

Ordinary photographic camera the lens has fixed focal length focusing atvarious distances is done by modifying the distance between the lens and the image plane (were the film or imaging chip are located)

Human eye the distance between the lens and the retina (the imaging region)is fixed the focal length needed to achieve proper focus is obtained by varyingthe shape of the lens (the fibers in the ciliary body accomplish this flattening or thickening the lens for distant or near objects respectively

distance between lens and retina along visual axix = 17 mm

range of focal length = 14 mm to 17 mm

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Illustration of Mach band effectPercieved intensityis not a simple function of actual intensity

Digital Image ProcessingDigital Image Processing

Week 1Week 1

All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Optical illusions

Digital Image ProcessingDigital Image Processing

Week 1Week 1

ALBASTRU BLUEVERDE GREENGALBEN YELLOW ROSU REDPORTOCALIU ORANGEGALBEN YELLOWVISINIU BURGUNDYALB WHITE

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Lightbullachromatic or monochromatic light - light that is void of color the attribute of such light is its intensity or amount

gray level is used to describe monochromatic intensitybecause it ranges from black to grays and to white bull chromatic light spans the electromagnetic energy spectrumfrom approximately 043 to 079 μm

quantities that describe the quality of a chromatic light source radiance

the total amount of energy that flows from the light source and it isusually measured in watts (W) luminance

measured in lumens (lm) gives a measure of the amount of energy an observer perceives from a light source

Digital Image ProcessingDigital Image Processing

Week 1Week 1

For example light emitted from a source operating in the far infraredregion of the spectrum could have significant energy (radiance) butan observer would hardly perceive it its luminance would be almostzero

brightnessa subjective descriptor of light perception that is practicallyimpossible to measure It embodies the achromatic notion ofintensity and is one of the key factors in describing color sensation

Digital Image ProcessingDigital Image Processing

Week 1Week 1

the physical meaning is determined by the source of the image

( )f D f x y

Image generated from a physical process f(xy) ndash proportional to the energy radiated by the physical source

f(xy) ndash characterized by two components

i(xy) = illumination component the amount of source illumination incident on the scene being viewed

r(xy) = reflectance component the amount of illumination reflected by the objects in the scene

( ) ( ) ( )

0 ( ) 0 ( ) 1

f x y i x y r x y

i x y r x y

Digital Image ProcessingDigital Image Processing

Week 1Week 1

r(xy)=0 - total absorption r(xy)=1 - total reflectance

i(xy) ndash determined by the illumination source

r(xy) ndash determined by the characteristics of the imaged objects

is called gray (or intensity) scale

In practice

min 0 0 max min min min max max max( ) L l f x y L L i r L i r

indoor values without additional illuminationmin max10 1000L L

black whitemin max0 1 0 1 0 1L L L L l l L

min maxL L

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image Processing

Week 1

Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed

scene

converting a continuous image f to digital form

- digitizing (x y) is called sampling

- digitizing f(x y) is called quantization

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Continuous image projected onto a sensor array Result of image sampling and quantization

Digital Image Processing

Week 1

Representing Digital Images (xy) x = 01hellipM-1 y = 01hellipN-1 ndash spatial variables or spatial coordinates

(00) (01) (0 1)(10) (11) (1 1)

( )

( 10) ( 11) ( 1 1)

f f f Nf f f N

f x y

f M f M f M N

image element pixel

00 01 0 1

10 11 1 1

10 11 1 1

( ) ( )

N

i jN M N

i j

M M M N

a a aa f x i y j f i ja a a

Aa

a a a

f(00) ndash the upper left corner of the image

Digital Image Processing

Week 1

M N ge 0 L=2k

[0 1]i j i ja a L

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system Upper limit ndash determined by saturation lower limit - noise

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Number of bits required to store a digitized image

for 2 b M N k M N b N k

When an image can have 2k intensity levels the image is referred as a k-bit image 256 discrete intensity values ndash 8-bit image

Digital Image Processing

Week 1

Spatial and Intensity Resolution Spatial resolution ndash the smallest discernible detail in an image

Measures line pairs per unit distance dots (pixels) per unit distance

Image resolution = the largest number of discernible line pairs per unit distance

(eg 100 line pairs per mm)

Dots per unit distance are commonly used in printing and publishing

In US the measure is expressed in dots per inch (dpi)

(newspapers are printed with 75 dpi glossy brochures at 175 dpi)

Intensity resolution ndash the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations

L=2k ndash most common k = 8

Intensity resolution in practice is given by k (number of bits used to quantize intensity)

Digital Image Processing

Week 1

Fig1 Reducing spatial resolution 1250 dpi(upper left) 300 dpi (upper right)

150 dpi (lower left) 72 dpi (lower right)

Digital Image Processing

Week 1

Reducing the number of gray levels 256 128 64 32

Digital Image Processing

Week 1

Reducing the number of gray levels 16 8 4 2

Digital Image Processing

Week 1

Image Interpolation - used in zooming shrinking rotating and geometric corrections

Shrinking zooming ndash image resizing ndash image resampling methods

Interpolation is the process of using known data to estimate values at unknown locations

Suppose we have an image of size 500 times 500 pixels that has to be enlarged 15 times to

750times750 pixels One way to do this is to create an imaginary 750 times 750 grid with the

same spacing as the original and then shrink it so that it fits exactly over the original

image The pixel spacing in the 750 times 750 grid will be less than in the original image

Problem assignment of intensity-level in the new 750 times 750 grid

Nearest neighbor interpolation assign for every point in the new grid (750 times 750) the

intensity of the closest pixel (nearest neighbor) from the oldoriginal grid (500 times 500)

This technique has the tendency to produce undesirable effects like severe distortion of

straight edges

Digital Image Processing

Week 1

Bilinear interpolation ndash assign for the new (x y) location the following intensity ( )v x y a x b y c x y d

where the four coefficients are determined from the 4 equations in 4 unknowns that can

be written using the 4 nearest neighbors of point (x y)

Bilinear interpolation gives much better results than nearest neighbor interpolation with a

modest increase in computational effort

Bicubic interpolation ndash assign for the new (x y) location an intensity that involves the 16

nearest neighbors of the point 3 3

0 0

( ) i ji j

i jv x y c x y

The coefficients cij are obtained solving a 16x16 linear system

intensity levels of the 16 nearest neighbors of 3 3

0 0

( )i ji j

i jc x y x y

Digital Image Processing

Week 1

Generally bicubic interpolation does a better job of preserving fine detail than the

bilinear technique Bicubic interpolation is the standard used in commercial image editing

programs such as Adobe Photoshop and Corel Photopaint

Figure 2 (a) is the same as Fig 1 (d) which was obtained by reducing the resolution of

the 1250 dpi in Fig 1(a) to 72 dpi (the size shrank from 3692 times 2812 to 213 times 162) and

then zooming the reduced image back to its original size To generate Fig 1(d) nearest

neighbor interpolation was used (both for shrinking and zooming)

Figures 2(b) and (c) were generated using the same steps but using bilinear and bicubic

interpolation respectively Figures 2(d)+(e)+(f) were obtained by reducing the resolution

from 1250 dpi to 150 dpi (instead of 72 dpi)

Digital Image Processing

Week 1

Fig 2 ndash Interpolation examples for zooming and shrinking (nearest neighbor linear bicubic)

Digital Image Processing

Week 1

Neighbors of a Pixel

A pixel p at coordinates (x y) has 4 horizontal and vertical neighbors horizontal vertical( 1 ) ( 1 ) ( 1) ( 1)x y x y x y x y

This set of pixels called the 4-neighbors of p denoted by N4 (p)

The 4 diagonal neighbors of p have coordinates ( 1 1) ( 1 1) ( 1 1) ( 1 1)x y x y x y x y

and are denoted ND(p)

The horizontal vertical and diagonal neighbors are called the 8-neighbors of p denoted

N8 (p)

If (xy) is on the border of the image some of the neighbor locations in ND(p) and N8(p)

fall outside the image

Digital Image Processing

Week 1

Adjacency Connectivity Regions Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image V 01 (V=0 V=1)

- in a gray-scale image with 256 possible gray-levels V can be any subset of 0255

We consider 3 types of adjacency

(a) 4-adjacency two pixels p and q with values from V are 4-adjacent if 4( )q N p

(b) 8-adjacency two pixels p and q with values from V are 8-adjacent if 8( )q N p

(c) m-adjacency (mixed adjacency) two pixels p and q with values from V are

m-adjacent if

4( )q N p or

( )Dq N p and the set 4 4( ) ( )N p N q has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency It is introduced to eliminate the

ambiguities that often arise when 8-adjacency is used Consider the example

Digital Image Processing

Week 1

binary image

0 1 1 0 1 1 0 1 1

1 0 1 0 0 1 0 0 1 0

0 0 1 0 0 1 0 0 1

V

The three pixels at the top (first line) in the above example show multiple (ambiguous)

8-adjacency as indicated by the dashed lines This ambiguity is removed by using

m-adjacency

Digital Image Processing

Week 1

A (digital) path (or curve) from pixel p with coordinates (xy) to q with coordinates (st)

is a sequence of distinct pixels with coordinates

and are adjacent 0 0 1 1

1 1

( ) ( ) ( ) ( ) ( )( ) ( ) 12

n n

i i i i

x y x y x y x y s tx y x y i n

The length of the path is n If 0 0( ) ( )n nx y x y the path is closed

Depending on the type of adjacency considered the paths are 4- 8- or m-paths

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image R is a region of the image if R is a connected set

Two regions R1 and R2 are said to be adjacent if 1 2R R form a connected set Regions

that are not adjacent are said to be disjoint When referring to regions only 4- and

8-adjacency are considered

Digital Image Processing

Week 1

Suppose that an image contains K disjoint regions 1 kR k K none of which

touches the image border

the complement of 1

( )K

cu k u u

k

R R R R

We call all the points in Ru the foreground of the image and the points in ( )cuR the

background of the image

The boundary (border or contour) of a region R is the set of points that are adjacent to

points in the complement of R (R)c The border of an image is the set of pixels in the

region that have at least one background neighbor This definition is referred to as the

inner border to distinguish it from the notion of outer border which is the corresponding

border in the background

Digital Image Processing

Week 1

Distance measures

For pixels p q and z with coordinates (xy) (st) and (vw) respectively D is a distance

function or metric if

(a) D(p q) ge 0 D(p q) = 0 iff p=q

(b) D(p q) = D(q p)

(c) D(p z) le D(p q) + D(q z)

The Euclidean distance between p and q is defined as 1

2 2 2 22( ) ( ) ( ) ( ) ( )eD p q x s y t x s y t

The pixels q for which ( )eD p q r are the points contained in a disk of radius r

centered at (x y)

Digital Image Processing

Week 1

The D4 distance (also called city-block distance) between p and q is defined as

4( ) | | | |D p q x s y t

The pixels q for which 4( )D p q r form a diamond centered at (xy)

4

22 1 2

2 2 1 0 1 22 1 2

2

D

The pixels with D4 = 1 are the 4-neighbors of (x y)

The D8 distance (called the chessboard distance) between p and q is defined as

8( ) max| | | |D p q x s y t

The pixels q for which 8( )D p q r form a square centered at (x y)

Digital Image Processing

Week 1

8

2 2 2 2 22 1 1 1 2

2 2 1 0 1 22 1 1 1 22 2 2 2 2

D

The pixels with D8 = 1 are the 8-neighbors of (x y)

D4 and D8 distances are independent of any paths that might exist between p and q

because these distances involve only the coordinates of the point

Digital Image Processing

Week 1

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

11 12 11 12

21 22 21 22

a a b ba a b b

Array product

11 12 11 12 11 11 12 12

21 22 21 22 21 21 22 21

a a b b a b a ba a b b a b a b

Matrix product

11 12 11 12 11 11 12 21 11 12 12 21

21 22 21 22 21 11 22 21 21 12 22 22

a a b b a b a b a b a ba a b b a b a b a b a b

We assume array operations unless stated otherwise

Digital Image Processing

Week 1

Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether it is

linear or nonlinear

( ) ( )H f x y g x y

H is said to be a linear operator if

images1 2 1 2

1 2

( ) ( ) ( ) ( )

H a f x y b f x y a H f x y b H f x y

a b f f

Example of nonlinear operator

the maximum value of the pixels of image max ( )H f f x y f

1 2

0 2 6 5 1 1

2 3 4 7f f a b

Digital Image Processing

Week 1

1 2

0 2 6 5 6 3max max 1 ( 1) max 2

2 3 4 7 2 4a f b f

0 2 6 51 max ( 1) max 3 ( 1)7 4

2 3 4 7

Arithmetic Operations in Image Processing

Let g(xy) denote a corrupted image formed by the addition of noise ( ) ( ) ( )g x y f x y x y

f(xy) ndash the noiseless image η(xy) the noise uncorrelated and has 0 average value

For a random variable z with mean m E[(z-m)2] is the variance (E( ∙ ) is the expected

value) The covariance of two random variables z1 and z2 is defined as E[(z1-m1) (z2-m2)]

The two random variables are uncorrelated when their covariance is 0

Digital Image Processing

Week 1

Objective reduce noise by adding a set of noisy images ( )ig x y (technique frequently

used in image enhancement)

1

1( ) ( )K

ii

g x y g x yK

If the noise satisfies the properties stated above we have

2 2( ) ( )

1( ) ( ) g x y x yE g x y f x yK

( ( ))E g x y is the expected value of g and and 2 2( ) ( )g x y x y are the variances of

and g respectively The standard deviation (square root of the variance) at any point in

the average image is

( ) ( )1

g x y x yK

Digital Image Processing

Week 1

As K increases the variability (as measured by the variance or the standard deviation) of

the pixel values at each location (x y) decreases Because ( ) ( )E g x y f x y this

means that ( )g x y approaches f(x y) as the number of noisy images used in the

averaging process increases

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis Figure 226(a) shows an 8-bit image in which corruption

was simulated by adding to it Gaussian noise with zero mean and a standard deviation of

64 intensity levels Figures 226(b)-(f) show the result of averaging 5 10 20 50 and 100

images respectively

Digital Image Processing

Week 1

Fig 3 Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner) Results of averaging 5 10 20 50

100 noisy images

a b c d e f

Digital Image Processing

Week 1

A frequent application of image subtraction is in the enhancement of differences between

images

(a) (b) (c)

Fig 4 (a) Infrared image of Washington DC area (b) Image obtained from (a) by setting to zero the least

significant bit of each pixel (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a) The two images seem almost the same Figure 4(c) is the difference between

images (a) and (b) Black (0) values in Figure (c) indicate locations where there is no

difference between images (a) and (b)

Digital Image Processing

Week 1

Mask mode radiography ( ) ( ) ( )g x y f x y h x y

h(x y) the mask is an X-ray image of a region of a patientrsquos body captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source The procedure consists of injecting an X-ray contrast medium into the patientrsquos

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed

Digital Image Processing

Week 1

a b c d Fig 5 ndash Angiography ndash subtraction example (a) ndash mask image (b) ndash live image (c) ndash difference between (a) and (b) (d) - image (c) enhanced

Digital Image Processing

Week 1

An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form ( ) ( ) ( )g x y f x y h x y

f(x y) ndash the ldquoperfect imagerdquo h(x y) ndash the shading function

When the shading function is known

( )( )( )

g x yf x yh x y

h(x y) is unknown but we have access to the imaging system we can obtain an

approximation to the shading function by imaging a target of constant intensity When the

sensor is not available often the shading pattern can be estimated from the image

Digital Image Processing

Week 1

(a) (b) (c)

Fig 6 Shading correction (a) ndash Shaded image of a tungsten filament magnified 130 times (b) - shading pattern (c) corrected image

Digital Image Processing

Week 1

Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1rsquos (white) in the ROI and 0rsquos elsewhere There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

(a) (b) (c)

Fig 7 (a) ndash digital dental X-ray image (b) - ROI mask for teeth with fillings (c) product of (a) and (b)

Digital Image Processing

Week 1

In practice most images are displayed using 8 bits the image values are in the range [0255] TIFF JPEG images ndash conversion to this range is automatic The conversion depends on the system used Difference of two images can produce image with values in the range [-255255] Addition of two images ndash range [0510] Many software packages simply set the negative values to 0 and set to 255 all values greater than 255 A more appropriate procedure compute

min( )mf f f

0 ( 255)max( )

ms

m

ff K K K

f

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Digital Image Processing

Week 1

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

( )

1( ) ( )xyr c S

g x y f r cm n

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation ( ) [( )]x y T v w

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image

Digital Image Processing

Week 1

[( )] ( )2 2v wT v w shrinks the original image half its size in both spatial directions

Affine transform

11 1211 21 31

21 2212 22 33

31 32

0[ 1] [ 1] [ 1] 0

1

t tx t v t w t

x y v w T v w t ty t v t w t

t t

(AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

Digital Image Processing

Week 1

inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image (v w) 1( ) ( )v w T x y

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L−1] is obtained using the function s = T(r) = L − 1 − r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image


Original image – negative image


Log Transformations: s = T(r) = c·log(1 + r), c – constant, c > 0, r ≥ 0

Some basic intensity transformation functions


This transformation maps a narrow range of low intensity values in the input into a wider range. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values. The opposite is true for the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a) – intensity values in the range 0 to 1.5 × 10^6
Figure 4(b) – log transformation of Figure 4(a) with c = 1; range 0 to 6.2

a b: (a) – Fourier spectrum; (b) – log transformation applied to (a), c = 1 (Fig. 4)
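A short sketch of the negative and log transforms as defined above (scaling c so that the log output spans [0, 255] is a common choice of this sketch, not prescribed by the slides):

```python
import numpy as np

def negative(image, L=256):
    """s = L - 1 - r."""
    return (L - 1) - image

def log_transform(image):
    """s = c * log(1 + r), with c chosen here to map the result onto [0, 255].
    Assumes a non-empty image with at least one nonzero pixel."""
    r = image.astype(np.float64)
    c = 255.0 / np.log(1.0 + r.max())
    return (c * np.log(1.0 + r)).astype(np.uint8)
```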


Power-Law (Gamma) Transformations

s = T(r) = c·r^γ, with c and γ positive constants (sometimes written s = c·(r + ε)^γ, with an offset ε)

Plots of gamma transformation for different values of γ (c=1)


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. The curves with γ > 1 have the opposite effect of those generated with γ < 1.

c = γ = 1 – identity transformation

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.


a b c d: (a) – aerial image; (b)–(d) – results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively
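A minimal sketch of the gamma transformation on an 8-bit image (normalizing the intensities to [0, 1] before exponentiation is an assumption of this sketch):

```python
import numpy as np

def gamma_transform(image, gamma, c=1.0):
    """s = c * r**gamma, computed on intensities normalized to [0, 1]."""
    r = image.astype(np.float64) / 255.0
    s = c * np.power(r, gamma)
    return np.clip(s * 255.0, 0, 255).astype(np.uint8)

# gamma < 1 brightens dark regions; gamma > 1 darkens,
# e.g. 3.0-5.0 as in the aerial example above.
```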


Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig. 5


T(r) =
  (s1 / r1) · r,                                   r ∈ [0, r1]
  s1 + ((s2 − s1) / (r2 − r1)) · (r − r1),         r ∈ [r1, r2]
  s2 + ((L − 1 − s2) / (L − 1 − r2)) · (r − r2),   r ∈ [r2, L − 1]


r1 = s1, r2 = s2: identity transformation (no change)

r1 = r2, s1 = 0, s2 = L−1: thresholding function

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c) – contrast stretching, obtained by setting the parameters (r1, s1) = (rmin, 0) and (r2, s2) = (rmax, L−1), where rmin and rmax denote the minimum and maximum gray levels in the image, respectively. Thus, the transformation function stretched the levels linearly from their original range to the full range [0, L−1].

Figure 5(d) – the thresholding function was used, with (r1, s1) = (m, 0) and (r2, s2) = (m, L−1), where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.
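For the special case of Figure 5(c), a compact sketch of the full-range linear stretch (assuming rmax > rmin):

```python
import numpy as np

def stretch_full_range(image, L=256):
    """Linearly map [rmin, rmax] onto [0, L-1]."""
    r = image.astype(np.float64)
    r_min, r_max = r.min(), r.max()
    s = (r - r_min) / (r_max - r_min) * (L - 1)  # assumes r_max > r_min
    return s.astype(np.uint8)
```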


Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing:

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities (Figure 3.11(a));

2. brighten (or darken) the desired range of intensities but leave unchanged all other intensities in the image (Figure 3.11(b)).
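A minimal sketch of the two approaches (the range endpoints A, B and the output levels are illustrative):

```python
import numpy as np

def slice_binary(image, A, B, hi=255, lo=0):
    """Approach 1: white inside [A, B], black elsewhere."""
    return np.where((image >= A) & (image <= B), hi, lo).astype(np.uint8)

def slice_preserve(image, A, B, value=255):
    """Approach 2: set [A, B] to a chosen value, leave the rest unchanged."""
    out = image.copy()
    out[(image >= A) & (image <= B)] = value
    return out
```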


Figure 6 (left) – aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the scale of intensities. This type of enhancement produces a binary image, which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example).

(The two slicing transformations: one highlights the intensity range [A, B] and reduces all other intensities to a lower level; the other highlights the range [A, B] and preserves all other intensities.)

In Figure 6 (right) the second technique was used: a band of intensities in the mid-gray range around the mean intensity was set to black; the other intensities remain unchanged.

Fig. 6 – Aortic angiogram and intensity-sliced versions


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255] with an 8-bit representation in base 2.

This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit, plane 8 – the highest-order bit).


The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all the intensities between 0 and 127 to 0, and maps all levels between 128 and 255 to 1. The bit-slicing technique is useful for analyzing the relative importance of each bit in the image; it helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
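A short sketch of extracting bit planes with shifts and masks, using the plane numbering above (plane 1 – lowest-order bit, plane 8 – highest-order bit):

```python
import numpy as np

def bit_plane(image, plane):
    """Return bit plane `plane` (1..8) of an 8-bit image as a 0/1 array."""
    return (image >> (plane - 1)) & 1

# The 8th plane equals thresholding at 128: (img >= 128) gives the same 0/1 image.
```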

  • DIP 1 2017
  • DIP 02 (2017)
Page 56: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Rods distributed over al the retina surface several rods are contected toa single nerve not specialized in detail vision serve to give a general overall picture of the filed of viewnot involved in color visionsensitive to low level of illumination

Blind spot region without receptors

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image formation in the eye

Ordinary photographic camera the lens has fixed focal length focusing atvarious distances is done by modifying the distance between the lens and the image plane (were the film or imaging chip are located)

Human eye the distance between the lens and the retina (the imaging region)is fixed the focal length needed to achieve proper focus is obtained by varyingthe shape of the lens (the fibers in the ciliary body accomplish this flattening or thickening the lens for distant or near objects respectively

distance between lens and retina along visual axix = 17 mm

range of focal length = 14 mm to 17 mm

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Illustration of Mach band effectPercieved intensityis not a simple function of actual intensity

Digital Image ProcessingDigital Image Processing

Week 1Week 1

All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Optical illusions

Digital Image ProcessingDigital Image Processing

Week 1Week 1

ALBASTRU BLUEVERDE GREENGALBEN YELLOW ROSU REDPORTOCALIU ORANGEGALBEN YELLOWVISINIU BURGUNDYALB WHITE

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Lightbullachromatic or monochromatic light - light that is void of color the attribute of such light is its intensity or amount

gray level is used to describe monochromatic intensitybecause it ranges from black to grays and to white bull chromatic light spans the electromagnetic energy spectrumfrom approximately 043 to 079 μm

quantities that describe the quality of a chromatic light source radiance

the total amount of energy that flows from the light source and it isusually measured in watts (W) luminance

measured in lumens (lm) gives a measure of the amount of energy an observer perceives from a light source

Digital Image ProcessingDigital Image Processing

Week 1Week 1

For example light emitted from a source operating in the far infraredregion of the spectrum could have significant energy (radiance) butan observer would hardly perceive it its luminance would be almostzero

brightnessa subjective descriptor of light perception that is practicallyimpossible to measure It embodies the achromatic notion ofintensity and is one of the key factors in describing color sensation

Digital Image ProcessingDigital Image Processing

Week 1Week 1

the physical meaning is determined by the source of the image

( )f D f x y

Image generated from a physical process f(xy) ndash proportional to the energy radiated by the physical source

f(xy) ndash characterized by two components

i(xy) = illumination component the amount of source illumination incident on the scene being viewed

r(xy) = reflectance component the amount of illumination reflected by the objects in the scene

( ) ( ) ( )

0 ( ) 0 ( ) 1

f x y i x y r x y

i x y r x y

Digital Image ProcessingDigital Image Processing

Week 1Week 1

r(xy)=0 - total absorption r(xy)=1 - total reflectance

i(xy) ndash determined by the illumination source

r(xy) ndash determined by the characteristics of the imaged objects

is called gray (or intensity) scale

In practice

min 0 0 max min min min max max max( ) L l f x y L L i r L i r

indoor values without additional illuminationmin max10 1000L L

black whitemin max0 1 0 1 0 1L L L L l l L

min maxL L

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image Processing

Week 1

Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed

scene

converting a continuous image f to digital form

- digitizing (x y) is called sampling

- digitizing f(x y) is called quantization

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Continuous image projected onto a sensor array Result of image sampling and quantization

Digital Image Processing

Week 1

Representing Digital Images (xy) x = 01hellipM-1 y = 01hellipN-1 ndash spatial variables or spatial coordinates

(00) (01) (0 1)(10) (11) (1 1)

( )

( 10) ( 11) ( 1 1)

f f f Nf f f N

f x y

f M f M f M N

image element pixel

00 01 0 1

10 11 1 1

10 11 1 1

( ) ( )

N

i jN M N

i j

M M M N

a a aa f x i y j f i ja a a

Aa

a a a

f(00) ndash the upper left corner of the image

Digital Image Processing

Week 1

M N ge 0 L=2k

[0 1]i j i ja a L

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system Upper limit ndash determined by saturation lower limit - noise

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Number of bits required to store a digitized image

for 2 b M N k M N b N k

When an image can have 2k intensity levels the image is referred as a k-bit image 256 discrete intensity values ndash 8-bit image

Digital Image Processing

Week 1

Spatial and Intensity Resolution Spatial resolution ndash the smallest discernible detail in an image

Measures line pairs per unit distance dots (pixels) per unit distance

Image resolution = the largest number of discernible line pairs per unit distance

(eg 100 line pairs per mm)

Dots per unit distance are commonly used in printing and publishing

In US the measure is expressed in dots per inch (dpi)

(newspapers are printed with 75 dpi glossy brochures at 175 dpi)

Intensity resolution ndash the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations

L=2k ndash most common k = 8

Intensity resolution in practice is given by k (number of bits used to quantize intensity)

Digital Image Processing

Week 1

Fig1 Reducing spatial resolution 1250 dpi(upper left) 300 dpi (upper right)

150 dpi (lower left) 72 dpi (lower right)

Digital Image Processing

Week 1

Reducing the number of gray levels 256 128 64 32

Digital Image Processing

Week 1

Reducing the number of gray levels 16 8 4 2

Digital Image Processing

Week 1

Image Interpolation - used in zooming shrinking rotating and geometric corrections

Shrinking zooming ndash image resizing ndash image resampling methods

Interpolation is the process of using known data to estimate values at unknown locations

Suppose we have an image of size 500 times 500 pixels that has to be enlarged 15 times to

750times750 pixels One way to do this is to create an imaginary 750 times 750 grid with the

same spacing as the original and then shrink it so that it fits exactly over the original

image The pixel spacing in the 750 times 750 grid will be less than in the original image

Problem assignment of intensity-level in the new 750 times 750 grid

Nearest neighbor interpolation assign for every point in the new grid (750 times 750) the

intensity of the closest pixel (nearest neighbor) from the oldoriginal grid (500 times 500)

This technique has the tendency to produce undesirable effects like severe distortion of

straight edges

Digital Image Processing

Week 1

Bilinear interpolation ndash assign for the new (x y) location the following intensity ( )v x y a x b y c x y d

where the four coefficients are determined from the 4 equations in 4 unknowns that can

be written using the 4 nearest neighbors of point (x y)

Bilinear interpolation gives much better results than nearest neighbor interpolation with a

modest increase in computational effort

Bicubic interpolation ndash assign for the new (x y) location an intensity that involves the 16

nearest neighbors of the point 3 3

0 0

( ) i ji j

i jv x y c x y

The coefficients cij are obtained solving a 16x16 linear system

intensity levels of the 16 nearest neighbors of 3 3

0 0

( )i ji j

i jc x y x y

Digital Image Processing

Week 1

Generally bicubic interpolation does a better job of preserving fine detail than the

bilinear technique Bicubic interpolation is the standard used in commercial image editing

programs such as Adobe Photoshop and Corel Photopaint

Figure 2 (a) is the same as Fig 1 (d) which was obtained by reducing the resolution of

the 1250 dpi in Fig 1(a) to 72 dpi (the size shrank from 3692 times 2812 to 213 times 162) and

then zooming the reduced image back to its original size To generate Fig 1(d) nearest

neighbor interpolation was used (both for shrinking and zooming)

Figures 2(b) and (c) were generated using the same steps but using bilinear and bicubic

interpolation respectively Figures 2(d)+(e)+(f) were obtained by reducing the resolution

from 1250 dpi to 150 dpi (instead of 72 dpi)

Digital Image Processing

Week 1

Fig 2 ndash Interpolation examples for zooming and shrinking (nearest neighbor linear bicubic)

Digital Image Processing

Week 1

Neighbors of a Pixel

A pixel p at coordinates (x y) has 4 horizontal and vertical neighbors horizontal vertical( 1 ) ( 1 ) ( 1) ( 1)x y x y x y x y

This set of pixels called the 4-neighbors of p denoted by N4 (p)

The 4 diagonal neighbors of p have coordinates ( 1 1) ( 1 1) ( 1 1) ( 1 1)x y x y x y x y

and are denoted ND(p)

The horizontal vertical and diagonal neighbors are called the 8-neighbors of p denoted

N8 (p)

If (xy) is on the border of the image some of the neighbor locations in ND(p) and N8(p)

fall outside the image

Digital Image Processing

Week 1

Adjacency Connectivity Regions Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image V 01 (V=0 V=1)

- in a gray-scale image with 256 possible gray-levels V can be any subset of 0255

We consider 3 types of adjacency

(a) 4-adjacency two pixels p and q with values from V are 4-adjacent if 4( )q N p

(b) 8-adjacency two pixels p and q with values from V are 8-adjacent if 8( )q N p

(c) m-adjacency (mixed adjacency) two pixels p and q with values from V are

m-adjacent if

4( )q N p or

( )Dq N p and the set 4 4( ) ( )N p N q has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency It is introduced to eliminate the

ambiguities that often arise when 8-adjacency is used Consider the example

Digital Image Processing

Week 1

binary image

0 1 1 0 1 1 0 1 1

1 0 1 0 0 1 0 0 1 0

0 0 1 0 0 1 0 0 1

V

The three pixels at the top (first line) in the above example show multiple (ambiguous)

8-adjacency as indicated by the dashed lines This ambiguity is removed by using

m-adjacency

Digital Image Processing

Week 1

A (digital) path (or curve) from pixel p with coordinates (xy) to q with coordinates (st)

is a sequence of distinct pixels with coordinates

and are adjacent 0 0 1 1

1 1

( ) ( ) ( ) ( ) ( )( ) ( ) 12

n n

i i i i

x y x y x y x y s tx y x y i n

The length of the path is n If 0 0( ) ( )n nx y x y the path is closed

Depending on the type of adjacency considered the paths are 4- 8- or m-paths

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image R is a region of the image if R is a connected set

Two regions R1 and R2 are said to be adjacent if 1 2R R form a connected set Regions

that are not adjacent are said to be disjoint When referring to regions only 4- and

8-adjacency are considered

Digital Image Processing

Week 1

Suppose that an image contains K disjoint regions 1 kR k K none of which

touches the image border

the complement of 1

( )K

cu k u u

k

R R R R

We call all the points in Ru the foreground of the image and the points in ( )cuR the

background of the image

The boundary (border or contour) of a region R is the set of points that are adjacent to

points in the complement of R (R)c The border of an image is the set of pixels in the

region that have at least one background neighbor This definition is referred to as the

inner border to distinguish it from the notion of outer border which is the corresponding

border in the background

Digital Image Processing

Week 1

Distance measures

For pixels p q and z with coordinates (xy) (st) and (vw) respectively D is a distance

function or metric if

(a) D(p q) ge 0 D(p q) = 0 iff p=q

(b) D(p q) = D(q p)

(c) D(p z) le D(p q) + D(q z)

The Euclidean distance between p and q is defined as 1

2 2 2 22( ) ( ) ( ) ( ) ( )eD p q x s y t x s y t

The pixels q for which ( )eD p q r are the points contained in a disk of radius r

centered at (x y)

Digital Image Processing

Week 1

The D4 distance (also called city-block distance) between p and q is defined as

4( ) | | | |D p q x s y t

The pixels q for which 4( )D p q r form a diamond centered at (xy)

4

22 1 2

2 2 1 0 1 22 1 2

2

D

The pixels with D4 = 1 are the 4-neighbors of (x y)

The D8 distance (called the chessboard distance) between p and q is defined as

8( ) max| | | |D p q x s y t

The pixels q for which 8( )D p q r form a square centered at (x y)

Digital Image Processing

Week 1

8

2 2 2 2 22 1 1 1 2

2 2 1 0 1 22 1 1 1 22 2 2 2 2

D

The pixels with D8 = 1 are the 8-neighbors of (x y)

D4 and D8 distances are independent of any paths that might exist between p and q

because these distances involve only the coordinates of the point

Digital Image Processing

Week 1

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

11 12 11 12

21 22 21 22

a a b ba a b b

Array product

11 12 11 12 11 11 12 12

21 22 21 22 21 21 22 21

a a b b a b a ba a b b a b a b

Matrix product

11 12 11 12 11 11 12 21 11 12 12 21

21 22 21 22 21 11 22 21 21 12 22 22

a a b b a b a b a b a ba a b b a b a b a b a b

We assume array operations unless stated otherwise

Digital Image Processing

Week 1

Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether it is

linear or nonlinear

( ) ( )H f x y g x y

H is said to be a linear operator if

images1 2 1 2

1 2

( ) ( ) ( ) ( )

H a f x y b f x y a H f x y b H f x y

a b f f

Example of nonlinear operator

the maximum value of the pixels of image max ( )H f f x y f

1 2

0 2 6 5 1 1

2 3 4 7f f a b

Digital Image Processing

Week 1

1 2

0 2 6 5 6 3max max 1 ( 1) max 2

2 3 4 7 2 4a f b f

0 2 6 51 max ( 1) max 3 ( 1)7 4

2 3 4 7

Arithmetic Operations in Image Processing

Let g(xy) denote a corrupted image formed by the addition of noise ( ) ( ) ( )g x y f x y x y

f(xy) ndash the noiseless image η(xy) the noise uncorrelated and has 0 average value

For a random variable z with mean m E[(z-m)2] is the variance (E( ∙ ) is the expected

value) The covariance of two random variables z1 and z2 is defined as E[(z1-m1) (z2-m2)]

The two random variables are uncorrelated when their covariance is 0

Digital Image Processing

Week 1

Objective reduce noise by adding a set of noisy images ( )ig x y (technique frequently

used in image enhancement)

1

1( ) ( )K

ii

g x y g x yK

If the noise satisfies the properties stated above we have

2 2( ) ( )

1( ) ( ) g x y x yE g x y f x yK

( ( ))E g x y is the expected value of g and and 2 2( ) ( )g x y x y are the variances of

and g respectively The standard deviation (square root of the variance) at any point in

the average image is

( ) ( )1

g x y x yK

Digital Image Processing

Week 1

As K increases the variability (as measured by the variance or the standard deviation) of

the pixel values at each location (x y) decreases Because ( ) ( )E g x y f x y this

means that ( )g x y approaches f(x y) as the number of noisy images used in the

averaging process increases

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis Figure 226(a) shows an 8-bit image in which corruption

was simulated by adding to it Gaussian noise with zero mean and a standard deviation of

64 intensity levels Figures 226(b)-(f) show the result of averaging 5 10 20 50 and 100

images respectively

Digital Image Processing

Week 1

Fig 3 Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner) Results of averaging 5 10 20 50

100 noisy images

a b c d e f

Digital Image Processing

Week 1

A frequent application of image subtraction is in the enhancement of differences between

images

(a) (b) (c)

Fig 4 (a) Infrared image of Washington DC area (b) Image obtained from (a) by setting to zero the least

significant bit of each pixel (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a) The two images seem almost the same Figure 4(c) is the difference between

images (a) and (b) Black (0) values in Figure (c) indicate locations where there is no

difference between images (a) and (b)

Digital Image Processing

Week 1

Mask mode radiography ( ) ( ) ( )g x y f x y h x y

h(x y) the mask is an X-ray image of a region of a patientrsquos body captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source The procedure consists of injecting an X-ray contrast medium into the patientrsquos

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed

Digital Image Processing

Week 1

a b c d Fig 5 ndash Angiography ndash subtraction example (a) ndash mask image (b) ndash live image (c) ndash difference between (a) and (b) (d) - image (c) enhanced

Digital Image Processing

Week 1

An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form ( ) ( ) ( )g x y f x y h x y

f(x y) ndash the ldquoperfect imagerdquo h(x y) ndash the shading function

When the shading function is known

( )( )( )

g x yf x yh x y

h(x y) is unknown but we have access to the imaging system we can obtain an

approximation to the shading function by imaging a target of constant intensity When the

sensor is not available often the shading pattern can be estimated from the image

Digital Image Processing

Week 1

(a) (b) (c)

Fig 6 Shading correction (a) ndash Shaded image of a tungsten filament magnified 130 times (b) - shading pattern (c) corrected image

Digital Image Processing

Week 1

Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1rsquos (white) in the ROI and 0rsquos elsewhere There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

(a) (b) (c)

Fig 7 (a) ndash digital dental X-ray image (b) - ROI mask for teeth with fillings (c) product of (a) and (b)

Digital Image Processing

Week 1

In practice most images are displayed using 8 bits the image values are in the range [0255] TIFF JPEG images ndash conversion to this range is automatic The conversion depends on the system used Difference of two images can produce image with values in the range [-255255] Addition of two images ndash range [0510] Many software packages simply set the negative values to 0 and set to 255 all values greater than 255 A more appropriate procedure compute

min( )mf f f

0 ( 255)max( )

ms

m

ff K K K

f

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Digital Image Processing

Week 1

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

( )

1( ) ( )xyr c S

g x y f r cm n

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation ( ) [( )]x y T v w

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image

Digital Image Processing

Week 1

[( )] ( )2 2v wT v w shrinks the original image half its size in both spatial directions

Affine transform

11 1211 21 31

21 2212 22 33

31 32

0[ 1] [ 1] [ 1] 0

1

t tx t v t w t

x y v w T v w t ty t v t w t

t t

(AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

Digital Image Processing

Week 1

inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image (v w) 1( ) ( )v w T x y

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 57: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Image formation in the eye

Ordinary photographic camera the lens has fixed focal length focusing atvarious distances is done by modifying the distance between the lens and the image plane (were the film or imaging chip are located)

Human eye the distance between the lens and the retina (the imaging region)is fixed the focal length needed to achieve proper focus is obtained by varyingthe shape of the lens (the fibers in the ciliary body accomplish this flattening or thickening the lens for distant or near objects respectively

distance between lens and retina along visual axix = 17 mm

range of focal length = 14 mm to 17 mm

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Illustration of Mach band effectPercieved intensityis not a simple function of actual intensity

Digital Image ProcessingDigital Image Processing

Week 1Week 1

All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Optical illusions

Digital Image ProcessingDigital Image Processing

Week 1Week 1

ALBASTRU BLUEVERDE GREENGALBEN YELLOW ROSU REDPORTOCALIU ORANGEGALBEN YELLOWVISINIU BURGUNDYALB WHITE

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Lightbullachromatic or monochromatic light - light that is void of color the attribute of such light is its intensity or amount

gray level is used to describe monochromatic intensitybecause it ranges from black to grays and to white bull chromatic light spans the electromagnetic energy spectrumfrom approximately 043 to 079 μm

quantities that describe the quality of a chromatic light source radiance

the total amount of energy that flows from the light source and it isusually measured in watts (W) luminance

measured in lumens (lm) gives a measure of the amount of energy an observer perceives from a light source

Digital Image ProcessingDigital Image Processing

Week 1Week 1

For example light emitted from a source operating in the far infraredregion of the spectrum could have significant energy (radiance) butan observer would hardly perceive it its luminance would be almostzero

brightnessa subjective descriptor of light perception that is practicallyimpossible to measure It embodies the achromatic notion ofintensity and is one of the key factors in describing color sensation

Digital Image ProcessingDigital Image Processing

Week 1Week 1

the physical meaning is determined by the source of the image

( )f D f x y

Image generated from a physical process f(xy) ndash proportional to the energy radiated by the physical source

f(xy) ndash characterized by two components

i(xy) = illumination component the amount of source illumination incident on the scene being viewed

r(xy) = reflectance component the amount of illumination reflected by the objects in the scene

( ) ( ) ( )

0 ( ) 0 ( ) 1

f x y i x y r x y

i x y r x y

Digital Image ProcessingDigital Image Processing

Week 1Week 1

r(xy)=0 - total absorption r(xy)=1 - total reflectance

i(xy) ndash determined by the illumination source

r(xy) ndash determined by the characteristics of the imaged objects

is called gray (or intensity) scale

In practice

min 0 0 max min min min max max max( ) L l f x y L L i r L i r

indoor values without additional illuminationmin max10 1000L L

black whitemin max0 1 0 1 0 1L L L L l l L

min maxL L

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image Processing

Week 1

Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed

scene

converting a continuous image f to digital form

- digitizing (x y) is called sampling

- digitizing f(x y) is called quantization

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Continuous image projected onto a sensor array Result of image sampling and quantization

Digital Image Processing

Week 1

Representing Digital Images (xy) x = 01hellipM-1 y = 01hellipN-1 ndash spatial variables or spatial coordinates

(00) (01) (0 1)(10) (11) (1 1)

( )

( 10) ( 11) ( 1 1)

f f f Nf f f N

f x y

f M f M f M N

image element pixel

00 01 0 1

10 11 1 1

10 11 1 1

( ) ( )

N

i jN M N

i j

M M M N

a a aa f x i y j f i ja a a

Aa

a a a

f(00) ndash the upper left corner of the image

Digital Image Processing

Week 1

M N ge 0 L=2k

[0 1]i j i ja a L

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system Upper limit ndash determined by saturation lower limit - noise

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Number of bits required to store a digitized image

for 2 b M N k M N b N k

When an image can have 2k intensity levels the image is referred as a k-bit image 256 discrete intensity values ndash 8-bit image

Digital Image Processing

Week 1

Spatial and Intensity Resolution Spatial resolution ndash the smallest discernible detail in an image

Measures line pairs per unit distance dots (pixels) per unit distance

Image resolution = the largest number of discernible line pairs per unit distance

(eg 100 line pairs per mm)

Dots per unit distance are commonly used in printing and publishing

In US the measure is expressed in dots per inch (dpi)

(newspapers are printed with 75 dpi glossy brochures at 175 dpi)

Intensity resolution ndash the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations

L=2k ndash most common k = 8

Intensity resolution in practice is given by k (number of bits used to quantize intensity)

Digital Image Processing

Week 1

Fig1 Reducing spatial resolution 1250 dpi(upper left) 300 dpi (upper right)

150 dpi (lower left) 72 dpi (lower right)

Digital Image Processing

Week 1

Reducing the number of gray levels 256 128 64 32

Digital Image Processing

Week 1

Reducing the number of gray levels 16 8 4 2

Digital Image Processing

Week 1

Image Interpolation - used in zooming shrinking rotating and geometric corrections

Shrinking zooming ndash image resizing ndash image resampling methods

Interpolation is the process of using known data to estimate values at unknown locations

Suppose we have an image of size 500 times 500 pixels that has to be enlarged 15 times to

750times750 pixels One way to do this is to create an imaginary 750 times 750 grid with the

same spacing as the original and then shrink it so that it fits exactly over the original

image The pixel spacing in the 750 times 750 grid will be less than in the original image

Problem assignment of intensity-level in the new 750 times 750 grid

Nearest neighbor interpolation assign for every point in the new grid (750 times 750) the

intensity of the closest pixel (nearest neighbor) from the oldoriginal grid (500 times 500)

This technique has the tendency to produce undesirable effects like severe distortion of

straight edges

Digital Image Processing

Week 1

Bilinear interpolation ndash assign for the new (x y) location the following intensity ( )v x y a x b y c x y d

where the four coefficients are determined from the 4 equations in 4 unknowns that can

be written using the 4 nearest neighbors of point (x y)

Bilinear interpolation gives much better results than nearest neighbor interpolation with a

modest increase in computational effort

Bicubic interpolation ndash assign for the new (x y) location an intensity that involves the 16

nearest neighbors of the point 3 3

0 0

( ) i ji j

i jv x y c x y

The coefficients cij are obtained solving a 16x16 linear system

intensity levels of the 16 nearest neighbors of 3 3

0 0

( )i ji j

i jc x y x y

Digital Image Processing

Week 1

Generally bicubic interpolation does a better job of preserving fine detail than the

bilinear technique Bicubic interpolation is the standard used in commercial image editing

programs such as Adobe Photoshop and Corel Photopaint

Figure 2 (a) is the same as Fig 1 (d) which was obtained by reducing the resolution of

the 1250 dpi in Fig 1(a) to 72 dpi (the size shrank from 3692 times 2812 to 213 times 162) and

then zooming the reduced image back to its original size To generate Fig 1(d) nearest

neighbor interpolation was used (both for shrinking and zooming)

Figures 2(b) and (c) were generated using the same steps but using bilinear and bicubic

interpolation respectively Figures 2(d)+(e)+(f) were obtained by reducing the resolution

from 1250 dpi to 150 dpi (instead of 72 dpi)

Digital Image Processing

Week 1

Fig 2 ndash Interpolation examples for zooming and shrinking (nearest neighbor linear bicubic)

Digital Image Processing

Week 1

Neighbors of a Pixel

A pixel p at coordinates (x y) has 4 horizontal and vertical neighbors horizontal vertical( 1 ) ( 1 ) ( 1) ( 1)x y x y x y x y

This set of pixels called the 4-neighbors of p denoted by N4 (p)

The 4 diagonal neighbors of p have coordinates ( 1 1) ( 1 1) ( 1 1) ( 1 1)x y x y x y x y

and are denoted ND(p)

The horizontal vertical and diagonal neighbors are called the 8-neighbors of p denoted

N8 (p)

If (xy) is on the border of the image some of the neighbor locations in ND(p) and N8(p)

fall outside the image

Digital Image Processing

Week 1

Adjacency Connectivity Regions Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image V 01 (V=0 V=1)

- in a gray-scale image with 256 possible gray-levels V can be any subset of 0255

We consider 3 types of adjacency

(a) 4-adjacency two pixels p and q with values from V are 4-adjacent if 4( )q N p

(b) 8-adjacency two pixels p and q with values from V are 8-adjacent if 8( )q N p

(c) m-adjacency (mixed adjacency) two pixels p and q with values from V are

m-adjacent if

4( )q N p or

( )Dq N p and the set 4 4( ) ( )N p N q has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency It is introduced to eliminate the

ambiguities that often arise when 8-adjacency is used Consider the example

Digital Image Processing

Week 1

binary image

0 1 1 0 1 1 0 1 1

1 0 1 0 0 1 0 0 1 0

0 0 1 0 0 1 0 0 1

V

The three pixels at the top (first line) in the above example show multiple (ambiguous)

8-adjacency as indicated by the dashed lines This ambiguity is removed by using

m-adjacency

Digital Image Processing

Week 1

A (digital) path (or curve) from pixel p with coordinates (xy) to q with coordinates (st)

is a sequence of distinct pixels with coordinates

and are adjacent 0 0 1 1

1 1

( ) ( ) ( ) ( ) ( )( ) ( ) 12

n n

i i i i

x y x y x y x y s tx y x y i n

The length of the path is n If 0 0( ) ( )n nx y x y the path is closed

Depending on the type of adjacency considered the paths are 4- 8- or m-paths

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image R is a region of the image if R is a connected set

Two regions R1 and R2 are said to be adjacent if 1 2R R form a connected set Regions

that are not adjacent are said to be disjoint When referring to regions only 4- and

8-adjacency are considered

Digital Image Processing

Week 1

Suppose that an image contains K disjoint regions 1 kR k K none of which

touches the image border

the complement of 1

( )K

cu k u u

k

R R R R

We call all the points in Ru the foreground of the image and the points in ( )cuR the

background of the image

The boundary (border or contour) of a region R is the set of points that are adjacent to

points in the complement of R (R)c The border of an image is the set of pixels in the

region that have at least one background neighbor This definition is referred to as the

inner border to distinguish it from the notion of outer border which is the corresponding

border in the background

Digital Image Processing

Week 1

Distance measures

For pixels p q and z with coordinates (xy) (st) and (vw) respectively D is a distance

function or metric if

(a) D(p q) ge 0 D(p q) = 0 iff p=q

(b) D(p q) = D(q p)

(c) D(p z) le D(p q) + D(q z)

The Euclidean distance between p and q is defined as 1

2 2 2 22( ) ( ) ( ) ( ) ( )eD p q x s y t x s y t

The pixels q for which ( )eD p q r are the points contained in a disk of radius r

centered at (x y)

Digital Image Processing

Week 1

The D4 distance (also called city-block distance) between p and q is defined as

4( ) | | | |D p q x s y t

The pixels q for which 4( )D p q r form a diamond centered at (xy)

4

22 1 2

2 2 1 0 1 22 1 2

2

D

The pixels with D4 = 1 are the 4-neighbors of (x y)

The D8 distance (called the chessboard distance) between p and q is defined as

8( ) max| | | |D p q x s y t

The pixels q for which 8( )D p q r form a square centered at (x y)

Digital Image Processing

Week 1

8

2 2 2 2 22 1 1 1 2

2 2 1 0 1 22 1 1 1 22 2 2 2 2

D

The pixels with D8 = 1 are the 8-neighbors of (x y)

D4 and D8 distances are independent of any paths that might exist between p and q

because these distances involve only the coordinates of the point

Digital Image Processing

Week 1

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

11 12 11 12

21 22 21 22

a a b ba a b b

Array product

11 12 11 12 11 11 12 12

21 22 21 22 21 21 22 21

a a b b a b a ba a b b a b a b

Matrix product

11 12 11 12 11 11 12 21 11 12 12 21

21 22 21 22 21 11 22 21 21 12 22 22

a a b b a b a b a b a ba a b b a b a b a b a b

We assume array operations unless stated otherwise

Digital Image Processing

Week 1

Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether it is

linear or nonlinear

( ) ( )H f x y g x y

H is said to be a linear operator if

images1 2 1 2

1 2

( ) ( ) ( ) ( )

H a f x y b f x y a H f x y b H f x y

a b f f

Example of nonlinear operator

the maximum value of the pixels of image max ( )H f f x y f

1 2

0 2 6 5 1 1

2 3 4 7f f a b

Digital Image Processing

Week 1

1 2

0 2 6 5 6 3max max 1 ( 1) max 2

2 3 4 7 2 4a f b f

0 2 6 51 max ( 1) max 3 ( 1)7 4

2 3 4 7

Arithmetic Operations in Image Processing

Let g(xy) denote a corrupted image formed by the addition of noise ( ) ( ) ( )g x y f x y x y

f(xy) ndash the noiseless image η(xy) the noise uncorrelated and has 0 average value

For a random variable z with mean m E[(z-m)2] is the variance (E( ∙ ) is the expected

value) The covariance of two random variables z1 and z2 is defined as E[(z1-m1) (z2-m2)]

The two random variables are uncorrelated when their covariance is 0

Digital Image Processing

Week 1

Objective reduce noise by adding a set of noisy images ( )ig x y (technique frequently

used in image enhancement)

1

1( ) ( )K

ii

g x y g x yK

If the noise satisfies the properties stated above we have

2 2( ) ( )

1( ) ( ) g x y x yE g x y f x yK

( ( ))E g x y is the expected value of g and and 2 2( ) ( )g x y x y are the variances of

and g respectively The standard deviation (square root of the variance) at any point in

the average image is

( ) ( )1

g x y x yK

Digital Image Processing

Week 1

As K increases the variability (as measured by the variance or the standard deviation) of

the pixel values at each location (x y) decreases Because ( ) ( )E g x y f x y this

means that ( )g x y approaches f(x y) as the number of noisy images used in the

averaging process increases

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis Figure 226(a) shows an 8-bit image in which corruption

was simulated by adding to it Gaussian noise with zero mean and a standard deviation of

64 intensity levels Figures 226(b)-(f) show the result of averaging 5 10 20 50 and 100

images respectively

Digital Image Processing

Week 1

Fig 3 Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner) Results of averaging 5 10 20 50

100 noisy images

a b c d e f

A frequent application of image subtraction is in the enhancement of differences between images.

Fig. 4 – (a) Infrared image of the Washington, D.C. area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images.

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between images (a) and (b); black (0) values in Figure 4(c) indicate locations where there is no difference between images (a) and (b).
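The construction behind Fig. 4 can be sketched in a few lines; the input here is a random stand-in image, not the infrared image from the figure:

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)  # stand-in 8-bit image

b = a & 0xFE              # clear the least-significant bit of every pixel
diff = a.astype(int) - b  # difference image: 1 where the LSB was set, 0 elsewhere
print(diff)
```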

Mask mode radiography:

$$g(x, y) = f(x, y) - h(x, y)$$

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images, called live images (denoted f(x, y)), of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium.

In g(x, y) we find the differences between h and f, as enhanced detail.

Since the images are captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries in the area being observed.

Fig. 5 – Angiography (subtraction example): (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c), enhanced.

An important application of image multiplication (and division) is shading correction. Suppose that an imaging sensor produces images of the form

$$g(x, y) = f(x, y)\, h(x, y)$$

where f(x, y) is the "perfect image" and h(x, y) is the shading function.

When the shading function is known,

$$f(x, y) = \frac{g(x, y)}{h(x, y)}$$

When h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.
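A minimal sketch of the correction, assuming a calibration shot g_flat of a constant-intensity target with nominal value c (the function and parameter names are illustrative):

```python
import numpy as np

def shading_correct(g, g_flat, c=255.0, eps=1e-6):
    """Flat-field sketch: estimate h(x, y) from an image of a constant-intensity
    target of nominal value c, then recover f(x, y) = g(x, y) / h(x, y)."""
    h = g_flat / c                 # approximate shading function
    return g / np.maximum(h, eps)  # guard against division by zero
```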

Fig. 6 – Shading correction: (a) shaded image of a tungsten filament, magnified approximately 130 times; (b) shading pattern; (c) corrected image.

Another use of image multiplication is in masking, also called region of interest (ROI) operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of each ROI can be arbitrary, although it usually is rectangular.

Fig. 7 – (a) Digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b).

In practice, most images are displayed using 8 bits, so image values are expected in the range [0, 255]. For TIFF and JPEG images, conversion to this range is automatic, but the conversion depends on the system used. The difference of two images can produce values in the range [−255, 255], and the addition of two images values in the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure is to compute

$$f_m = f - \min(f)$$

$$f_s = K\left[\frac{f_m}{\max(f_m)}\right]$$

which yields values in the full range [0, K] (K = 255 for 8-bit images).
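A sketch of this procedure (assumes a non-constant image, so max(f_m) > 0):

```python
import numpy as np

def rescale(f, K=255):
    """Map an arbitrary-range image into [0, K]: f_m = f - min(f), f_s = K * f_m / max(f_m)."""
    fm = f - f.min()
    return (K * (fm / fm.max())).astype(np.uint8)

# e.g. to display the difference of two 8-bit images a and b (range [-255, 255]):
# d = a.astype(int) - b.astype(int)
# show(rescale(d))   # 'show' is a hypothetical display helper
```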


Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the intensity values of individual pixels: s = T(z), where z is the intensity of a pixel in the original image and s is the intensity of the corresponding pixel in the processed image


Neighborhood operations

Let $S_{xy}$ denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y) based on the values of the intensities of the points in $S_{xy}$. For example, if $S_{xy}$ is a rectangular neighborhood of size m × n centered at (x, y), we can assign the new value of intensity by computing the average value of the pixels in $S_{xy}$:

$$g(x, y) = \frac{1}{mn}\sum_{(r, c)\,\in\, S_{xy}} f(r, c)$$

The net effect is to perform local blurring of the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of the image.
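A naive sketch of this neighborhood averaging (assumes odd m and n; a practical implementation would use convolution, e.g. scipy.ndimage.uniform_filter, instead of explicit loops):

```python
import numpy as np

def local_mean(f, m=3, n=3):
    """Naive m x n neighborhood averaging; borders handled by edge padding."""
    pm, pn = m // 2, n // 2
    padded = np.pad(f.astype(float), ((pm, pm), (pn, pn)), mode="edge")
    g = np.empty(f.shape, dtype=float)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            g[x, y] = padded[x:x + m, y:y + n].mean()  # average over S_xy
    return g
```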


Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image
- these transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules)

A geometric transformation consists of 2 basic operations:

1. a spatial transformation of coordinates;
2. intensity interpolation, which assigns intensity values to the spatially transformed pixels.

The coordinate system transformation:

$$(x, y) = T[(v, w)]$$

(v, w) – pixel coordinates in the original image
(x, y) – pixel coordinates in the transformed image


For example, $T[(v, w)] = (v/2,\, w/2)$ shrinks the original image to half its size in both spatial directions.

Affine transform:

$$[x \;\; y \;\; 1] = [v \;\; w \;\; 1]\,\mathbf{T} = [v \;\; w \;\; 1] \begin{bmatrix} t_{11} & t_{12} & 0\\ t_{21} & t_{22} & 0\\ t_{31} & t_{32} & 1 \end{bmatrix} \tag{AT}$$

that is,

$$x = t_{11} v + t_{21} w + t_{31}, \qquad y = t_{12} v + t_{22} w + t_{32}$$

This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3×3 matrix equal to the matrix product of the scaling, rotation, and translation matrices from Table 1.
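A sketch of composing and applying such a transform. Since Table 1 is not reproduced here, the sign conventions of the rotation matrix below are one common choice for row-vector coordinates, not necessarily the table's:

```python
import numpy as np

def affine(points, T):
    """Apply [x y 1] = [v w 1] T to an (N, 2) array of (v, w) coordinates."""
    vw1 = np.hstack([points, np.ones((len(points), 1))])
    return (vw1 @ T)[:, :2]

theta = np.deg2rad(30)
scale = np.diag([2.0, 2.0, 1.0])                             # resize by 2
rotate = np.array([[np.cos(theta), np.sin(theta), 0],
                   [-np.sin(theta), np.cos(theta), 0],
                   [0, 0, 1]])                               # rotate by theta
translate = np.array([[1.0, 0, 0], [0, 1, 0], [10, 20, 1]])  # move by (10, 20)
T = scale @ rotate @ translate                               # composite transform
print(affine(np.array([[1.0, 1.0]]), T))
```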

Table 1 – Affine transformations (scaling, rotation, translation, and shear matrices).


The preceding transformations relocate pixels on an image to new locations. To complete the process, we have to assign intensity values to those locations, which is done using intensity interpolation (nearest neighbor, bilinear, or bicubic interpolation).

In practice, we can use equation (AT) in two basic ways.

Forward mapping: scan the pixels of the input image and, for each (v, w), compute the spatial location (x, y) of the corresponding pixel in the new image using (AT) directly.

Problems:

- intensity assignment when 2 or more pixels in the original image are transformed to the same location in the output image;
- some output locations have no correspondent in the original image (no intensity assignment).


Inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image,

$$(v, w) = T^{-1}[(x, y)]$$

then interpolate among the nearest input pixels to determine the intensity of the output pixel value.

Inverse mappings are more efficient to implement than forward mappings and are used in numerous commercial implementations of spatial transformations (MATLAB, for example).
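A minimal nearest-neighbor sketch of inverse mapping, where T_inv is assumed to be the inverse of the 3×3 affine matrix T:

```python
import numpy as np

def warp_inverse_nn(f, T_inv, out_shape):
    """Inverse mapping with nearest-neighbor interpolation (sketch)."""
    g = np.zeros(out_shape, dtype=f.dtype)
    for x in range(out_shape[0]):
        for y in range(out_shape[1]):
            v, w, _ = np.array([x, y, 1.0]) @ T_inv  # back to input coordinates
            vi, wi = int(round(v)), int(round(w))    # nearest input pixel
            if 0 <= vi < f.shape[0] and 0 <= wi < f.shape[1]:
                g[x, y] = f[vi, wi]
    return g
```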


Image registration – align two or more images of the same scene.

In image registration, we have available the input and output images, but the specific transformation that produced the output image from the input is generally unknown. The problem is to estimate the transformation function and then use it to register the two images. For example:

- it may be of interest to align (register) two or more images taken at approximately the same time, but using different imaging systems (e.g., an MRI scanner and a PET scanner);
- or to align images of a given location, taken by the same instrument at different moments in time (satellite images).

One approach solves the problem using tie points (also called control points), which are corresponding points whose locations are known precisely in the input and reference images.


How to select tie points?

- by selecting them interactively;
- by using algorithms that try to detect these points automatically;
- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors; these objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points.

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points both on the input image and on the reference image. A simple model based on a bilinear approximation is given by

$$x = c_1 v + c_2 w + c_3 v w + c_4$$

$$y = c_5 v + c_6 w + c_7 v w + c_8$$

where (v, w) and (x, y) are the coordinates of corresponding tie points (the 4 pairs yield an 8×8 linear system for the $c_i$).
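A sketch of setting up and solving that 8×8 system, assuming the 4 tie-point pairs are in general position so the system is nonsingular (names are illustrative):

```python
import numpy as np

def bilinear_coeffs(src, dst):
    """Solve for c1..c8 given four tie-point pairs (v, w) -> (x, y)."""
    A = np.zeros((8, 8))
    b = np.zeros(8)
    for i, ((v, w), (x, y)) in enumerate(zip(src, dst)):
        A[2 * i, 0:4] = [v, w, v * w, 1]      # row for the x equation
        A[2 * i + 1, 4:8] = [v, w, v * w, 1]  # row for the y equation
        b[2 * i], b[2 * i + 1] = x, y
    return np.linalg.solve(A, b)              # [c1, c2, c3, c4, c5, c6, c7, c8]
```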

When 4 tie points are insufficient to obtain satisfactory registration, a frequently used approach is to select a larger number of tie points and, using this new set, subdivide the image into rectangular subregions marked by groups of 4 tie points, applying the transformation model described above on each subregion. The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometric distortion.

(a) Reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c).


Probabilistic Methods

$z_i$, i = 0, 1, …, L−1: the values of all possible intensities in an M×N digital image

$p(z_k)$: the probability that intensity level $z_k$ occurs in the given image,

$$p(z_k) = \frac{n_k}{MN}$$

where $n_k$ is the number of times that intensity $z_k$ occurs in the image (MN is the total number of pixels in the image), and

$$\sum_{k=0}^{L-1} p(z_k) = 1$$

The mean (average) intensity of an image is given by

$$m = \sum_{k=0}^{L-1} z_k\, p(z_k)$$

The variance of the intensities is

$$\sigma^2 = \sum_{k=0}^{L-1} (z_k - m)^2\, p(z_k)$$

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

$$\mu_n(z) = \sum_{k=0}^{L-1} (z_k - m)^n\, p(z_k)$$

(note that $\mu_0(z) = 1$, $\mu_1(z) = 0$, and $\mu_2(z) = \sigma^2$)

$\mu_3(z) > 0$: the intensities are biased to values higher than the mean;
$\mu_3(z) < 0$: the intensities are biased to values lower than the mean;
$\mu_3(z) = 0$: the intensities are distributed approximately equally on both sides of the mean.

Fig. 1 – (a) Low contrast; (b) medium contrast; (c) high contrast.

Figure 1(a) – standard deviation 14.3 (variance = 204.5)
Figure 1(b) – standard deviation 31.6 (variance = 998.6)
Figure 1(c) – standard deviation 49.2 (variance = 2420.6)
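These statistics follow directly from the normalized histogram; a minimal sketch for an unsigned integer image with levels 0..L−1:

```python
import numpy as np

def intensity_stats(img, L=256):
    """Histogram-based mean, variance, and third central moment of an image."""
    z = np.arange(L)
    p = np.bincount(img.ravel(), minlength=L) / img.size  # p(z_k) = n_k / MN
    m = np.sum(z * p)                                     # mean intensity
    mu2 = np.sum((z - m) ** 2 * p)                        # variance (contrast)
    mu3 = np.sum((z - m) ** 3 * p)                        # sign gives the bias
    return m, mu2, mu3
```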


Intensity Transformations and Spatial Filtering

$$g(x, y) = T[f(x, y)]$$

f(x, y) – input image; g(x, y) – output image; T – an operator on f, defined over a neighborhood of (x, y)

- the neighborhood of the point (x, y), $S_{xy}$, usually is rectangular, centered on (x, y), and much smaller in size than the image


- spatial filtering: the operator T (the neighborhood and the operation applied on it) is called a spatial filter (spatial mask, kernel, template, or window)

When $S_{xy}$ is reduced to the single point (x, y), T becomes an intensity (gray-level or mapping) transformation function,

$$s = T(r)$$

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2, left: T produces an output image of higher contrast than the original, by darkening the intensity levels below k and brightening the levels above k – this technique is called contrast stretching.

Fig. 2 – Intensity transformation functions: left, contrast stretching; right, thresholding function.


Figure 2, right: T produces a binary output image; a mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L−1] is obtained using the function

$$s = T(r) = L - 1 - r$$

- the equivalent of a photographic negative
- a technique suited for enhancing white or gray detail embedded in dark regions of an image

Original image (left) and its negative (right).


Log Transformations: $s = T(r) = c\,\log(1 + r)$, where c is a constant and $r \ge 0$

(Plots of some basic intensity transformation functions.)


This transformation maps a narrow range of low intensity values in the input into a wider range of output values. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values; the opposite is true of the inverse log transformation. The log function compresses the dynamic range of images with large variations in pixel values.

Figure 4(a) – intensity values in the range 0 to 1.5 × 10⁶
Figure 4(b) – log transformation of Figure 4(a) with c = 1; range 0 to 6.2

Fig. 4 – (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.


Power-Law (Gamma) Transformations

$$s = T(r) = c\, r^{\gamma}$$

where c and γ are positive constants (sometimes written $s = c\,(r + \varepsilon)^{\gamma}$ to account for an offset when the input is zero).

(Plots of the gamma transformation for different values of γ, with c = 1.)


Power-law curves with fractional exponents (γ < 1) map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. Curves with γ > 1 have the opposite effect of those generated with γ < 1.

c = γ = 1: identity transformation.

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
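A minimal sketch of the transformation on an 8-bit image (the normalization to [0, 1] before applying the power is an implementation choice):

```python
import numpy as np

def gamma_transform(f, c=1.0, gamma=0.5):
    """Power-law transformation s = c * r**gamma on an 8-bit image."""
    r = f / 255.0  # normalize intensities to [0, 1]
    s = c * np.power(r, gamma)
    return np.clip(255.0 * s, 0, 255).astype(np.uint8)

# gamma < 1 brightens dark regions (e.g. 0.4); gamma > 1 darkens them (e.g. 3.0)
```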

(a) Aerial image; (b)-(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively.


Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so that it spans the full intensity range of the recording tool or display device

Fig. 5 – (a) Form of the piecewise-linear transformation function; (b) low-contrast image; (c) result of contrast stretching; (d) result of thresholding.

$$T(r) = \begin{cases} \dfrac{s_1}{r_1}\, r, & r \in [0, r_1]\\[2mm] s_1 + \dfrac{s_2 - s_1}{r_2 - r_1}\,(r - r_1), & r \in [r_1, r_2]\\[2mm] s_2 + \dfrac{L - 1 - s_2}{L - 1 - r_2}\,(r - r_2), & r \in [r_2, L - 1] \end{cases}$$

r₁ = s₁ and r₂ = s₂: identity transformation (no change)
r₁ = r₂, s₁ = 0, s₂ = L − 1: thresholding function

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c) – contrast stretching, obtained by setting the parameters $(r_1, s_1) = (r_{min}, 0)$ and $(r_2, s_2) = (r_{max}, L-1)$, where $r_{min}$ and $r_{max}$ denote the minimum and maximum gray levels in the image, respectively; the transformation function thus stretched the levels linearly from their original range to the full range [0, L−1].

Figure 5(d) – the thresholding function was used, with $(r_1, s_1) = (m, 0)$ and $(r_2, s_2) = (m, L-1)$, where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.
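A sketch of this transformation function applied to a whole image (assumes 0 < r1 < r2 < L − 1, so no division-by-zero guards are included):

```python
import numpy as np

def contrast_stretch(f, r1, s1, r2, s2, L=256):
    """Piecewise-linear contrast stretching through (r1, s1) and (r2, s2)."""
    r = f.astype(float)
    low = (s1 / r1) * r
    mid = s1 + (s2 - s1) / (r2 - r1) * (r - r1)
    high = s2 + (L - 1 - s2) / (L - 1 - r2) * (r - r2)
    s = np.where(r < r1, low, np.where(r <= r2, mid, high))
    return np.clip(s, 0, L - 1).astype(np.uint8)

# full-range stretch as in Fig. 5(c), assuming 0 < f.min() < f.max() < 255:
# contrast_stretch(f, f.min(), 0, f.max(), 255)
```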

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches to intensity-level slicing (both are sketched in code below):

1. display in one value (white, for example) all the values in the range of interest and in another (say, black) all other intensities – a transformation that highlights range [A, B] and reduces all other intensities to a lower level;
2. brighten (or darken) the desired range of intensities but leave unchanged all other intensities in the image – a transformation that highlights range [A, B] and preserves all other intensities.
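Minimal sketches of the two approaches (A and B delimit the range of interest):

```python
import numpy as np

def slice_binary(f, A, B, hi=255, lo=0):
    """Approach 1: one value inside [A, B], another everywhere else."""
    return np.where((f >= A) & (f <= B), hi, lo).astype(np.uint8)

def slice_preserve(f, A, B, value=255):
    """Approach 2: set [A, B] to a chosen value, keep the rest unchanged."""
    return np.where((f >= A) & (f <= B), value, f).astype(np.uint8)
```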

Figure 6 (left) – aortic angiogram near the kidney area. The purpose of intensity slicing here is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the scale of intensities; this type of enhancement produces a binary image, which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example).

In Figure 6 (right), the second technique was used: a band of intensities in the mid-gray range around the mean intensity was set to black, while the other intensities remain unchanged.

Fig. 6 – Aortic angiogram and intensity-sliced versions.

Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2. This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 containing the lowest-order bit, plane 8 the highest-order bit).


The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding transformation function that maps all intensities between 0 and 127 to 0 and all intensities between 128 and 255 to 1. Bit-plane slicing is useful for analyzing the relative importance of each bit in the image, which helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
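A minimal sketch of extracting the planes from a uint8 image:

```python
import numpy as np

def bit_planes(f):
    """Return the eight 1-bit planes of an 8-bit image; plane 1 is the LSB."""
    return [(f >> k) & 1 for k in range(8)]

# The 8th plane (MSB) coincides with thresholding at 128:
# ((f >> 7) & 1) == (f >= 128) for every pixel of a uint8 image
```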

  • DIP 1 2017
  • DIP 02 (2017)
Page 58: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Illustration of Mach band effectPercieved intensityis not a simple function of actual intensity

Digital Image ProcessingDigital Image Processing

Week 1Week 1

All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Optical illusions

Digital Image ProcessingDigital Image Processing

Week 1Week 1

ALBASTRU BLUEVERDE GREENGALBEN YELLOW ROSU REDPORTOCALIU ORANGEGALBEN YELLOWVISINIU BURGUNDYALB WHITE

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Lightbullachromatic or monochromatic light - light that is void of color the attribute of such light is its intensity or amount

gray level is used to describe monochromatic intensitybecause it ranges from black to grays and to white bull chromatic light spans the electromagnetic energy spectrumfrom approximately 043 to 079 μm

quantities that describe the quality of a chromatic light source radiance

the total amount of energy that flows from the light source and it isusually measured in watts (W) luminance

measured in lumens (lm) gives a measure of the amount of energy an observer perceives from a light source

Digital Image ProcessingDigital Image Processing

Week 1Week 1

For example light emitted from a source operating in the far infraredregion of the spectrum could have significant energy (radiance) butan observer would hardly perceive it its luminance would be almostzero

brightnessa subjective descriptor of light perception that is practicallyimpossible to measure It embodies the achromatic notion ofintensity and is one of the key factors in describing color sensation

Digital Image ProcessingDigital Image Processing

Week 1Week 1

the physical meaning is determined by the source of the image

( )f D f x y

Image generated from a physical process f(xy) ndash proportional to the energy radiated by the physical source

f(xy) ndash characterized by two components

i(xy) = illumination component the amount of source illumination incident on the scene being viewed

r(xy) = reflectance component the amount of illumination reflected by the objects in the scene

( ) ( ) ( )

0 ( ) 0 ( ) 1

f x y i x y r x y

i x y r x y

Digital Image ProcessingDigital Image Processing

Week 1Week 1

r(xy)=0 - total absorption r(xy)=1 - total reflectance

i(xy) ndash determined by the illumination source

r(xy) ndash determined by the characteristics of the imaged objects

is called gray (or intensity) scale

In practice

min 0 0 max min min min max max max( ) L l f x y L L i r L i r

indoor values without additional illuminationmin max10 1000L L

black whitemin max0 1 0 1 0 1L L L L l l L

min maxL L

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image Processing

Week 1

Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed

scene

converting a continuous image f to digital form

- digitizing (x y) is called sampling

- digitizing f(x y) is called quantization

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Continuous image projected onto a sensor array Result of image sampling and quantization

Digital Image Processing

Week 1

Representing Digital Images (xy) x = 01hellipM-1 y = 01hellipN-1 ndash spatial variables or spatial coordinates

(00) (01) (0 1)(10) (11) (1 1)

( )

( 10) ( 11) ( 1 1)

f f f Nf f f N

f x y

f M f M f M N

image element pixel

00 01 0 1

10 11 1 1

10 11 1 1

( ) ( )

N

i jN M N

i j

M M M N

a a aa f x i y j f i ja a a

Aa

a a a

f(00) ndash the upper left corner of the image

Digital Image Processing

Week 1

M N ge 0 L=2k

[0 1]i j i ja a L

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system Upper limit ndash determined by saturation lower limit - noise

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Number of bits required to store a digitized image

for 2 b M N k M N b N k

When an image can have 2k intensity levels the image is referred as a k-bit image 256 discrete intensity values ndash 8-bit image

Digital Image Processing

Week 1

Spatial and Intensity Resolution Spatial resolution ndash the smallest discernible detail in an image

Measures line pairs per unit distance dots (pixels) per unit distance

Image resolution = the largest number of discernible line pairs per unit distance

(eg 100 line pairs per mm)

Dots per unit distance are commonly used in printing and publishing

In US the measure is expressed in dots per inch (dpi)

(newspapers are printed with 75 dpi glossy brochures at 175 dpi)

Intensity resolution ndash the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations

L=2k ndash most common k = 8

Intensity resolution in practice is given by k (number of bits used to quantize intensity)

Digital Image Processing

Week 1

Fig1 Reducing spatial resolution 1250 dpi(upper left) 300 dpi (upper right)

150 dpi (lower left) 72 dpi (lower right)

Digital Image Processing

Week 1

Reducing the number of gray levels 256 128 64 32

Digital Image Processing

Week 1

Reducing the number of gray levels 16 8 4 2

Digital Image Processing

Week 1

Image Interpolation - used in zooming shrinking rotating and geometric corrections

Shrinking zooming ndash image resizing ndash image resampling methods

Interpolation is the process of using known data to estimate values at unknown locations

Suppose we have an image of size 500 times 500 pixels that has to be enlarged 15 times to

750times750 pixels One way to do this is to create an imaginary 750 times 750 grid with the

same spacing as the original and then shrink it so that it fits exactly over the original

image The pixel spacing in the 750 times 750 grid will be less than in the original image

Problem assignment of intensity-level in the new 750 times 750 grid

Nearest neighbor interpolation assign for every point in the new grid (750 times 750) the

intensity of the closest pixel (nearest neighbor) from the oldoriginal grid (500 times 500)

This technique has the tendency to produce undesirable effects like severe distortion of

straight edges

Digital Image Processing

Week 1

Bilinear interpolation ndash assign for the new (x y) location the following intensity ( )v x y a x b y c x y d

where the four coefficients are determined from the 4 equations in 4 unknowns that can

be written using the 4 nearest neighbors of point (x y)

Bilinear interpolation gives much better results than nearest neighbor interpolation with a

modest increase in computational effort

Bicubic interpolation ndash assign for the new (x y) location an intensity that involves the 16

nearest neighbors of the point 3 3

0 0

( ) i ji j

i jv x y c x y

The coefficients cij are obtained solving a 16x16 linear system

intensity levels of the 16 nearest neighbors of 3 3

0 0

( )i ji j

i jc x y x y

Digital Image Processing

Week 1

Generally bicubic interpolation does a better job of preserving fine detail than the

bilinear technique Bicubic interpolation is the standard used in commercial image editing

programs such as Adobe Photoshop and Corel Photopaint

Figure 2 (a) is the same as Fig 1 (d) which was obtained by reducing the resolution of

the 1250 dpi in Fig 1(a) to 72 dpi (the size shrank from 3692 times 2812 to 213 times 162) and

then zooming the reduced image back to its original size To generate Fig 1(d) nearest

neighbor interpolation was used (both for shrinking and zooming)

Figures 2(b) and (c) were generated using the same steps but using bilinear and bicubic

interpolation respectively Figures 2(d)+(e)+(f) were obtained by reducing the resolution

from 1250 dpi to 150 dpi (instead of 72 dpi)

Digital Image Processing

Week 1

Fig 2 ndash Interpolation examples for zooming and shrinking (nearest neighbor linear bicubic)

Digital Image Processing

Week 1

Neighbors of a Pixel

A pixel p at coordinates (x y) has 4 horizontal and vertical neighbors horizontal vertical( 1 ) ( 1 ) ( 1) ( 1)x y x y x y x y

This set of pixels called the 4-neighbors of p denoted by N4 (p)

The 4 diagonal neighbors of p have coordinates ( 1 1) ( 1 1) ( 1 1) ( 1 1)x y x y x y x y

and are denoted ND(p)

The horizontal vertical and diagonal neighbors are called the 8-neighbors of p denoted

N8 (p)

If (xy) is on the border of the image some of the neighbor locations in ND(p) and N8(p)

fall outside the image

Digital Image Processing

Week 1

Adjacency Connectivity Regions Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image V 01 (V=0 V=1)

- in a gray-scale image with 256 possible gray-levels V can be any subset of 0255

We consider 3 types of adjacency

(a) 4-adjacency two pixels p and q with values from V are 4-adjacent if 4( )q N p

(b) 8-adjacency two pixels p and q with values from V are 8-adjacent if 8( )q N p

(c) m-adjacency (mixed adjacency) two pixels p and q with values from V are

m-adjacent if

4( )q N p or

( )Dq N p and the set 4 4( ) ( )N p N q has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency It is introduced to eliminate the

ambiguities that often arise when 8-adjacency is used Consider the example

Digital Image Processing

Week 1

binary image

0 1 1 0 1 1 0 1 1

1 0 1 0 0 1 0 0 1 0

0 0 1 0 0 1 0 0 1

V

The three pixels at the top (first line) in the above example show multiple (ambiguous)

8-adjacency as indicated by the dashed lines This ambiguity is removed by using

m-adjacency

Digital Image Processing

Week 1

A (digital) path (or curve) from pixel p with coordinates (xy) to q with coordinates (st)

is a sequence of distinct pixels with coordinates

and are adjacent 0 0 1 1

1 1

( ) ( ) ( ) ( ) ( )( ) ( ) 12

n n

i i i i

x y x y x y x y s tx y x y i n

The length of the path is n If 0 0( ) ( )n nx y x y the path is closed

Depending on the type of adjacency considered the paths are 4- 8- or m-paths

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image R is a region of the image if R is a connected set

Two regions R1 and R2 are said to be adjacent if 1 2R R form a connected set Regions

that are not adjacent are said to be disjoint When referring to regions only 4- and

8-adjacency are considered

Digital Image Processing

Week 1

Suppose that an image contains K disjoint regions 1 kR k K none of which

touches the image border

the complement of 1

( )K

cu k u u

k

R R R R

We call all the points in Ru the foreground of the image and the points in ( )cuR the

background of the image

The boundary (border or contour) of a region R is the set of points that are adjacent to

points in the complement of R (R)c The border of an image is the set of pixels in the

region that have at least one background neighbor This definition is referred to as the

inner border to distinguish it from the notion of outer border which is the corresponding

border in the background

Digital Image Processing

Week 1

Distance measures

For pixels p q and z with coordinates (xy) (st) and (vw) respectively D is a distance

function or metric if

(a) D(p q) ge 0 D(p q) = 0 iff p=q

(b) D(p q) = D(q p)

(c) D(p z) le D(p q) + D(q z)

The Euclidean distance between p and q is defined as 1

2 2 2 22( ) ( ) ( ) ( ) ( )eD p q x s y t x s y t

The pixels q for which ( )eD p q r are the points contained in a disk of radius r

centered at (x y)

Digital Image Processing

Week 1

The D4 distance (also called city-block distance) between p and q is defined as

4( ) | | | |D p q x s y t

The pixels q for which 4( )D p q r form a diamond centered at (xy)

4

22 1 2

2 2 1 0 1 22 1 2

2

D

The pixels with D4 = 1 are the 4-neighbors of (x y)

The D8 distance (called the chessboard distance) between p and q is defined as

8( ) max| | | |D p q x s y t

The pixels q for which 8( )D p q r form a square centered at (x y)

Digital Image Processing

Week 1

8

2 2 2 2 22 1 1 1 2

2 2 1 0 1 22 1 1 1 22 2 2 2 2

D

The pixels with D8 = 1 are the 8-neighbors of (x y)

D4 and D8 distances are independent of any paths that might exist between p and q

because these distances involve only the coordinates of the point

Digital Image Processing

Week 1

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

11 12 11 12

21 22 21 22

a a b ba a b b

Array product

11 12 11 12 11 11 12 12

21 22 21 22 21 21 22 21

a a b b a b a ba a b b a b a b

Matrix product

11 12 11 12 11 11 12 21 11 12 12 21

21 22 21 22 21 11 22 21 21 12 22 22

a a b b a b a b a b a ba a b b a b a b a b a b

We assume array operations unless stated otherwise

Digital Image Processing

Week 1

Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether it is

linear or nonlinear

( ) ( )H f x y g x y

H is said to be a linear operator if

images1 2 1 2

1 2

( ) ( ) ( ) ( )

H a f x y b f x y a H f x y b H f x y

a b f f

Example of nonlinear operator

the maximum value of the pixels of image max ( )H f f x y f

1 2

0 2 6 5 1 1

2 3 4 7f f a b

Digital Image Processing

Week 1

1 2

0 2 6 5 6 3max max 1 ( 1) max 2

2 3 4 7 2 4a f b f

0 2 6 51 max ( 1) max 3 ( 1)7 4

2 3 4 7

Arithmetic Operations in Image Processing

Let g(xy) denote a corrupted image formed by the addition of noise ( ) ( ) ( )g x y f x y x y

f(xy) ndash the noiseless image η(xy) the noise uncorrelated and has 0 average value

For a random variable z with mean m E[(z-m)2] is the variance (E( ∙ ) is the expected

value) The covariance of two random variables z1 and z2 is defined as E[(z1-m1) (z2-m2)]

The two random variables are uncorrelated when their covariance is 0

Digital Image Processing

Week 1

Objective reduce noise by adding a set of noisy images ( )ig x y (technique frequently

used in image enhancement)

1

1( ) ( )K

ii

g x y g x yK

If the noise satisfies the properties stated above we have

2 2( ) ( )

1( ) ( ) g x y x yE g x y f x yK

( ( ))E g x y is the expected value of g and and 2 2( ) ( )g x y x y are the variances of

and g respectively The standard deviation (square root of the variance) at any point in

the average image is

( ) ( )1

g x y x yK

Digital Image Processing

Week 1

As K increases the variability (as measured by the variance or the standard deviation) of

the pixel values at each location (x y) decreases Because ( ) ( )E g x y f x y this

means that ( )g x y approaches f(x y) as the number of noisy images used in the

averaging process increases

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis Figure 226(a) shows an 8-bit image in which corruption

was simulated by adding to it Gaussian noise with zero mean and a standard deviation of

64 intensity levels Figures 226(b)-(f) show the result of averaging 5 10 20 50 and 100

images respectively

Digital Image Processing

Week 1

Fig 3 Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner) Results of averaging 5 10 20 50

100 noisy images

a b c d e f

Digital Image Processing

Week 1

A frequent application of image subtraction is in the enhancement of differences between

images

(a) (b) (c)

Fig 4 (a) Infrared image of Washington DC area (b) Image obtained from (a) by setting to zero the least

significant bit of each pixel (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a) The two images seem almost the same Figure 4(c) is the difference between

images (a) and (b) Black (0) values in Figure (c) indicate locations where there is no

difference between images (a) and (b)

Digital Image Processing

Week 1

Mask mode radiography ( ) ( ) ( )g x y f x y h x y

h(x y) the mask is an X-ray image of a region of a patientrsquos body captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source The procedure consists of injecting an X-ray contrast medium into the patientrsquos

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed

Digital Image Processing

Week 1

a b c d Fig 5 ndash Angiography ndash subtraction example (a) ndash mask image (b) ndash live image (c) ndash difference between (a) and (b) (d) - image (c) enhanced

Digital Image Processing

Week 1

An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form ( ) ( ) ( )g x y f x y h x y

f(x y) ndash the ldquoperfect imagerdquo h(x y) ndash the shading function

When the shading function is known

( )( )( )

g x yf x yh x y

h(x y) is unknown but we have access to the imaging system we can obtain an

approximation to the shading function by imaging a target of constant intensity When the

sensor is not available often the shading pattern can be estimated from the image

Digital Image Processing

Week 1

(a) (b) (c)

Fig 6 Shading correction (a) ndash Shaded image of a tungsten filament magnified 130 times (b) - shading pattern (c) corrected image

Digital Image Processing

Week 1

Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1rsquos (white) in the ROI and 0rsquos elsewhere There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

(a) (b) (c)

Fig 7 (a) ndash digital dental X-ray image (b) - ROI mask for teeth with fillings (c) product of (a) and (b)

Digital Image Processing

Week 1

In practice most images are displayed using 8 bits the image values are in the range [0255] TIFF JPEG images ndash conversion to this range is automatic The conversion depends on the system used Difference of two images can produce image with values in the range [-255255] Addition of two images ndash range [0510] Many software packages simply set the negative values to 0 and set to 255 all values greater than 255 A more appropriate procedure compute

min( )mf f f

0 ( 255)max( )

ms

m

ff K K K

f

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Digital Image Processing

Week 1

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

( )

1( ) ( )xyr c S

g x y f r cm n

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation ( ) [( )]x y T v w

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image

Digital Image Processing

Week 1

[( )] ( )2 2v wT v w shrinks the original image half its size in both spatial directions

Affine transform

11 1211 21 31

21 2212 22 33

31 32

0[ 1] [ 1] [ 1] 0

1

t tx t v t w t

x y v w T v w t ty t v t w t

t t

(AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

Digital Image Processing

Week 1

inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image (v w) 1( ) ( )v w T x y

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 59: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Illustration of Mach band effectPercieved intensityis not a simple function of actual intensity

Digital Image ProcessingDigital Image Processing

Week 1Week 1

All the inner squares have the same intensity but they appear progressively darker as the background becomes lighter

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Optical illusions

Digital Image ProcessingDigital Image Processing

Week 1Week 1

ALBASTRU BLUEVERDE GREENGALBEN YELLOW ROSU REDPORTOCALIU ORANGEGALBEN YELLOWVISINIU BURGUNDYALB WHITE

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Lightbullachromatic or monochromatic light - light that is void of color the attribute of such light is its intensity or amount

gray level is used to describe monochromatic intensitybecause it ranges from black to grays and to white bull chromatic light spans the electromagnetic energy spectrumfrom approximately 043 to 079 μm

quantities that describe the quality of a chromatic light source radiance

the total amount of energy that flows from the light source and it isusually measured in watts (W) luminance

measured in lumens (lm) gives a measure of the amount of energy an observer perceives from a light source

Digital Image ProcessingDigital Image Processing

Week 1Week 1

For example light emitted from a source operating in the far infraredregion of the spectrum could have significant energy (radiance) butan observer would hardly perceive it its luminance would be almostzero

brightnessa subjective descriptor of light perception that is practicallyimpossible to measure It embodies the achromatic notion ofintensity and is one of the key factors in describing color sensation

Digital Image ProcessingDigital Image Processing

Week 1Week 1

the physical meaning is determined by the source of the image

( )f D f x y

Image generated from a physical process f(xy) ndash proportional to the energy radiated by the physical source

f(xy) ndash characterized by two components

i(xy) = illumination component the amount of source illumination incident on the scene being viewed

r(xy) = reflectance component the amount of illumination reflected by the objects in the scene

( ) ( ) ( )

0 ( ) 0 ( ) 1

f x y i x y r x y

i x y r x y

Digital Image ProcessingDigital Image Processing

Week 1Week 1

r(xy)=0 - total absorption r(xy)=1 - total reflectance

i(xy) ndash determined by the illumination source

r(xy) ndash determined by the characteristics of the imaged objects

is called gray (or intensity) scale

In practice

min 0 0 max min min min max max max( ) L l f x y L L i r L i r

indoor values without additional illuminationmin max10 1000L L

black whitemin max0 1 0 1 0 1L L L L l l L

min maxL L

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image Processing

Week 1

Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed

scene

converting a continuous image f to digital form

- digitizing (x y) is called sampling

- digitizing f(x y) is called quantization

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Continuous image projected onto a sensor array Result of image sampling and quantization

Digital Image Processing

Week 1

Representing Digital Images (xy) x = 01hellipM-1 y = 01hellipN-1 ndash spatial variables or spatial coordinates

(00) (01) (0 1)(10) (11) (1 1)

( )

( 10) ( 11) ( 1 1)

f f f Nf f f N

f x y

f M f M f M N

image element pixel

00 01 0 1

10 11 1 1

10 11 1 1

( ) ( )

N

i jN M N

i j

M M M N

a a aa f x i y j f i ja a a

Aa

a a a

f(00) ndash the upper left corner of the image

Digital Image Processing

Week 1

M N ge 0 L=2k

[0 1]i j i ja a L

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system Upper limit ndash determined by saturation lower limit - noise

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Number of bits required to store a digitized image

for 2 b M N k M N b N k

When an image can have 2k intensity levels the image is referred as a k-bit image 256 discrete intensity values ndash 8-bit image

Digital Image Processing

Week 1

Spatial and Intensity Resolution Spatial resolution ndash the smallest discernible detail in an image

Measures line pairs per unit distance dots (pixels) per unit distance

Image resolution = the largest number of discernible line pairs per unit distance

(eg 100 line pairs per mm)

Dots per unit distance are commonly used in printing and publishing

In US the measure is expressed in dots per inch (dpi)

(newspapers are printed with 75 dpi glossy brochures at 175 dpi)

Intensity resolution ndash the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations

L=2k ndash most common k = 8

Intensity resolution in practice is given by k (number of bits used to quantize intensity)

Digital Image Processing

Week 1

Fig1 Reducing spatial resolution 1250 dpi(upper left) 300 dpi (upper right)

150 dpi (lower left) 72 dpi (lower right)

Digital Image Processing

Week 1

Reducing the number of gray levels 256 128 64 32

Digital Image Processing

Week 1

Reducing the number of gray levels 16 8 4 2

Digital Image Processing

Week 1

Image Interpolation - used in zooming shrinking rotating and geometric corrections

Shrinking zooming ndash image resizing ndash image resampling methods

Interpolation is the process of using known data to estimate values at unknown locations

Suppose we have an image of size 500 times 500 pixels that has to be enlarged 15 times to

750times750 pixels One way to do this is to create an imaginary 750 times 750 grid with the

same spacing as the original and then shrink it so that it fits exactly over the original

image The pixel spacing in the 750 times 750 grid will be less than in the original image

Problem assignment of intensity-level in the new 750 times 750 grid

Nearest neighbor interpolation assign for every point in the new grid (750 times 750) the

intensity of the closest pixel (nearest neighbor) from the oldoriginal grid (500 times 500)

This technique has the tendency to produce undesirable effects like severe distortion of

straight edges
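A minimal sketch of nearest-neighbor resampling under the grid-overlay view described above (illustrative Python/NumPy, not the course's implementation):

    import numpy as np

    def zoom_nearest(f, M_new, N_new):
        # Lay an M_new x N_new grid over the original M x N image and copy,
        # for each new grid point, the closest pixel of the original grid.
        M, N = f.shape
        rows = np.arange(M_new) * M // M_new
        cols = np.arange(N_new) * N // N_new
        return f[rows[:, None], cols[None, :]]

    f = np.arange(25, dtype=np.uint8).reshape(5, 5)
    print(zoom_nearest(f, 8, 8).shape)    # (8, 8)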


Bilinear interpolation – assign to the new (x, y) location the intensity

$v(x, y) = a x + b y + c x y + d$

where the four coefficients are determined from the 4 equations in 4 unknowns that can be written using the 4 nearest neighbors of point (x, y).

Bilinear interpolation gives much better results than nearest neighbor interpolation with a

modest increase in computational effort
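On the unit square spanned by the 4 nearest neighbors, the fitted v(x, y) = ax + by + cxy + d reduces to the familiar weighted average, which is how it is usually coded. An illustrative sketch (our own naming; assumes the image is indexed as f[row, col]):

    import numpy as np

    def bilinear(f, x, y):
        # Evaluate v(x,y) = a*x + b*y + c*x*y + d fitted to the 4 nearest
        # neighbors of (x, y), written as the equivalent weighted sum.
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        x1 = min(x0 + 1, f.shape[0] - 1)
        y1 = min(y0 + 1, f.shape[1] - 1)
        dx, dy = x - x0, y - y0
        return ((1 - dx) * (1 - dy) * f[x0, y0] + dx * (1 - dy) * f[x1, y0] +
                (1 - dx) * dy * f[x0, y1] + dx * dy * f[x1, y1])

    f = np.array([[10.0, 20.0], [30.0, 40.0]])
    print(bilinear(f, 0.5, 0.5))    # 25.0, the average of the four corners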

Bicubic interpolation – assign to the new (x, y) location an intensity that involves the 16 nearest neighbors of the point:

$$v(x, y) = \sum_{i=0}^{3} \sum_{j=0}^{3} c_{ij}\, x^i y^j$$

The 16 coefficients $c_{ij}$ are obtained by solving a 16×16 linear system written using the intensity levels of the 16 nearest neighbors of (x, y).

Generally, bicubic interpolation does a better job of preserving fine detail than the bilinear technique. Bicubic interpolation is the standard used in commercial image-editing programs such as Adobe Photoshop and Corel Photopaint.

Figure 2(a) is the same as Fig. 1(d), which was obtained by reducing the resolution of the 1250 dpi image in Fig. 1(a) to 72 dpi (the size shrank from 3692 × 2812 to 213 × 162 pixels) and then zooming the reduced image back to its original size; nearest-neighbor interpolation was used both for shrinking and for zooming.

Figures 2(b) and (c) were generated using the same steps, but with bilinear and bicubic interpolation, respectively. Figures 2(d)-(f) were obtained by reducing the resolution from 1250 dpi to 150 dpi (instead of 72 dpi).

Fig. 2 – Interpolation examples for zooming and shrinking (nearest neighbor, bilinear, bicubic).

Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors:

$(x+1, y),\; (x-1, y)$ (vertical) and $(x, y+1),\; (x, y-1)$ (horizontal)

This set of pixels, called the 4-neighbors of p, is denoted by N4(p).

The 4 diagonal neighbors of p have coordinates

$(x+1, y+1),\; (x+1, y-1),\; (x-1, y+1),\; (x-1, y-1)$

and are denoted ND(p).

The horizontal, vertical, and diagonal neighbors together are called the 8-neighbors of p, denoted N8(p).

If (x, y) is on the border of the image, some of the neighbor locations in ND(p) and N8(p) fall outside the image.
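A small illustrative helper (our own naming) that returns N4(p), ND(p), and N8(p), discarding locations that fall outside an M × N image:

    def neighbors(p, M, N):
        x, y = p
        n4 = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        nd = [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]
        inside = lambda q: 0 <= q[0] < M and 0 <= q[1] < N
        n4, nd = [q for q in n4 if inside(q)], [q for q in nd if inside(q)]
        return n4, nd, n4 + nd      # N4(p), ND(p), N8(p)

    print(neighbors((0, 0), 5, 5))  # corner pixel: only 2 + 1 neighbors remain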

Adjacency, Connectivity, Regions, Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image, V ⊆ {0, 1} (V = {0} or V = {1});

- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, 1, …, 255}.

We consider 3 types of adjacency:

(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if $q \in N_4(p)$;

(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if $q \in N_8(p)$;

(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if

$q \in N_4(p)$, or

$q \in N_D(p)$ and the set $N_4(p) \cap N_4(q)$ has no pixels whose values are from V.

Mixed adjacency is a modification of 8-adjacency. It is introduced to eliminate the ambiguities that often arise when 8-adjacency is used. Consider the example:

Binary image, with V = {1}:

    0 1 1
    0 1 0
    0 0 1

The three pixels at the top (first line) of this arrangement show multiple (ambiguous) 8-adjacency, as indicated by the dashed lines in the original figure. This ambiguity is removed by using m-adjacency.

A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

$(x_0, y_0), (x_1, y_1), \ldots, (x_n, y_n)$

where $(x_0, y_0) = (x, y)$, $(x_n, y_n) = (s, t)$, and the pixels $(x_i, y_i)$ and $(x_{i-1}, y_{i-1})$ are adjacent for $i = 1, 2, \ldots, n$.

The length of the path is n. If $(x_0, y_0) = (x_n, y_n)$, the path is closed.

Depending on the type of adjacency considered, the paths are 4-, 8-, or m-paths.

Let S denote a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting only of pixels from S. S is a connected set if there is a path in S between any 2 pixels of S.

Let R be a subset of pixels in an image. R is a region of the image if R is a connected set.

Two regions R1 and R2 are said to be adjacent if $R_1 \cup R_2$ forms a connected set. Regions that are not adjacent are said to be disjoint. When referring to regions, only 4- and 8-adjacency are considered.

Suppose that an image contains K disjoint regions $R_k$, $k = 1, \ldots, K$, none of which touches the image border. Let $R_u = \bigcup_{k=1}^{K} R_k$, and let $(R_u)^c$ denote its complement.

We call all the points in $R_u$ the foreground of the image, and the points in $(R_u)^c$ the background of the image.

The boundary (border or contour) of a region R is the set of points of R that are adjacent to points in the complement of R, $(R)^c$. In other words, the border of a region is the set of pixels in the region that have at least one background neighbor. This definition is referred to as the inner border, to distinguish it from the notion of outer border, which is the corresponding border in the background.


Distance measures

For pixels p, q, and z, with coordinates (x, y), (s, t), and (v, w) respectively, D is a distance function (or metric) if:

(a) D(p, q) ≥ 0, with D(p, q) = 0 iff p = q;

(b) D(p, q) = D(q, p);

(c) D(p, z) ≤ D(p, q) + D(q, z).

The Euclidean distance between p and q is defined as

$D_e(p, q) = \left[(x - s)^2 + (y - t)^2\right]^{1/2}$

The pixels q for which $D_e(p, q) \le r$ are the points contained in a disk of radius r centered at (x, y).

The D4 distance (also called city-block distance) between p and q is defined as

$D_4(p, q) = |x - s| + |y - t|$

The pixels q for which $D_4(p, q) \le r$ form a diamond centered at (x, y); for example, the pixels with $D_4 \le 2$:

        2
      2 1 2
    2 1 0 1 2
      2 1 2
        2

The pixels with D4 = 1 are the 4-neighbors of (x, y).

The D8 distance (called the chessboard distance) between p and q is defined as

$D_8(p, q) = \max(|x - s|,\, |y - t|)$

The pixels q for which $D_8(p, q) \le r$ form a square centered at (x, y); for example, the pixels with $D_8 \le 2$:

    2 2 2 2 2
    2 1 1 1 2
    2 1 0 1 2
    2 1 1 1 2
    2 2 2 2 2

The pixels with D8 = 1 are the 8-neighbors of (x, y).

D4 and D8 distances are independent of any paths that might exist between p and q, because these distances involve only the coordinates of the points.
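The three metrics, as short illustrative functions:

    def d_e(p, q):   # Euclidean distance
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    def d_4(p, q):   # city-block distance
        return abs(p[0] - q[0]) + abs(p[1] - q[1])

    def d_8(p, q):   # chessboard distance
        return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

    p, q = (0, 0), (2, 1)
    print(d_e(p, q), d_4(p, q), d_8(p, q))   # 2.236..., 3, 2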


Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

Consider two 2×2 images:

$$\begin{bmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{bmatrix} \quad\text{and}\quad \begin{bmatrix} b_{11} & b_{12}\\ b_{21} & b_{22} \end{bmatrix}$$

Array product:

$$\begin{bmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{bmatrix} \odot \begin{bmatrix} b_{11} & b_{12}\\ b_{21} & b_{22} \end{bmatrix} = \begin{bmatrix} a_{11}b_{11} & a_{12}b_{12}\\ a_{21}b_{21} & a_{22}b_{22} \end{bmatrix}$$

Matrix product:

$$\begin{bmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} b_{11} & b_{12}\\ b_{21} & b_{22} \end{bmatrix} = \begin{bmatrix} a_{11}b_{11} + a_{12}b_{21} & a_{11}b_{12} + a_{12}b_{22}\\ a_{21}b_{11} + a_{22}b_{21} & a_{21}b_{12} + a_{22}b_{22} \end{bmatrix}$$

We assume array operations unless stated otherwise
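In NumPy the two products are written as follows (illustrative; in MATLAB, which the course otherwise uses, the corresponding operators are .* and *):

    import numpy as np

    a = np.array([[1, 2], [3, 4]])
    b = np.array([[5, 6], [7, 8]])
    print(a * b)   # array (element-wise) product: [[ 5 12] [21 32]]
    print(a @ b)   # matrix product:               [[19 22] [43 50]]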


Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether a method is linear or nonlinear. Consider an operator H that maps an input image to an output image:

$H[f(x, y)] = g(x, y)$

H is said to be a linear operator if

$H[a f_1(x, y) + b f_2(x, y)] = a\, H[f_1(x, y)] + b\, H[f_2(x, y)]$

for all scalars a, b and all images $f_1$, $f_2$.

Example of a nonlinear operator: the maximum value of the pixels of an image, $H[f] = \max f(x, y)$. Consider

$$f_1 = \begin{bmatrix} 0 & 2\\ 2 & 3 \end{bmatrix}, \qquad f_2 = \begin{bmatrix} 6 & 5\\ 4 & 7 \end{bmatrix}, \qquad a = 1, \; b = -1$$

$$\max\left(1\begin{bmatrix} 0 & 2\\ 2 & 3 \end{bmatrix} + (-1)\begin{bmatrix} 6 & 5\\ 4 & 7 \end{bmatrix}\right) = \max\begin{bmatrix} -6 & -3\\ -2 & -4 \end{bmatrix} = -2$$

whereas

$$1 \cdot \max\begin{bmatrix} 0 & 2\\ 2 & 3 \end{bmatrix} + (-1) \cdot \max\begin{bmatrix} 6 & 5\\ 4 & 7 \end{bmatrix} = 3 - 7 = -4$$

The two results differ, so the max operator is nonlinear.
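Verifying the computation numerically (illustrative sketch):

    import numpy as np

    f1 = np.array([[0, 2], [2, 3]])
    f2 = np.array([[6, 5], [4, 7]])
    a, b = 1, -1
    print(np.max(a * f1 + b * f2))           # -2
    print(a * np.max(f1) + b * np.max(f2))   # -4, so max is not linear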

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise:

$g(x, y) = f(x, y) + \eta(x, y)$

where f(x, y) is the noiseless image and η(x, y) is noise, uncorrelated and with zero average value.

For a random variable z with mean m, $E[(z - m)^2]$ is the variance ($E(\cdot)$ denotes the expected value). The covariance of two random variables $z_1$ and $z_2$ is defined as $E[(z_1 - m_1)(z_2 - m_2)]$; the two random variables are uncorrelated when their covariance is 0.

Objective: reduce noise by averaging a set of K noisy images $g_i(x, y)$ (a technique frequently used in image enhancement):

$$\bar{g}(x, y) = \frac{1}{K}\sum_{i=1}^{K} g_i(x, y)$$

If the noise satisfies the properties stated above, then

$$E\{\bar{g}(x, y)\} = f(x, y), \qquad \sigma^2_{\bar{g}(x, y)} = \frac{1}{K}\,\sigma^2_{\eta(x, y)}$$

where $E\{\bar{g}(x, y)\}$ is the expected value of $\bar{g}$, and $\sigma^2_{\bar{g}(x, y)}$ and $\sigma^2_{\eta(x, y)}$ are the variances of $\bar{g}$ and η, respectively. The standard deviation (square root of the variance) at any point of the average image is

$$\sigma_{\bar{g}(x, y)} = \frac{1}{\sqrt{K}}\,\sigma_{\eta(x, y)}$$

As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because $E\{\bar{g}(x, y)\} = f(x, y)$, this means that $\bar{g}(x, y)$ approaches f(x, y) as the number of noisy images used in the averaging process increases.

An important application of image averaging is in the field of astronomy, where imaging under very low light levels frequently causes sensor noise to render single images virtually useless for analysis. Figure 3(a) shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels. Figures 3(b)-(f) show the result of averaging 5, 10, 20, 50, and 100 such images, respectively.
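The $1/\sqrt{K}$ decay of the noise is easy to reproduce in a simulation (illustrative sketch; the constant image stands in for a noiseless scene):

    import numpy as np

    rng = np.random.default_rng(0)
    f = np.full((64, 64), 100.0)                 # noiseless stand-in image
    for K in (1, 5, 10, 20, 50, 100):
        g = [f + rng.normal(0, 64, f.shape) for _ in range(K)]
        g_bar = np.mean(g, axis=0)               # averaged image
        print(K, round(np.std(g_bar - f), 1))    # residual noise ~ 64/sqrt(K)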

Fig. 3 – (a) Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise; (b)-(f) results of averaging 5, 10, 20, 50, and 100 noisy images.

A frequent application of image subtraction is in the enhancement of differences between

images

Fig. 4 – (a) Infrared image of the Washington, D.C. area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images.

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in Figure 4(a); the two images seem almost the same. Figure 4(c) is the difference between images (a) and (b). Black (0) values in Figure 4(c) indicate locations where there is no difference between images (a) and (b).

Mask mode radiography: $g(x, y) = f(x, y) - h(x, y)$

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images called live images (denoted f(x, y)) of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium.

In g(x y) we can find the differences between h and f as enhanced detail

The images being captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries in the area being observed.

Fig. 5 – Angiography subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c), enhanced.


An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form

$g(x, y) = f(x, y)\, h(x, y)$

where f(x, y) is the "perfect image" and h(x, y) is the shading function.

When the shading function is known:

$$f(x, y) = \frac{g(x, y)}{h(x, y)}$$

When h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.
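A minimal simulation of shading correction by division, assuming the shading function h is known (illustrative names and values):

    import numpy as np

    rng = np.random.default_rng(1)
    f = rng.uniform(50, 200, (64, 64))        # the "perfect" image
    x = np.linspace(0.4, 1.0, 64)
    h = np.outer(x, x)                        # smooth multiplicative shading
    g = f * h                                 # what the sensor delivers
    f_hat = g / h                             # correction when h is known
    print(np.allclose(f_hat, f))              # True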

Fig. 6 – Shading correction: (a) shaded image of a tungsten filament, magnified approximately 130 times; (b) shading pattern; (c) corrected image.


Another use of image multiplication is in masking, also called region of interest (ROI) operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of an ROI can be arbitrary, although rectangular shapes are the most common.

Fig. 7 – (a) Digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b).

In practice most images are displayed using 8 bits, so image values are expected in the range [0, 255] (for TIFF and JPEG images, conversion to this range is automatic; the conversion depends on the system used). The difference of two images can produce values in the range [-255, 255], and the addition of two images values in the range [0, 510]. Many software packages simply set negative values to 0 and clip to 255 all values greater than 255. A more appropriate procedure: compute

$f_m = f - \min(f)$

which is nonnegative, and then

$$f_s = K\,\frac{f_m}{\max(f_m)}$$

which yields values in the full range [0, K] (K = 255 for 8-bit displays).
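The two-step rescaling, as an illustrative function:

    import numpy as np

    def rescale(f, K=255):
        # Map an arbitrary-range image (e.g. a difference image in
        # [-255, 255]) linearly onto [0, K].
        fm = f - f.min()                      # f_m = f - min(f), now >= 0
        return (K * fm / fm.max()).astype(np.uint8)

    d = np.array([[-255, 0], [100, 255]])
    print(rescale(d))                         # [[0 127] [177 255]]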


Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the intensity values of individual pixels: $s = T(z)$,

where z is the intensity of a pixel in the original image and s is the intensity of the corresponding pixel in the processed image.


Neighborhood operations

Let $S_{xy}$ denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y) based on the intensities of the points in $S_{xy}$. For example, if $S_{xy}$ is a rectangular neighborhood of size m × n centered at (x, y), we can assign the new intensity value by computing the average value of the pixels in $S_{xy}$:

$$g(x, y) = \frac{1}{mn} \sum_{(r, c) \in S_{xy}} f(r, c)$$

The net effect is to perform local blurring of the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
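An illustrative m × n local-averaging sketch (border pixels handled here by edge replication, one common choice; not prescribed by the course):

    import numpy as np

    def local_mean(f, m, n):
        fp = np.pad(f.astype(float), ((m // 2,) * 2, (n // 2,) * 2), mode="edge")
        g = np.zeros(f.shape, dtype=float)
        for i in range(m):                    # sum the m*n shifted copies
            for j in range(n):
                g += fp[i:i + f.shape[0], j:j + f.shape[1]]
        return g / (m * n)                    # average over S_xy

    f = np.random.randint(0, 256, (32, 32))
    print(local_mean(f, 3, 3).shape)          # (32, 32), a locally blurred image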


Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules).

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 an intensity interpolation that assigns intensity values to the spatially transformed pixels

The coordinate system transformation: $(x, y) = T[(v, w)]$

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image


$T[(v, w)] = (v/2, w/2)$ shrinks the original image to half its size in both spatial directions.

Affine transform:

$$[x \;\; y \;\; 1] = [v \;\; w \;\; 1]\,\mathbf{T} = [v \;\; w \;\; 1]\begin{bmatrix} t_{11} & t_{12} & 0\\ t_{21} & t_{22} & 0\\ t_{31} & t_{32} & 1 \end{bmatrix} \qquad \text{(AT)}$$

that is, $x = t_{11} v + t_{21} w + t_{31}$ and $y = t_{12} v + t_{22} w + t_{32}$.

This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3×3 matrix equal to the matrix product of the scaling, rotation, and translation matrices from Table 1.

(Table 1 – Affine transformation matrices: identity, scaling, rotation, translation, shear.)

The preceding transformations relocate pixels of an image to new locations. To complete the process, we have to assign intensity values to those locations; this task is done using intensity interpolation (nearest neighbor, bilinear, or bicubic interpolation).

In practice we can use equation (AT) in two basic ways:

- forward mapping: scan the pixels of the input image (v, w) and compute the new spatial location (x, y) of the corresponding pixel in the output image using (AT) directly.

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)


- inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image, $(v, w) = T^{-1}[(x, y)]$; then interpolate among the nearest input pixels to determine the intensity of the output pixel.

Inverse mappings are more efficient to implement than forward mappings and are used in numerous commercial implementations of spatial transformations (e.g. in MATLAB).
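An illustrative inverse-mapping implementation of (AT) with nearest-neighbor interpolation (row-vector convention as above; a sketch, not a production warper):

    import numpy as np

    def warp_inverse(f, T):
        # For every output pixel (x, y), find (v, w) = T^{-1}[(x, y)] in the
        # input image and copy the nearest input pixel.
        M, N = f.shape
        Tinv = np.linalg.inv(T)
        g = np.zeros_like(f)
        for x in range(M):
            for y in range(N):
                v, w, _ = np.array([x, y, 1.0]) @ Tinv
                v, w = int(round(v)), int(round(w))
                if 0 <= v < M and 0 <= w < N:
                    g[x, y] = f[v, w]
        return g

    f = np.random.randint(0, 256, (32, 32))
    T = np.array([[0.5, 0, 0], [0, 0.5, 0], [0, 0, 1.0]])   # a scaling matrix
    print(warp_inverse(f, T).shape)           # (32, 32)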


Image registration – align two or more images of the same scene.

In image registration we have available the input and output images, but the specific transformation that produced the output image from the input is generally unknown. The problem is to estimate the transformation function and then use it to register the two images. For example:

- it may be of interest to align (register) two or more images taken at approximately the same time, but using different imaging systems (e.g. an MRI scanner and a PET scanner);

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

One approach to solving the problem uses tie points (also called control points): corresponding points whose locations are known precisely in the input and reference images.


How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors. These objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points.

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points both on the input image and on the reference image. A simple model based on a bilinear approximation is given by

$$x = c_1 v + c_2 w + c_3 v w + c_4$$
$$y = c_5 v + c_6 w + c_7 v w + c_8$$

where (v, w) and (x, y) are the coordinates of corresponding tie points (stacking the equations for the 4 pairs gives an 8×8 linear system for the $c_i$).
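Solving for the coefficients, sketched with hypothetical tie-point coordinates; here the 8×8 system is split into two equivalent 4×4 systems, one per output coordinate:

    import numpy as np

    vw = np.array([(10, 10), (10, 40), (40, 10), (40, 40)], dtype=float)
    xy = np.array([(12, 11), (13, 43), (44, 12), (46, 45)], dtype=float)

    # One row [v, w, v*w, 1] per tie point.
    A = np.column_stack([vw[:, 0], vw[:, 1], vw[:, 0] * vw[:, 1], np.ones(4)])
    c_x = np.linalg.solve(A, xy[:, 0])        # c1..c4
    c_y = np.linalg.solve(A, xy[:, 1])        # c5..c8
    print(c_x, c_y)                           # the model maps every tie point exactly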


When 4 tie points are insufficient to obtain a satisfactory registration, an approach used frequently is to select a larger number of tie points and use them to subdivide the image into rectangular subregions, each marked by a group of 4 tie points; on each such subregion we apply the transformation model described above.

The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometric distortion.

(Figure: (a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c).)

Probabilistic Methods

$z_i$ = the values of all possible intensities in an M×N digital image, $i = 0, 1, \ldots, L-1$

$p(z_k)$ = the probability that the intensity level $z_k$ occurs in the given image:

$$p(z_k) = \frac{n_k}{MN}$$

where $n_k$ is the number of times that intensity $z_k$ occurs in the image (MN is the total number of pixels in the image). Note that

$$\sum_{k=0}^{L-1} p(z_k) = 1$$

The mean (average) intensity of an image is given by

$$m = \sum_{k=0}^{L-1} z_k\, p(z_k)$$

The variance of the intensities is

$$\sigma^2 = \sum_{k=0}^{L-1} (z_k - m)^2\, p(z_k)$$

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

$$\mu_n(z) = \sum_{k=0}^{L-1} (z_k - m)^n\, p(z_k)$$

(so that $\mu_0(z) = 1$, $\mu_1(z) = 0$, and $\mu_2(z) = \sigma^2$).

$\mu_3(z) > 0$: the intensities are biased to values higher than the mean;

$\mu_3(z) < 0$: the intensities are biased to values lower than the mean;

$\mu_3(z) = 0$: the intensities are distributed approximately equally on both sides of the mean.
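Computing m, σ², and μ₃ from the normalized histogram (illustrative sketch for an 8-bit image):

    import numpy as np

    f = np.random.randint(0, 256, (100, 100))
    L = 256
    p = np.bincount(f.ravel(), minlength=L) / f.size    # p(z_k) = n_k / MN
    z = np.arange(L, dtype=float)

    m = np.sum(z * p)                         # mean intensity
    var = np.sum((z - m) ** 2 * p)            # variance (contrast measure)
    mu3 = np.sum((z - m) ** 3 * p)            # third moment (histogram skew)
    print(m, np.sqrt(var), mu3)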

Fig. 1 – (a) Low contrast; (b) medium contrast; (c) high contrast.

Figure 1(a) – standard deviation 14.3 (variance = 204.5)

Figure 1(b) – standard deviation 31.6 (variance = 998.6)

Figure 1(c) – standard deviation 49.2 (variance = 2420.6)


Intensity Transformations and Spatial Filtering

$g(x, y) = T[f(x, y)]$

f(x, y) – input image; g(x, y) – output image; T – an operator on f, defined over a neighborhood of (x, y).

- The neighborhood of the point (x, y), $S_{xy}$, usually is rectangular, centered on (x, y), and much smaller in size than the image.


- spatial filtering: the operator T (the neighborhood and the operation applied on it) is called a spatial filter (spatial mask, kernel, template, or window).

When $S_{xy} = \{(x, y)\}$ (a single pixel), T becomes an intensity (gray-level or mapping) transformation function

$s = T(r)$

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2, left: T produces an output image of higher contrast than the original, by darkening the intensity levels below a value k and brightening the levels above k; this technique is called contrast stretching.

Fig. 2 – Intensity transformation functions: left – contrast stretching; right – thresholding function.


Figure 2, right: T produces a binary output image; a mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L-1] is obtained using the function $s = T(r) = L - 1 - r$.

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

(Figure: original image and its negative.)

Log Transformations: $s = T(r) = c\,\log(1 + r)$, where c is a constant and $r \ge 0$.

(Figure: some basic intensity transformation functions.)


This transformation maps a narrow range of low intensity values in the input into a wider range of output values. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values; the opposite is true of the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a) – intensity values in the range 0 to 1.5 × 10^6.

Figure 4(b) – log transformation of Figure 4(a) with c = 1 – range 0 to 6.2.

Fig. 4 – (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.


Power-Law (Gamma) Transformations

$s = T(r) = c\,r^{\gamma}$, where c and γ are positive constants (sometimes written $s = c\,(r + \varepsilon)^{\gamma}$ to account for a measurable output when the input is zero).

Plots of gamma transformation for different values of γ (c=1)


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. Curves with γ > 1 have the opposite effect of those generated with γ < 1.

$c = \gamma = 1$: identity transformation.

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
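The negative, log, and gamma transformations side by side, as illustrative NumPy one-liners on an 8-bit intensity scale (the scaling constants are one reasonable choice, not prescribed by the course):

    import numpy as np

    r = np.arange(256, dtype=float)           # all 8-bit intensities
    L = 256

    s_neg = (L - 1) - r                                  # negative
    s_log = (L - 1) / np.log(L) * np.log(1 + r)          # log, output in [0, 255]
    s_gam = (L - 1) * (r / (L - 1)) ** 0.4               # gamma, c = 1, gamma = 0.4
    print(s_neg[0], round(s_log[255], 1), round(s_gam[64], 1))   # 255.0 255.0 146.7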

(Figure: (a) aerial image; (b)-(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively.)

Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

Fig. 5 – (a) form of the transformation function; (b) a low-contrast image; (c) result of contrast stretching; (d) result of thresholding.

$$T(r) = \begin{cases} \dfrac{s_1}{r_1}\,r, & r \in [0, r_1]\\[2mm] \dfrac{s_2 - s_1}{r_2 - r_1}\,(r - r_1) + s_1, & r \in [r_1, r_2]\\[2mm] \dfrac{L - 1 - s_2}{L - 1 - r_2}\,(r - r_2) + s_2, & r \in [r_2, L - 1] \end{cases}$$


$r_1 = s_1$ and $r_2 = s_2$: identity transformation (no change).

$r_1 = r_2$, $s_1 = 0$, $s_2 = L - 1$: thresholding function.

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c): contrast stretching, obtained by setting the parameters $(r_1, s_1) = (r_{\min}, 0)$ and $(r_2, s_2) = (r_{\max}, L-1)$, where $r_{\min}$ and $r_{\max}$ denote the minimum and maximum gray levels in the image, respectively. Thus the transformation function stretched the levels linearly from their original range to the full range [0, L-1].

Figure 5(d): the thresholding function was used, with $(r_1, s_1) = (m, 0)$ and $(r_2, s_2) = (m, L-1)$, where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times
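An illustrative implementation of the piecewise function (assumes $0 < r_1 < r_2 < L-1$, matching the contrast-stretching setting above):

    import numpy as np

    def stretch(r, r1, s1, r2, s2, L=256):
        r = np.asarray(r, dtype=float)
        low = s1 / r1 * r
        mid = (s2 - s1) / (r2 - r1) * (r - r1) + s1
        high = (L - 1 - s2) / (L - 1 - r2) * (r - r2) + s2
        return np.where(r <= r1, low, np.where(r <= r2, mid, high))

    f = np.random.randint(80, 160, (32, 32))     # a low-contrast stand-in image
    g = stretch(f, f.min(), 0, f.max(), 255)     # (r1,s1)=(rmin,0), (r2,s2)=(rmax,L-1)
    print(f.min(), f.max(), "->", int(g.min()), int(g.max()))   # ... -> 0 255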


Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches to intensity-level slicing:

1. display in one value (white, for example) all the intensities in the range of interest, and in another (say, black) all other intensities;

2. brighten (or darken) the desired range of intensities but leave unchanged all other intensities in the image.


Figure 6 (left): aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band [A, B] near the top of the scale of intensities (the transformation highlights the range [A, B] and reduces all other intensities to a lower level). This type of enhancement produces a binary image,


which is useful for studying the shape of the flow of the contrast substance (to detect

blockages…).

In Figure 6 (right) the second technique was used (highlight the range [A, B], preserve all other intensities): a band of intensities in the mid-gray range around the mean intensity was set to black; the other intensities remain unchanged.

Fig 6 - Aortic angiogram and intensity sliced versions


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2.

This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit; plane 8 – the highest-order bit).


The binary image for the 8th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all intensities between 0 and 127 to 0, and all levels between 128 and 255 to 1. The bit-slicing technique is useful for analyzing the relative importance of each bit in the image, which helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
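Extracting the planes with bit operations (illustrative sketch):

    import numpy as np

    f = np.random.randint(0, 256, (64, 64), dtype=np.uint8)

    # Plane k (k = 1..8) is the k-th bit of every pixel, a binary image.
    planes = [(f >> (k - 1)) & 1 for k in range(1, 9)]

    # Plane 8 equals thresholding at 128, as described above:
    print(np.array_equal(planes[7], (f >= 128).astype(np.uint8)))   # True

    # Crude reconstruction from the two highest planes (compression idea):
    approx = (planes[7] << 7) + (planes[6] << 6)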


(Figure: all the inner squares have the same intensity, but they appear progressively darker as the background becomes lighter.)


Optical illusions


(Figure: color-perception illusion – Romanian color names printed in mismatched ink colors: ALBASTRU (blue), VERDE (green), GALBEN (yellow), ROSU (red), PORTOCALIU (orange), VISINIU (burgundy), ALB (white).)


Light

- Achromatic or monochromatic light: light that is void of color; the only attribute of such light is its intensity, or amount. The term gray level is used to describe monochromatic intensity, because it ranges from black, to grays, and finally to white.

- Chromatic light spans the electromagnetic energy spectrum from approximately 0.43 to 0.79 μm.

Quantities that describe the quality of a chromatic light source:

- radiance: the total amount of energy that flows from the light source, usually measured in watts (W);

- luminance: measured in lumens (lm), gives a measure of the amount of energy an observer perceives from a light source.


For example, light emitted from a source operating in the far infrared region of the spectrum could have significant energy (radiance), but an observer would hardly perceive it: its luminance would be almost zero.

- brightness: a subjective descriptor of light perception that is practically impossible to measure; it embodies the achromatic notion of intensity and is one of the key factors in describing color sensation.


The physical meaning of the values f(x, y) is determined by the source of the image.

For an image generated from a physical process, f(x, y) is proportional to the energy radiated by the physical source.

f(x, y) is characterized by two components:

- i(x, y) = the illumination component: the amount of source illumination incident on the scene being viewed;

- r(x, y) = the reflectance component: the amount of illumination reflected by the objects in the scene;

$$f(x, y) = i(x, y)\, r(x, y), \qquad 0 < i(x, y) < \infty, \quad 0 < r(x, y) < 1$$


r(x, y) = 0: total absorption; r(x, y) = 1: total reflectance.

i(x, y) – determined by the illumination source;

r(x, y) – determined by the characteristics of the imaged objects.



Optical illusions


(Color-word slide, originally in Romanian with English glosses: ALBASTRU = BLUE, VERDE = GREEN, GALBEN = YELLOW, ROSU = RED, PORTOCALIU = ORANGE, GALBEN = YELLOW, VISINIU = BURGUNDY, ALB = WHITE.)


Light
- achromatic (or monochromatic) light: light that is void of color; the only attribute of such light is its intensity, or amount. The term gray level is used to describe monochromatic intensity because it ranges from black, to grays, and finally to white.
- chromatic light spans the electromagnetic energy spectrum from approximately 0.43 to 0.79 μm.
Quantities that describe the quality of a chromatic light source:
- radiance: the total amount of energy that flows from the light source; it is usually measured in watts (W).
- luminance: measured in lumens (lm); gives a measure of the amount of energy an observer perceives from a light source.


For example, light emitted from a source operating in the far infrared region of the spectrum could have significant energy (radiance), but an observer would hardly perceive it; its luminance would be almost zero.
- brightness: a subjective descriptor of light perception that is practically impossible to measure. It embodies the achromatic notion of intensity and is one of the key factors in describing color sensation.


The physical meaning of f is determined by the source of the image:
f : D → ℝ,  (x, y) ↦ f(x, y)

For an image generated from a physical process, f(x, y) is proportional to the energy radiated by the physical source. f(x, y) is characterized by two components:
i(x, y) = the illumination component: the amount of source illumination incident on the scene being viewed;
r(x, y) = the reflectance component: the amount of illumination reflected by the objects in the scene.

f(x, y) = i(x, y) r(x, y),  with  0 < i(x, y) < ∞  and  0 < r(x, y) < 1


r(x, y) = 0: total absorption; r(x, y) = 1: total reflectance.
i(x, y) is determined by the illumination source; r(x, y) is determined by the characteristics of the imaged objects.

In practice, the intensity l = f(x, y) lies in a range
L_min ≤ l ≤ L_max,  where L_min = i_min r_min and L_max = i_max r_max
The interval [L_min, L_max] is called the gray (or intensity) scale. Typical indoor values, without additional illumination: L_min ≈ 10, L_max ≈ 1000. Common practice is to shift this interval to [0, L−1], where l = 0 is considered black and l = L−1 is considered white.


Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed

scene

Converting a continuous image f to digital form:
- digitizing the coordinate values (x, y) is called sampling;
- digitizing the amplitude values f(x, y) is called quantization.

Continuous image projected onto a sensor array (left); result of image sampling and quantization (right).

Representing Digital Images
(x, y), x = 0, 1, ..., M−1, y = 0, 1, ..., N−1, are spatial variables or spatial coordinates. The digital image can be written as an M × N array:

f(x, y) = [ f(0,0)    f(0,1)    ...  f(0,N−1)
            f(1,0)    f(1,1)    ...  f(1,N−1)
            ...
            f(M−1,0)  f(M−1,1)  ...  f(M−1,N−1) ]

Each element of this array is an image element, or pixel. Equivalently, in matrix notation,

A = [a_ij],  i = 0, ..., M−1,  j = 0, ..., N−1,  with a_ij = f(x = i, y = j) = f(i, j)

f(0,0) is the upper left corner of the image.


M and N are positive integers; the number of intensity levels is L = 2^k, and a_ij ∈ [0, L−1].
Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system. The upper limit is determined by saturation, the lower limit by noise.

Number of bits required to store a digitized image:
b = M × N × k;  for M = N,  b = N² k
When an image can have 2^k intensity levels, the image is referred to as a k-bit image; an image with 256 discrete intensity values is an 8-bit image.
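For example, storing a 1024 × 1024 image with k = 8 bits per pixel requires b = 1024 × 1024 × 8 = 8,388,608 bits, i.e., 1,048,576 bytes (1 MB).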


Spatial and Intensity Resolution
Spatial resolution: the smallest discernible detail in an image.
Measures: line pairs per unit distance; dots (pixels) per unit distance.
Image resolution = the largest number of discernible line pairs per unit distance (e.g., 100 line pairs per mm).
Dots per unit distance is the measure commonly used in printing and publishing; in the US it is expressed in dots per inch (dpi). (Newspapers are printed at 75 dpi, glossy brochures at 175 dpi.)
Intensity resolution: the smallest discernible change in intensity level.
The number of intensity levels (L) is determined by hardware considerations: L = 2^k, most commonly with k = 8. Intensity resolution in practice is given by k, the number of bits used to quantize intensity.

Fig. 1 - Reducing spatial resolution: 1250 dpi (upper left), 300 dpi (upper right), 150 dpi (lower left), 72 dpi (lower right).
Reducing the number of gray levels: 256, 128, 64, 32.
Reducing the number of gray levels: 16, 8, 4, 2.

Image Interpolation - used in zooming, shrinking, rotating, and geometric corrections.
Shrinking and zooming (image resizing) are image resampling methods.

Interpolation is the process of using known data to estimate values at unknown locations

Suppose we have an image of size 500 × 500 pixels that has to be enlarged 1.5 times, to 750 × 750 pixels. One way to do this is to create an imaginary 750 × 750 grid with the same spacing as the original and then shrink it so that it fits exactly over the original image. The pixel spacing in the 750 × 750 grid will be less than in the original image.
Problem: assignment of intensity levels in the new 750 × 750 grid.

Nearest neighbor interpolation: assign to every point in the new grid (750 × 750) the intensity of the closest pixel (nearest neighbor) from the old/original grid (500 × 500). This technique has the tendency to produce undesirable artifacts, such as severe distortion of straight edges.


Bilinear interpolation: assign to the new (x, y) location the intensity
v(x, y) = a x + b y + c x y + d

where the four coefficients are determined from the 4 equations in 4 unknowns that can

be written using the 4 nearest neighbors of point (x y)

Bilinear interpolation gives much better results than nearest neighbor interpolation with a

modest increase in computational effort
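As an illustration (not from the original slides), a minimal NumPy sketch of bilinear interpolation at a single real-valued location (x, y); the function name and the border handling are our own choices:

import numpy as np

def bilinear(img, x, y):
    # 4 nearest integer neighbors surrounding (x, y)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, img.shape[0] - 1)
    y1 = min(y0 + 1, img.shape[1] - 1)
    dx, dy = x - x0, y - y0
    # weighted average of the 4 neighbors; equivalent to fitting
    # v(x, y) = a*x + b*y + c*x*y + d to the 4 intensities
    return ((1 - dx) * (1 - dy) * img[x0, y0] + dx * (1 - dy) * img[x1, y0]
            + (1 - dx) * dy * img[x0, y1] + dx * dy * img[x1, y1])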

Bicubic interpolation: assign to the new (x, y) location an intensity that involves the 16 nearest neighbors of the point:

v(x, y) = Σ_{i=0}^{3} Σ_{j=0}^{3} c_ij x^i y^j

The sixteen coefficients c_ij are obtained by solving a 16×16 linear system written using the intensity levels of the 16 nearest neighbors of (x, y).


Generally, bicubic interpolation does a better job of preserving fine detail than the bilinear technique. Bicubic interpolation is the standard used in commercial image editing programs, such as Adobe Photoshop and Corel PhotoPaint.

Figure 2(a) is the same as Fig. 1(d), which was obtained by reducing the resolution of the 1250 dpi image in Fig. 1(a) to 72 dpi (the size shrank from 3692 × 2812 to 213 × 162 pixels) and then zooming the reduced image back to its original size. To generate Fig. 1(d), nearest neighbor interpolation was used (both for shrinking and zooming).
Figures 2(b) and (c) were generated using the same steps but with bilinear and bicubic interpolation, respectively. Figures 2(d), (e), and (f) were obtained by reducing the resolution from 1250 dpi to 150 dpi (instead of 72 dpi).


Fig. 2 - Interpolation examples for zooming and shrinking (nearest neighbor, bilinear, bicubic).


Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors:
(x+1, y), (x−1, y), (x, y+1), (x, y−1)
This set of pixels, called the 4-neighbors of p, is denoted by N₄(p).
The 4 diagonal neighbors of p have coordinates
(x+1, y+1), (x+1, y−1), (x−1, y+1), (x−1, y−1)
and are denoted N_D(p).
The horizontal, vertical, and diagonal neighbors together are called the 8-neighbors of p, denoted N₈(p).

If (x, y) is on the border of the image, some of the neighbor locations in N_D(p) and N₈(p) fall outside the image.


Adjacency, Connectivity, Regions, Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image, V ⊆ {0, 1} (typically V = {1});
- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, 1, ..., 255}.

We consider 3 types of adjacency

(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if q ∈ N₄(p);
(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if q ∈ N₈(p);
(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if
q ∈ N₄(p), or
q ∈ N_D(p) and the set N₄(p) ∩ N₄(q) has no pixels whose values are from V.

Mixed adjacency is a modification of 8-adjacency. It is introduced to eliminate the ambiguities that often arise when 8-adjacency is used. Consider the following example:


Binary image example, with V = {1}:

0 1 1
0 1 0
0 0 1

(the arrangement is shown three times in the original figure: as given, with the ambiguous 8-adjacency paths drawn, and with the unique m-adjacency paths)

The three pixels at the top (first line) of the above example show multiple (ambiguous) 8-adjacency paths, as indicated by the dashed lines in the original figure. This ambiguity is removed by using m-adjacency.


A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

(x₀, y₀), (x₁, y₁), ..., (x_n, y_n)

where (x₀, y₀) = (x, y), (x_n, y_n) = (s, t), and the pixels (x_i, y_i) and (x_{i−1}, y_{i−1}) are adjacent for i = 1, 2, ..., n.
The length of the path is n. If (x₀, y₀) = (x_n, y_n), the path is closed.
Depending on the type of adjacency considered, the paths are 4-, 8-, or m-paths.

Let S denote a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting only of pixels from S.
S is a connected set if there is a path in S between any 2 pixels in S.

Let R be a subset of pixels in an image. R is a region of the image if R is a connected set.
Two regions R₁ and R₂ are said to be adjacent if R₁ ∪ R₂ forms a connected set. Regions that are not adjacent are said to be disjoint. When referring to regions, only 4- and 8-adjacency are considered.


Suppose that an image contains K disjoint regions R_k, k = 1, 2, ..., K, none of which touches the image border. Let R_u denote the union of all K regions,

R_u = R₁ ∪ R₂ ∪ ... ∪ R_K

and let (R_u)^c denote its complement. We call all the points in R_u the foreground of the image, and all the points in (R_u)^c the background of the image.

The boundary (border or contour) of a region R is the set of points that are adjacent to points in the complement of R, (R)^c. In other words, the border of a region is the set of pixels in the region that have at least one background neighbor. This definition is referred to as the inner border, to distinguish it from the notion of outer border, which is the corresponding border in the background.


Distance measures

For pixels p, q, and z, with coordinates (x, y), (s, t), and (v, w), respectively, D is a distance function (or metric) if:
(a) D(p, q) ≥ 0, with D(p, q) = 0 iff p = q
(b) D(p, q) = D(q, p)
(c) D(p, z) ≤ D(p, q) + D(q, z)
The Euclidean distance between p and q is defined as

D_e(p, q) = [ (x − s)² + (y − t)² ]^(1/2)

The pixels q for which D_e(p, q) ≤ r are the points contained in a disk of radius r centered at (x, y).


The D4 distance (also called city-block distance) between p and q is defined as

D₄(p, q) = |x − s| + |y − t|

The pixels q for which D₄(p, q) ≤ r form a diamond centered at (x, y); for example, the pixels with D₄(p, q) ≤ 2:

        2
      2 1 2
    2 1 0 1 2
      2 1 2
        2

The pixels with D₄ = 1 are the 4-neighbors of (x, y).

The D8 distance (called the chessboard distance) between p and q is defined as

D₈(p, q) = max(|x − s|, |y − t|)

The pixels q for which D₈(p, q) ≤ r form a square centered at (x, y); for example, the pixels with D₈(p, q) ≤ 2:

    2 2 2 2 2
    2 1 1 1 2
    2 1 0 1 2
    2 1 1 1 2
    2 2 2 2 2

The pixels with D₈ = 1 are the 8-neighbors of (x, y).

D₄ and D₈ distances are independent of any paths that might exist between p and q, because these distances involve only the coordinates of the points.
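A small sketch of the three distances in Python (assuming pixels are given as (x, y) tuples):

import numpy as np

def d_e(p, q):  # Euclidean distance
    return np.hypot(p[0] - q[0], p[1] - q[1])

def d4(p, q):   # city-block distance
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):   # chessboard distance
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

print(d_e((0, 0), (3, 4)), d4((0, 0), (3, 4)), d8((0, 0), (3, 4)))  # 5.0 7 4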


Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

Consider two 2×2 images (matrices)

A = [a₁₁ a₁₂; a₂₁ a₂₂],  B = [b₁₁ b₁₂; b₂₁ b₂₂]

Array product:

A .* B = [a₁₁b₁₁  a₁₂b₁₂; a₂₁b₂₁  a₂₂b₂₂]

Matrix product:

A B = [a₁₁b₁₁ + a₁₂b₂₁   a₁₁b₁₂ + a₁₂b₂₂; a₂₁b₁₁ + a₂₂b₂₁   a₂₁b₁₂ + a₂₂b₂₂]

We assume array operations unless stated otherwise
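In NumPy, for example, the two products are written differently (a quick sanity check, not course material):

import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
print(A * B)  # array (element-wise) product: [[ 5 12] [21 32]]
print(A @ B)  # matrix product:               [[19 22] [43 50]]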


Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether a method is linear or nonlinear. Consider an operator H whose input and output are images:

H[f(x, y)] = g(x, y)

H is said to be a linear operator if

H[a f₁(x, y) + b f₂(x, y)] = a H[f₁(x, y)] + b H[f₂(x, y)]

for all scalars a, b and all images f₁, f₂.
Example of a nonlinear operator: the maximum value of the pixels of an image,

H[f] = max { f(x, y) }

Consider

f₁ = [0 2; 2 3],  f₂ = [6 5; 4 7],  a = 1,  b = −1


max(a f₁ + b f₂) = max [−6 −3; −2 −4] = −2
whereas
a max(f₁) + b max(f₂) = 1 · 3 + (−1) · 7 = −4 ≠ −2,
so H is not a linear operator.
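The same computation can be checked numerically (a sketch; H is the max operator from the example above):

import numpy as np

H = lambda f: f.max()
f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 5], [4, 7]])
a, b = 1, -1
print(H(a * f1 + b * f2))     # -2
print(a * H(f1) + b * H(f2))  # -4, so H is not linear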

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise:

g(x, y) = f(x, y) + η(x, y)

where f(x, y) is the noiseless image and η(x, y) is noise, uncorrelated and with zero average value.
For a random variable z with mean m, E[(z − m)²] is the variance (E(·) denotes the expected value). The covariance of two random variables z₁ and z₂ is defined as E[(z₁ − m₁)(z₂ − m₂)]; the two random variables are uncorrelated when their covariance is 0.


Objective: reduce noise by averaging a set of noisy images g_i(x, y) (a technique frequently used in image enhancement):

ḡ(x, y) = (1/K) Σ_{i=1}^{K} g_i(x, y)

If the noise satisfies the properties stated above, we have

E{ḡ(x, y)} = f(x, y)   and   σ²_ḡ(x, y) = (1/K) σ²_η(x, y)

where E{ḡ(x, y)} is the expected value of ḡ, and σ²_ḡ(x, y) and σ²_η(x, y) are the variances of ḡ and η, respectively. The standard deviation (square root of the variance) at any point of the average image is

σ_ḡ(x, y) = (1/√K) σ_η(x, y)


As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because E{ḡ(x, y)} = f(x, y), this means that ḡ(x, y) approaches f(x, y) as the number of noisy images used in the averaging process increases.
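A small simulation of this effect (our own illustration; the constant image and noise level are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 100.0)                              # noiseless image
noisy = [f + rng.normal(0, 64, f.shape) for _ in range(100)]
for K in (1, 5, 100):
    g_bar = np.mean(noisy[:K], axis=0)                    # average of K noisy images
    print(K, np.std(g_bar - f))  # residual noise ~ 64 / sqrt(K)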

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis. Figure 3(a) shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels. Figures 3(b)-(f) show the results of averaging 5, 10, 20, 50, and 100 images, respectively.

Fig. 3 - (a) Image of galaxy pair NGC 3314 corrupted by additive Gaussian noise; (b)-(f) results of averaging 5, 10, 20, 50, and 100 noisy images.

A frequent application of image subtraction is in the enhancement of differences between

images

Fig. 4 - (a) Infrared image of the Washington, D.C. area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images.

Figure 4(b) was obtained by setting to zero the least significant bit of every pixel in Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between images (a) and (b); black (0) values in Figure 4(c) indicate locations where there is no difference between images (a) and (b).


Mask mode radiography:
g(x, y) = f(x, y) − h(x, y)

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images, called live images (denoted f(x, y)), of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium.
In g(x, y) we find the differences between h and f as enhanced detail. Because images are captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries in the area being observed.

Fig. 5 - Angiography subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c), enhanced.

An important application of image multiplication (and division) is shading correction. Suppose that an imaging sensor produces images in the form

g(x, y) = f(x, y) h(x, y)

where f(x, y) is the "perfect image" and h(x, y) is the shading function. When the shading function is known,

f(x, y) = g(x, y) / h(x, y)

When h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.
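A hedged sketch of the division step (the epsilon guard is our addition, to avoid dividing by near-zero shading values):

import numpy as np

def shading_correct(g, h, eps=1e-6):
    # f(x, y) = g(x, y) / h(x, y), with h clipped away from zero
    return g / np.maximum(h, eps)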


Fig. 6 - Shading correction: (a) shaded image of a tungsten filament, magnified approximately 130 times; (b) shading pattern; (c) corrected image.


Another use of image multiplication is in masking, also called region of interest (ROI), operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of the ROI can be arbitrary, although a rectangular shape is the most common.

Fig. 7 - (a) Digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b).


In practice, most images are displayed using 8 bits, so image values are expected in the range [0, 255] (for TIFF or JPEG images, conversion to this range is automatic and depends on the system used). The difference of two images can produce values in the range [−255, 255], and the addition of two images values in the range [0, 510]. Many software packages simply set negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure is to compute

f_m = f − min(f)
f_s = K f_m / max(f_m)

which yields f_s with values in the full range [0, K] (K = 255 for 8-bit images).
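A minimal sketch of this rescaling (the function name is ours):

import numpy as np

def rescale(f, K=255):
    fm = f - f.min()          # f_m = f - min(f)
    return K * fm / fm.max()  # f_s, with values in [0, K]

d = np.array([[-255.0, 0.0], [100.0, 255.0]])  # e.g., a difference image
print(rescale(d))                              # values now in [0, 255]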


Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations
- change the intensity values of individual pixels: s = T(z), where z is the intensity of a pixel in the original image and s is the intensity of the corresponding pixel in the processed image.


Neighborhood operations

Let S_xy denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y) based on the intensities of the points in S_xy. For example, if S_xy is a rectangular neighborhood of size m × n centered at (x, y), we can assign the new intensity value by computing the average value of the pixels in S_xy:

g(x, y) = (1 / (m n)) Σ_{(r, c) ∈ S_xy} f(r, c)

The net effect is to perform local blurring of the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
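A direct (unoptimized) sketch of this local averaging, assuming odd m and n and replicated borders:

import numpy as np

def local_mean(f, m, n):
    M, N = f.shape
    p = np.pad(f, ((m // 2, m // 2), (n // 2, n // 2)), mode='edge')
    g = np.empty((M, N))
    for x in range(M):
        for y in range(N):
            g[x, y] = p[x:x + m, y:y + n].mean()  # average over S_xy
    return g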


Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules).

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2. an intensity interpolation that assigns intensity values to the spatially transformed pixels.

The coordinate transformation: (x, y) = T[(v, w)], where
(v, w) are pixel coordinates in the original image and
(x, y) are pixel coordinates in the transformed image.


For example, T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform:

[x  y  1] = [v  w  1] T = [v  w  1] [ t₁₁  t₁₂  0
                                      t₂₁  t₂₂  0
                                      t₃₁  t₃₂  1 ]
that is,
x = t₁₁ v + t₂₁ w + t₃₁
y = t₁₂ v + t₂₂ w + t₃₂          (AT)

This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3×3 matrix equal to the matrix product of the scaling, rotation, and translation matrices from Table 1, as sketched below.
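A sketch of such a composition under the row-vector convention of (AT) (the matrix entries follow common textbook forms, but sign conventions for rotation vary):

import numpy as np

def scale(cx, cy):
    return np.array([[cx, 0, 0], [0, cy, 0], [0, 0, 1.0]])

def rotate(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1.0]])

def translate(tx, ty):
    return np.array([[1, 0, 0], [0, 1, 0], [tx, ty, 1.0]])

T = scale(2, 2) @ rotate(np.pi / 4) @ translate(10, 5)  # scale, then rotate, then move
x, y, _ = np.array([3, 4, 1.0]) @ T                     # [x y 1] = [v w 1] T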


Table 1 - Affine transformation matrices (scaling, rotation, translation, shear).


The preceding transformations relocate pixels of an image to new locations. To complete the process, we have to assign intensity values to those locations; this task is done using intensity interpolation (nearest neighbor, bilinear, or bicubic).

In practice we can use equation (AT) in two basic ways

forward mapping: scan the pixels of the input image and, for each (v, w), compute the spatial location (x, y) of the corresponding pixel in the new image using (AT) directly.

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)


inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image as (v, w) = T⁻¹[(x, y)]; then interpolate among the nearest input pixels to determine the intensity of the output pixel value.
Inverse mappings are more efficient to implement than forward mappings and are used in numerous commercial implementations of spatial transformations (MATLAB, for example).


Image registration: align two or more images of the same scene.

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more images taken at approximately the same time but with different imaging systems (e.g., an MRI scanner and a PET scanner);
- or to align images of a given location taken by the same instrument at different moments in time (e.g., satellite images).

The problem can be solved using tie points (also called control points): corresponding points whose locations are known precisely in the input and reference images.


How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors. These objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points.

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points, both on the input image and on the reference image. A simple model based on a bilinear approximation is given by

x = c₁ v + c₂ w + c₃ v w + c₄
y = c₅ v + c₆ w + c₇ v w + c₈

where (v, w) and (x, y) are the coordinates of corresponding tie points (the 4 tie-point pairs give an 8×8 linear system for the coefficients c_i).
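Because the x and y equations decouple, the 8×8 system can be solved as two 4×4 systems; a sketch with made-up tie-point coordinates:

import numpy as np

vw = np.array([[0, 0], [0, 10], [10, 0], [10, 10]], dtype=float)  # input tie points
xy = np.array([[1, 2], [2, 13], [12, 1], [14, 14]], dtype=float)  # reference tie points

A = np.column_stack([vw[:, 0], vw[:, 1], vw[:, 0] * vw[:, 1], np.ones(4)])
cx = np.linalg.solve(A, xy[:, 0])  # c1..c4:  x = c1 v + c2 w + c3 vw + c4
cy = np.linalg.solve(A, xy[:, 1])  # c5..c8:  y = c5 v + c6 w + c7 vw + c8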


When 4 tie points are insufficient to obtain a satisfactory registration, a frequently used approach is to select a larger number of tie points and to subdivide the image into rectangular regions marked by groups of 4 tie points; the transformation model described above is then applied to each subregion. The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometric distortion.

(a) Reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c).

Probabilistic Methods

z_i = the values of all possible intensities in an M × N digital image, i = 0, 1, ..., L−1
p(z_k) = the probability that intensity level z_k occurs in the given image:

p(z_k) = n_k / (M N)

where n_k is the number of times that intensity z_k occurs in the image (M N is the total number of pixels in the image). Note that

Σ_{k=0}^{L−1} p(z_k) = 1

The mean (average) intensity of an image is given by

m = Σ_{k=0}^{L−1} z_k p(z_k)


The variance of the intensities is

σ² = Σ_{k=0}^{L−1} (z_k − m)² p(z_k)

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.
The n-th moment of a random variable z about the mean is defined as

μ_n(z) = Σ_{k=0}^{L−1} (z_k − m)^n p(z_k)

(μ₀(z) = 1, μ₁(z) = 0, μ₂(z) = σ²)
μ₃(z) > 0: the intensities are biased to values higher than the mean
μ₃(z) < 0: the intensities are biased to values lower than the mean
μ₃(z) = 0: the intensities are distributed approximately equally on both sides of the mean

Fig. 1 - (a) Low contrast; (b) medium contrast; (c) high contrast.
Figure 1(a): standard deviation 14.3 (variance = 204.5)
Figure 1(b): standard deviation 31.6 (variance = 998.6)
Figure 1(c): standard deviation 49.2 (variance = 2420.6)
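These statistics follow directly from the normalized histogram; a sketch for an 8-bit image stored as a NumPy uint8 array:

import numpy as np

def intensity_stats(img, L=256):
    n = np.bincount(img.ravel(), minlength=L)  # n_k
    p = n / img.size                           # p(z_k)
    z = np.arange(L)
    m = (z * p).sum()                          # mean
    var = ((z - m) ** 2 * p).sum()             # variance (mu_2)
    mu3 = ((z - m) ** 3 * p).sum()             # third moment (skew direction)
    return m, var, mu3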


Intensity Transformations and Spatial Filtering

g(x, y) = T[f(x, y)]

where f(x, y) is the input image, g(x, y) is the output image, and T is an operator on f defined over a neighborhood of (x, y).
- the neighborhood of the point (x, y), S_xy, usually is rectangular, centered on (x, y), and much smaller in size than the image.


- spatial filtering: the operator T (the neighborhood together with the operation applied on it) is called a spatial filter (spatial mask, kernel, template, or window).
When S_xy has size 1 × 1 (a single point), T becomes an intensity (gray-level or mapping) transformation function,

s = T(r)

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2 (left): T produces an output image of higher contrast than the original by darkening the intensity levels below k and brightening the levels above k; this technique is called contrast stretching.
Fig. 2 - Intensity transformation functions: (left) contrast stretching; (right) thresholding function.

Figure 2 (right): T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L−1] is obtained using the function
s = T(r) = L − 1 − r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Original image (left); negative image (right).

Log Transformations:
s = T(r) = c log(1 + r),  where c is a constant and r ≥ 0

Fig. 3 - Some basic intensity transformation functions.


This transformation maps a narrow range of low intensity values in the input into a wider range of output values. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values; the opposite is true for the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a): intensity values in the range 0 to 1.5 × 10⁶.
Figure 4(b): log transformation of Figure 4(a) with c = 1; range 0 to 6.2.
Fig. 4 - (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.


Power-Law (Gamma) Transformations

s = T(r) = c r^γ,  where c and γ are positive constants
(sometimes written s = c (r + ε)^γ, with an offset ε accounting for a measurable output when the input is zero)

Plots of gamma transformation for different values of γ (c=1)


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. The curves with γ > 1 have the opposite effect of those generated with γ < 1.
c = γ = 1: the identity transformation.

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
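A sketch of the log and gamma point transforms on intensities scaled to [0, 1] (the scaling choice is ours):

import numpy as np

def log_transform(r, c=1.0):
    return c * np.log1p(r)            # s = c log(1 + r)

def gamma_transform(r, c=1.0, gamma=1.0):
    return c * np.power(r, gamma)     # s = c r^gamma

r = np.linspace(0, 1, 5)
print(gamma_transform(r, gamma=0.4))  # gamma < 1: expands dark values
print(gamma_transform(r, gamma=2.5))  # gamma > 1: compresses dark values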

(a) Aerial image; (b)-(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively.

Piecewise-Linear Transformation Functions
Contrast stretching
- a process that expands the range of intensity levels in an image so that it spans the full intensity range of the recording medium or display device.

Fig. 5 - (a) Piecewise-linear transformation function; (b) 8-bit low-contrast image; (c) result of contrast stretching; (d) result of thresholding.


s = T(r) =
    (s₁ / r₁) r,                                    if r ∈ [0, r₁]
    ((s₂ − s₁) / (r₂ − r₁)) (r − r₁) + s₁,          if r ∈ (r₁, r₂]
    ((L − 1 − s₂) / (L − 1 − r₂)) (r − r₂) + s₂,    if r ∈ (r₂, L − 1]


If r₁ = s₁ and r₂ = s₂, T is the identity transformation (no change).
If r₁ = r₂, s₁ = 0, and s₂ = L − 1, T is a thresholding function.

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c): contrast stretching obtained by setting (r₁, s₁) = (r_min, 0) and (r₂, s₂) = (r_max, L − 1), where r_min and r_max denote the minimum and maximum gray levels in the image, respectively. The transformation function thus stretched the levels linearly from their original range to the full range [0, L − 1].
Figure 5(d): the thresholding function was used, with (r₁, s₁) = (m, 0) and (r₂, s₂) = (m, L − 1), where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times
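Since T(r) is piecewise linear through (0, 0), (r₁, s₁), (r₂, s₂), and (L−1, L−1), it can be sketched with a single interpolation call (assuming 0 < r₁ ≤ r₂ < L−1):

import numpy as np

def contrast_stretch(r, r1, s1, r2, s2, L=256):
    return np.interp(r, [0, r1, r2, L - 1], [0, s1, s2, L - 1])

img = np.array([[90, 100], [110, 120]], dtype=float)
print(contrast_stretch(img, img.min(), 0, img.max(), 255))  # full-range stretch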


Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1. display in one value (white, for example) all the intensities in the range of interest, and in another (say, black) all other intensities (Figure 3.11(a));
2. brighten (or darken) the desired range of intensities but leave unchanged all other intensities in the image (Figure 3.11(b)).


Figure 6 (left): aortic angiogram near the kidney. The purpose of intensity slicing here is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the intensity scale; this type of enhancement produces a binary image.
(The corresponding transformation functions: one highlights intensity range [A, B] and reduces all other intensities to a lower level; the other highlights range [A, B] and preserves all other intensities.)


The binary result is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example).
In Figure 6 (right), the second technique was used: a band of intensities around the mean intensity of the image was set to black, while the other intensities remain unchanged.

Fig. 6 - Aortic angiogram and intensity-sliced versions.
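A sketch of the two slicing approaches (function names and highlight values are our own):

import numpy as np

def slice_binary(img, A, B, high=255, low=0):
    # approach 1: binary output
    return np.where((img >= A) & (img <= B), high, low)

def slice_preserve(img, A, B, value=255):
    # approach 2: highlight [A, B], preserve all other intensities
    out = img.copy()
    out[(img >= A) & (img <= B)] = value
    return out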

Bit-plane slicing
For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2. This technique highlights the contribution made to the total image appearance by each of the bits: an 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 containing the lowest-order bits, plane 8 the highest-order bits).
The binary image for the 8th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding transformation function that maps all intensities between 0 and 127 to 0 and all levels between 128 and 255 to 1.
Bit-plane slicing is useful for analyzing the relative importance of each bit in the image, which helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
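A sketch of bit-plane extraction for an 8-bit image (plane 8 reproduces the 128-threshold described above):

import numpy as np

def bit_plane(img, k):
    # k = 1 (lowest-order bit) ... 8 (highest-order bit)
    return (img >> (k - 1)) & 1

img = np.array([[200, 100], [50, 25]], dtype=np.uint8)
print(bit_plane(img, 8))  # [[1 0] [0 0]] - thresholds at 128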

  • DIP 1 2017
  • DIP 02 (2017)
Page 62: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image ProcessingDigital Image Processing

Week 1Week 1

ALBASTRU BLUEVERDE GREENGALBEN YELLOW ROSU REDPORTOCALIU ORANGEGALBEN YELLOWVISINIU BURGUNDYALB WHITE

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Lightbullachromatic or monochromatic light - light that is void of color the attribute of such light is its intensity or amount

gray level is used to describe monochromatic intensitybecause it ranges from black to grays and to white bull chromatic light spans the electromagnetic energy spectrumfrom approximately 043 to 079 μm

quantities that describe the quality of a chromatic light source radiance

the total amount of energy that flows from the light source and it isusually measured in watts (W) luminance

measured in lumens (lm) gives a measure of the amount of energy an observer perceives from a light source

Digital Image ProcessingDigital Image Processing

Week 1Week 1

For example light emitted from a source operating in the far infraredregion of the spectrum could have significant energy (radiance) butan observer would hardly perceive it its luminance would be almostzero

brightnessa subjective descriptor of light perception that is practicallyimpossible to measure It embodies the achromatic notion ofintensity and is one of the key factors in describing color sensation

Digital Image ProcessingDigital Image Processing

Week 1Week 1

the physical meaning is determined by the source of the image

( )f D f x y

Image generated from a physical process f(xy) ndash proportional to the energy radiated by the physical source

f(xy) ndash characterized by two components

i(xy) = illumination component the amount of source illumination incident on the scene being viewed

r(xy) = reflectance component the amount of illumination reflected by the objects in the scene

( ) ( ) ( )

0 ( ) 0 ( ) 1

f x y i x y r x y

i x y r x y

Digital Image ProcessingDigital Image Processing

Week 1Week 1

r(xy)=0 - total absorption r(xy)=1 - total reflectance

i(xy) ndash determined by the illumination source

r(xy) ndash determined by the characteristics of the imaged objects

is called gray (or intensity) scale

In practice

min 0 0 max min min min max max max( ) L l f x y L L i r L i r

indoor values without additional illuminationmin max10 1000L L

black whitemin max0 1 0 1 0 1L L L L l l L

min maxL L

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image Processing

Week 1

Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed

scene

converting a continuous image f to digital form

- digitizing (x y) is called sampling

- digitizing f(x y) is called quantization

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Continuous image projected onto a sensor array Result of image sampling and quantization

Digital Image Processing

Week 1

Representing Digital Images (xy) x = 01hellipM-1 y = 01hellipN-1 ndash spatial variables or spatial coordinates

(00) (01) (0 1)(10) (11) (1 1)

( )

( 10) ( 11) ( 1 1)

f f f Nf f f N

f x y

f M f M f M N

image element pixel

00 01 0 1

10 11 1 1

10 11 1 1

( ) ( )

N

i jN M N

i j

M M M N

a a aa f x i y j f i ja a a

Aa

a a a

f(00) ndash the upper left corner of the image

Digital Image Processing

Week 1

M N ge 0 L=2k

[0 1]i j i ja a L

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system Upper limit ndash determined by saturation lower limit - noise

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Number of bits required to store a digitized image

for 2 b M N k M N b N k

When an image can have 2k intensity levels the image is referred as a k-bit image 256 discrete intensity values ndash 8-bit image

Digital Image Processing

Week 1

Spatial and Intensity Resolution Spatial resolution ndash the smallest discernible detail in an image

Measures line pairs per unit distance dots (pixels) per unit distance

Image resolution = the largest number of discernible line pairs per unit distance

(eg 100 line pairs per mm)

Dots per unit distance are commonly used in printing and publishing

In US the measure is expressed in dots per inch (dpi)

(newspapers are printed with 75 dpi glossy brochures at 175 dpi)

Intensity resolution ndash the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations

L=2k ndash most common k = 8

Intensity resolution in practice is given by k (number of bits used to quantize intensity)

Digital Image Processing

Week 1

Fig1 Reducing spatial resolution 1250 dpi(upper left) 300 dpi (upper right)

150 dpi (lower left) 72 dpi (lower right)

Digital Image Processing

Week 1

Reducing the number of gray levels 256 128 64 32

Digital Image Processing

Week 1

Reducing the number of gray levels 16 8 4 2

Digital Image Processing

Week 1

Image Interpolation - used in zooming shrinking rotating and geometric corrections

Shrinking zooming ndash image resizing ndash image resampling methods

Interpolation is the process of using known data to estimate values at unknown locations

Suppose we have an image of size 500 times 500 pixels that has to be enlarged 15 times to

750times750 pixels One way to do this is to create an imaginary 750 times 750 grid with the

same spacing as the original and then shrink it so that it fits exactly over the original

image The pixel spacing in the 750 times 750 grid will be less than in the original image

Problem assignment of intensity-level in the new 750 times 750 grid

Nearest neighbor interpolation assign for every point in the new grid (750 times 750) the

intensity of the closest pixel (nearest neighbor) from the oldoriginal grid (500 times 500)

This technique has the tendency to produce undesirable effects like severe distortion of

straight edges

Digital Image Processing

Week 1

Bilinear interpolation ndash assign for the new (x y) location the following intensity ( )v x y a x b y c x y d

where the four coefficients are determined from the 4 equations in 4 unknowns that can

be written using the 4 nearest neighbors of point (x y)

Bilinear interpolation gives much better results than nearest neighbor interpolation with a

modest increase in computational effort

Bicubic interpolation ndash assign for the new (x y) location an intensity that involves the 16

nearest neighbors of the point 3 3

0 0

( ) i ji j

i jv x y c x y

The coefficients cij are obtained solving a 16x16 linear system

intensity levels of the 16 nearest neighbors of 3 3

0 0

( )i ji j

i jc x y x y

Digital Image Processing

Week 1

Generally bicubic interpolation does a better job of preserving fine detail than the

bilinear technique Bicubic interpolation is the standard used in commercial image editing

programs such as Adobe Photoshop and Corel Photopaint

Figure 2 (a) is the same as Fig 1 (d) which was obtained by reducing the resolution of

the 1250 dpi in Fig 1(a) to 72 dpi (the size shrank from 3692 times 2812 to 213 times 162) and

then zooming the reduced image back to its original size To generate Fig 1(d) nearest

neighbor interpolation was used (both for shrinking and zooming)

Figures 2(b) and (c) were generated using the same steps but using bilinear and bicubic

interpolation respectively Figures 2(d)+(e)+(f) were obtained by reducing the resolution

from 1250 dpi to 150 dpi (instead of 72 dpi)

Digital Image Processing

Week 1

Fig 2 ndash Interpolation examples for zooming and shrinking (nearest neighbor linear bicubic)

Digital Image Processing

Week 1

Neighbors of a Pixel

A pixel p at coordinates (x y) has 4 horizontal and vertical neighbors horizontal vertical( 1 ) ( 1 ) ( 1) ( 1)x y x y x y x y

This set of pixels called the 4-neighbors of p denoted by N4 (p)

The 4 diagonal neighbors of p have coordinates ( 1 1) ( 1 1) ( 1 1) ( 1 1)x y x y x y x y

and are denoted ND(p)

The horizontal vertical and diagonal neighbors are called the 8-neighbors of p denoted

N8 (p)

If (xy) is on the border of the image some of the neighbor locations in ND(p) and N8(p)

fall outside the image

Digital Image Processing

Week 1

Adjacency Connectivity Regions Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image V 01 (V=0 V=1)

- in a gray-scale image with 256 possible gray-levels V can be any subset of 0255

We consider 3 types of adjacency

(a) 4-adjacency two pixels p and q with values from V are 4-adjacent if 4( )q N p

(b) 8-adjacency two pixels p and q with values from V are 8-adjacent if 8( )q N p

(c) m-adjacency (mixed adjacency) two pixels p and q with values from V are

m-adjacent if

4( )q N p or

( )Dq N p and the set 4 4( ) ( )N p N q has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency It is introduced to eliminate the

ambiguities that often arise when 8-adjacency is used Consider the example

Digital Image Processing

Week 1

binary image

0 1 1 0 1 1 0 1 1

1 0 1 0 0 1 0 0 1 0

0 0 1 0 0 1 0 0 1

V

The three pixels at the top (first line) in the above example show multiple (ambiguous)

8-adjacency as indicated by the dashed lines This ambiguity is removed by using

m-adjacency

Digital Image Processing

Week 1

A (digital) path (or curve) from pixel p with coordinates (xy) to q with coordinates (st)

is a sequence of distinct pixels with coordinates

and are adjacent 0 0 1 1

1 1

( ) ( ) ( ) ( ) ( )( ) ( ) 12

n n

i i i i

x y x y x y x y s tx y x y i n

The length of the path is n If 0 0( ) ( )n nx y x y the path is closed

Depending on the type of adjacency considered the paths are 4- 8- or m-paths

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image R is a region of the image if R is a connected set

Two regions R1 and R2 are said to be adjacent if 1 2R R form a connected set Regions

that are not adjacent are said to be disjoint When referring to regions only 4- and

8-adjacency are considered

Digital Image Processing

Week 1

Suppose that an image contains K disjoint regions 1 kR k K none of which

touches the image border

the complement of 1

( )K

cu k u u

k

R R R R

We call all the points in Ru the foreground of the image and the points in ( )cuR the

background of the image

The boundary (border or contour) of a region R is the set of points that are adjacent to

points in the complement of R (R)c The border of an image is the set of pixels in the

region that have at least one background neighbor This definition is referred to as the

inner border to distinguish it from the notion of outer border which is the corresponding

border in the background

Digital Image Processing

Week 1

Distance measures

For pixels p q and z with coordinates (xy) (st) and (vw) respectively D is a distance

function or metric if

(a) D(p q) ge 0 D(p q) = 0 iff p=q

(b) D(p q) = D(q p)

(c) D(p z) le D(p q) + D(q z)

The Euclidean distance between p and q is defined as 1

2 2 2 22( ) ( ) ( ) ( ) ( )eD p q x s y t x s y t

The pixels q for which ( )eD p q r are the points contained in a disk of radius r

centered at (x y)

Digital Image Processing

Week 1

The D4 distance (also called city-block distance) between p and q is defined as

4( ) | | | |D p q x s y t

The pixels q for which 4( )D p q r form a diamond centered at (xy)

4

22 1 2

2 2 1 0 1 22 1 2

2

D

The pixels with D4 = 1 are the 4-neighbors of (x y)

The D8 distance (called the chessboard distance) between p and q is defined as

8( ) max| | | |D p q x s y t

The pixels q for which 8( )D p q r form a square centered at (x y)

Digital Image Processing

Week 1

8

2 2 2 2 22 1 1 1 2

2 2 1 0 1 22 1 1 1 22 2 2 2 2

D

The pixels with D8 = 1 are the 8-neighbors of (x y)

D4 and D8 distances are independent of any paths that might exist between p and q

because these distances involve only the coordinates of the point

Digital Image Processing

Week 1

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

11 12 11 12

21 22 21 22

a a b ba a b b

Array product

11 12 11 12 11 11 12 12

21 22 21 22 21 21 22 21

a a b b a b a ba a b b a b a b

Matrix product

11 12 11 12 11 11 12 21 11 12 12 21

21 22 21 22 21 11 22 21 21 12 22 22

a a b b a b a b a b a ba a b b a b a b a b a b

We assume array operations unless stated otherwise

Digital Image Processing

Week 1

Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether it is

linear or nonlinear

( ) ( )H f x y g x y

H is said to be a linear operator if

images1 2 1 2

1 2

( ) ( ) ( ) ( )

H a f x y b f x y a H f x y b H f x y

a b f f

Example of nonlinear operator

the maximum value of the pixels of image max ( )H f f x y f

1 2

0 2 6 5 1 1

2 3 4 7f f a b

Digital Image Processing

Week 1

1 2

0 2 6 5 6 3max max 1 ( 1) max 2

2 3 4 7 2 4a f b f

0 2 6 51 max ( 1) max 3 ( 1)7 4

2 3 4 7

Arithmetic Operations in Image Processing

Let g(xy) denote a corrupted image formed by the addition of noise ( ) ( ) ( )g x y f x y x y

f(xy) ndash the noiseless image η(xy) the noise uncorrelated and has 0 average value

For a random variable z with mean m E[(z-m)2] is the variance (E( ∙ ) is the expected

value) The covariance of two random variables z1 and z2 is defined as E[(z1-m1) (z2-m2)]

The two random variables are uncorrelated when their covariance is 0

Digital Image Processing

Week 1

Objective reduce noise by adding a set of noisy images ( )ig x y (technique frequently

used in image enhancement)

1

1( ) ( )K

ii

g x y g x yK

If the noise satisfies the properties stated above we have

2 2( ) ( )

1( ) ( ) g x y x yE g x y f x yK

( ( ))E g x y is the expected value of g and and 2 2( ) ( )g x y x y are the variances of

and g respectively The standard deviation (square root of the variance) at any point in

the average image is

( ) ( )1

g x y x yK

Digital Image Processing

Week 1

As K increases the variability (as measured by the variance or the standard deviation) of

the pixel values at each location (x y) decreases Because ( ) ( )E g x y f x y this

means that ( )g x y approaches f(x y) as the number of noisy images used in the

averaging process increases

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis Figure 226(a) shows an 8-bit image in which corruption

was simulated by adding to it Gaussian noise with zero mean and a standard deviation of

64 intensity levels Figures 226(b)-(f) show the result of averaging 5 10 20 50 and 100

images respectively

Digital Image Processing

Week 1

Fig 3 Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner) Results of averaging 5 10 20 50

100 noisy images

a b c d e f

Digital Image Processing

Week 1

A frequent application of image subtraction is in the enhancement of differences between

images

(a) (b) (c)

Fig 4 (a) Infrared image of Washington DC area (b) Image obtained from (a) by setting to zero the least

significant bit of each pixel (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a) The two images seem almost the same Figure 4(c) is the difference between

images (a) and (b) Black (0) values in Figure (c) indicate locations where there is no

difference between images (a) and (b)

Digital Image Processing

Week 1

Mask mode radiography ( ) ( ) ( )g x y f x y h x y

h(x y) the mask is an X-ray image of a region of a patientrsquos body captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source The procedure consists of injecting an X-ray contrast medium into the patientrsquos

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed

Digital Image Processing

Week 1

a b c d Fig 5 ndash Angiography ndash subtraction example (a) ndash mask image (b) ndash live image (c) ndash difference between (a) and (b) (d) - image (c) enhanced

Digital Image Processing

Week 1

An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form ( ) ( ) ( )g x y f x y h x y

f(x y) ndash the ldquoperfect imagerdquo h(x y) ndash the shading function

When the shading function is known

( )( )( )

g x yf x yh x y

h(x y) is unknown but we have access to the imaging system we can obtain an

approximation to the shading function by imaging a target of constant intensity When the

sensor is not available often the shading pattern can be estimated from the image

Digital Image Processing

Week 1

(a) (b) (c)

Fig 6 Shading correction (a) ndash Shaded image of a tungsten filament magnified 130 times (b) - shading pattern (c) corrected image

Digital Image Processing

Week 1

Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1rsquos (white) in the ROI and 0rsquos elsewhere There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

(a) (b) (c)

Fig 7 (a) ndash digital dental X-ray image (b) - ROI mask for teeth with fillings (c) product of (a) and (b)

Digital Image Processing

Week 1

In practice most images are displayed using 8 bits the image values are in the range [0255] TIFF JPEG images ndash conversion to this range is automatic The conversion depends on the system used Difference of two images can produce image with values in the range [-255255] Addition of two images ndash range [0510] Many software packages simply set the negative values to 0 and set to 255 all values greater than 255 A more appropriate procedure compute

min( )mf f f

0 ( 255)max( )

ms

m

ff K K K

f

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Digital Image Processing

Week 1

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

( )

1( ) ( )xyr c S

g x y f r cm n

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules)

A geometric transformation consists of 2 basic operations:

1. a spatial transformation of coordinates;

2. intensity interpolation, which assigns intensity values to the spatially transformed pixels.

The coordinate system transformation: (x, y) = T[(v, w)]

(v, w) – pixel coordinates in the original image

(x, y) – pixel coordinates in the transformed image


T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform

[x  y  1] = [v  w  1] T,   where   T = [[t11, t12, 0], [t21, t22, 0], [t31, t32, 1]]

written out:

x = t11 v + t21 w + t31
y = t12 v + t22 w + t32                                    (AT)

This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3x3 matrix equal to the matrix product of the scaling, rotation, and translation matrices from Table 1.
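A sketch of (AT) in NumPy, composing scaling, rotation, and translation into a single matrix (row-vector convention as above; the parameter values in the example are illustrative only):

import numpy as np

def scale(cx, cy):
    return np.array([[cx, 0, 0], [0, cy, 0], [0, 0, 1.0]])

def rotate(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1.0]])

def translate(tx, ty):
    return np.array([[1, 0, 0], [0, 1, 0], [tx, ty, 1.0]])

# First scale by 1.5, then rotate by 30 degrees, then translate by (20, 30):
T = scale(1.5, 1.5) @ rotate(np.pi / 6) @ translate(20, 30)

v, w = 10.0, 5.0                         # an input pixel location
x, y, _ = np.array([v, w, 1.0]) @ T      # (AT): [x y 1] = [v w 1] T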


Table 1 – Affine transformations: the matrices for scaling, rotation, translation, and shear (table not reproduced in this text version).


The preceding transformations relocate pixels of an image to new locations. To complete the process, we have to assign intensity values to those locations; this task is done using intensity interpolation (nearest neighbor, bilinear, or bicubic interpolation).

In practice, we can use equation (AT) in two basic ways:

forward mapping: scan the pixels of the input image (v, w) and compute the new spatial location (x, y) of the corresponding pixel in the new image using (AT) directly.

Problems:

- intensity assignment when 2 or more pixels of the original image are transformed to the same location in the output image;

- some output locations may have no correspondent in the original image (no intensity assignment).


inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image as (v, w) = T^(-1)[(x, y)]; then interpolate among the nearest input pixels to determine the intensity of the output pixel value.

Inverse mappings are more efficient to implement than forward mappings and are used in numerous commercial implementations of spatial transformations (MATLAB, for example).
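A sketch of inverse mapping with nearest-neighbor interpolation (T is the 3x3 matrix of (AT) in the row-vector convention; output locations that map outside the input are left at 0):

import numpy as np

def warp_inverse(f, T, out_shape):
    # Scan output locations (x, y); pull intensity from (v, w) = T^(-1)[(x, y)].
    Tinv = np.linalg.inv(T)
    g = np.zeros(out_shape, dtype=f.dtype)
    for x in range(out_shape[0]):
        for y in range(out_shape[1]):
            v, w, _ = np.array([x, y, 1.0]) @ Tinv
            r, c = int(round(v)), int(round(w))    # nearest neighbor
            if 0 <= r < f.shape[0] and 0 <= c < f.shape[1]:
                g[x, y] = f[r, c]
    return g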


Image registration – aligning two or more images of the same scene.

In image registration we have available the input and output images, but the specific transformation that produced the output image from the input is generally unknown. The problem is to estimate the transformation function and then use it to register the two images:

- it may be of interest to align (register) two or more images taken at approximately the same time but using different imaging systems (an MRI scanner and a PET scanner, for example);

- or to align images of a given location taken by the same instrument at different moments in time (satellite images).

One way of solving the problem is to use tie points (also called control points), which are corresponding points whose locations are known precisely in the input and reference images.


How to select tie points?

- by selecting them interactively;

- by using algorithms that try to detect these points automatically;

- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors; these objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points.

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points both on the input image and on the reference image. A simple model based on a bilinear approximation is given by

x = c1 v + c2 w + c3 v w + c4
y = c5 v + c6 w + c7 v w + c8

where (v, w) and (x, y) are the coordinates of corresponding tie points (writing the two equations for all 4 pairs gives an 8x8 linear system for the coefficients ci).
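A sketch of estimating the eight coefficients from 4 tie-point pairs and applying the model (src holds the (v, w) points, dst the matching (x, y) points; the function names are illustrative):

import numpy as np

def fit_bilinear(src, dst):
    # src, dst: arrays of shape (4, 2) with matching tie points (v, w) -> (x, y)
    A, b = [], []
    for (v, w), (x, y) in zip(src, dst):
        A.append([v, w, v * w, 1, 0, 0, 0, 0]); b.append(x)
        A.append([0, 0, 0, 0, v, w, v * w, 1]); b.append(y)
    return np.linalg.solve(np.array(A), np.array(b))    # c1, ..., c8

def apply_bilinear(c, v, w):
    x = c[0] * v + c[1] * w + c[2] * v * w + c[3]
    y = c[4] * v + c[5] * w + c[6] * v * w + c[7]
    return x, y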


When 4 tie points are insufficient to obtain a satisfactory registration, a frequently used approach is to select a larger number of tie points and, using this new set, subdivide the image into rectangular subregions marked by groups of 4 tie points; on each subregion marked by 4 tie points we apply the transformation model described above.

The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometric distortion.


(a) – reference image; (b) – geometrically distorted image; (c) – registered image; (d) – difference between (a) and (c).


Probabilistic Methods

z_i = the values of all possible intensities in an M×N digital image, i = 0, 1, ..., L-1

p(z_k) = the probability that the intensity level z_k occurs in the given image:

p(z_k) = n_k / (M N)

where n_k is the number of times that intensity z_k occurs in the image (MN is the total number of pixels in the image); consequently

Σ_{k=0}^{L-1} p(z_k) = 1

The mean (average) intensity of an image is given by

m = Σ_{k=0}^{L-1} z_k p(z_k)


The variance of the intensities is

σ² = Σ_{k=0}^{L-1} (z_k - m)² p(z_k)

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

μ_n(z) = Σ_{k=0}^{L-1} (z_k - m)^n p(z_k)

(so μ_0(z) = 1, μ_1(z) = 0, and μ_2(z) = σ²(z))

μ_3(z) > 0: the intensities are biased to values higher than the mean;

μ_3(z) < 0: the intensities are biased to values lower than the mean;

μ_3(z) = 0: the intensities are distributed approximately equally on both sides of the mean.
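These quantities follow directly from the normalized histogram; a sketch (img assumed to be a uint8 array, so L = 256):

import numpy as np

def intensity_stats(img, L=256):
    nk = np.bincount(img.ravel(), minlength=L)    # histogram counts n_k
    p = nk / img.size                             # p(z_k) = n_k / (M N)
    z = np.arange(L)
    m = np.sum(z * p)                             # mean intensity
    var = np.sum((z - m) ** 2 * p)                # variance (2nd central moment)
    mu3 = np.sum((z - m) ** 3 * p)                # 3rd moment; its sign gives the bias
    return m, var, mu3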

Fig. 1 – (a) low contrast; (b) medium contrast; (c) high contrast.

Figure 1(a) – standard deviation 14.3 (variance = 204.5)

Figure 1(b) – standard deviation 31.6 (variance = 998.6)

Figure 1(c) – standard deviation 49.2 (variance = 2420.6)


Intensity Transformations and Spatial Filtering

g(x, y) = T[f(x, y)]

f(x, y) – input image; g(x, y) – output image; T – an operator on f, defined over a neighborhood of (x, y).

The neighborhood S_xy of the point (x, y) usually is rectangular, centered on (x, y), and much smaller in size than the image.


- spatial filtering: the operator T (the neighborhood together with the operation applied on it) is called a spatial filter (also: spatial mask, kernel, template, or window).

When S_xy reduces to the single point (x, y), T becomes an intensity (gray-level, or mapping) transformation function

s = T(r)

where s and r denote, respectively, the intensities of g and f at (x, y).

Figure 2, left: T produces an output image of higher contrast than the original by darkening the intensity levels below k and brightening the levels above k – this technique is called contrast stretching.

Fig. 2 – Intensity transformation functions: left – contrast stretching; right – thresholding function.


Figure 2, right: T produces a binary output image; a mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L-1] is obtained using the function

s = T(r) = L - 1 - r

- the equivalent of a photographic negative;

- a technique well suited for enhancing white or gray detail embedded in dark regions of an image.
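For an 8-bit image (L = 256) this is a one-line array operation; a sketch:

import numpy as np

def negative(img, L=256):
    # s = L - 1 - r, applied to every pixel
    return (L - 1 - img.astype(int)).astype(np.uint8)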


Original image (left) and its negative (right).


Log Transformations

s = T(r) = c log(1 + r),   c – a positive constant, r ≥ 0

[Figure: some basic intensity transformation functions]


This transformation maps a narrow range of low intensity values in the input into a wider range of output values. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values; the opposite is true of the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a) – intensity values in the range 0 to 1.5 × 10^6.

Figure 4(b) – log transformation of Figure 4(a) with c = 1 – range 0 to 6.2.

Fig. 4 – (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.
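A sketch of using the log transformation to display such a spectrum (c = 1; the final rescaling to [0, 255] reuses the scaling procedure given earlier):

import numpy as np

def log_display(r, c=1.0):
    s = c * np.log1p(r)                            # s = c log(1 + r), r >= 0
    return (255 * s / s.max()).astype(np.uint8)    # rescale for an 8-bit display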


Power-Law (Gamma) Transformations

s = T(r) = c r^γ,   c, γ – positive constants (sometimes written s = c (r + ε)^γ, with an offset ε accounting for the measuring device)

Plots of the gamma transformation for different values of γ (c = 1).


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. The curves with γ > 1 have the opposite effect of those generated with γ < 1; c = γ = 1 gives the identity transformation.

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
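A sketch of the transformation on a normalized 8-bit image (γ < 1 brightens, γ > 1 darkens):

import numpy as np

def gamma_transform(img, gamma, c=1.0):
    r = img / 255.0                        # normalize intensities to [0, 1]
    s = c * np.power(r, gamma)             # s = c r^gamma
    return np.clip(255 * s, 0, 255).astype(np.uint8)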


(a) – aerial image; (b)–(d) – results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively.


Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so that it spans the full intensity range of the recording tool or display device.

Fig. 5 – (a) piecewise-linear transformation function; (b) low-contrast image; (c) result of contrast stretching; (d) result of thresholding.


T(r) =

  (s1 / r1) r,                                     r ∈ [0, r1]

  ((s2 - s1) / (r2 - r1)) (r - r1) + s1,           r ∈ (r1, r2]

  ((L - 1 - s2) / (L - 1 - r2)) (r - r2) + s2,     r ∈ (r2, L - 1]


r1 = s1, r2 = s2: identity transformation (no change);

r1 = r2, s1 = 0, s2 = L - 1: thresholding function.

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c) – contrast stretching, obtained by setting r1 = r_min, s1 = 0, r2 = r_max, s2 = L - 1, where r_min and r_max denote the minimum and maximum gray levels in the image, respectively; the transformation function stretches the levels linearly from their original range to the full range [0, L-1].

Figure 5(d) – the thresholding function was used, with r1 = r2 = m, s1 = 0, s2 = L - 1, where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.
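A sketch of the full-range linear stretch used for Figure 5(c) (control points r1 = min, r2 = max, s1 = 0, s2 = L - 1):

import numpy as np

def contrast_stretch(img, L=256):
    r1, r2 = float(img.min()), float(img.max())
    if r1 == r2:
        return img.copy()                  # constant image: nothing to stretch
    s = (img.astype(float) - r1) * (L - 1) / (r2 - r1)
    return s.astype(np.uint8)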


Intensity-level slicing

- highlighting a specific range of intensities in an image.

There are two basic approaches to intensity-level slicing:

1. display in one value (white, for example) all the intensities in the range of interest, and in another (say, black) all other intensities;

2. brighten (or darken) the desired range of intensities, but leave all other intensities in the image unchanged.
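A sketch of both approaches (img assumed uint8; A ≤ B delimit the band of interest):

import numpy as np

def slice_binary(img, A, B, hi=255, lo=0):
    # Approach 1: one value inside [A, B], another value elsewhere
    return np.where((img >= A) & (img <= B), hi, lo).astype(np.uint8)

def slice_preserve(img, A, B, value=255):
    # Approach 2: set the band to a chosen value, leave the rest unchanged
    out = img.copy()
    out[(img >= A) & (img <= B)] = value
    return out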


Figure 6 (left) – aortic angiogram near the kidney area. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium.

Figure 6 (middle) shows the result of applying the first technique for a band near the top of the scale of intensities. This type of enhancement produces a binary image, which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example). The transformation used here highlights the intensity range [A, B] and reduces all other intensities to a lower level.

In Figure 6 (right) the second technique was used: a band of mid-gray intensities around the mean intensity was set to black, while the other intensities remain unchanged; this transformation highlights the range [A, B] and preserves all other intensities.

Fig. 6 – Aortic angiogram and intensity-sliced versions.


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2. This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit, plane 8 – the highest-order bit).


The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all intensities between 0 and 127 to 0, and all levels between 128 and 255 to 1. The bit-slicing technique is useful for analyzing the relative importance of each bit in the image – this helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
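A sketch that extracts all eight planes with bit operations (plane 1 is the least significant bit; note that plane 8 coincides with thresholding at 128, as described above):

import numpy as np

def bit_planes(img):
    # planes[k] holds bit k+1 of every pixel, as a binary image
    return [(img >> k) & 1 for k in range(8)]

# The highest-order plane equals a threshold at 128:
# np.array_equal(bit_planes(img)[7], (img >= 128).astype(img.dtype))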



a b c d Fig 5 ndash Angiography ndash subtraction example (a) ndash mask image (b) ndash live image (c) ndash difference between (a) and (b) (d) - image (c) enhanced

Digital Image Processing

Week 1

An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form ( ) ( ) ( )g x y f x y h x y

f(x y) ndash the ldquoperfect imagerdquo h(x y) ndash the shading function

When the shading function is known

( )( )( )

g x yf x yh x y

h(x y) is unknown but we have access to the imaging system we can obtain an

approximation to the shading function by imaging a target of constant intensity When the

sensor is not available often the shading pattern can be estimated from the image

Digital Image Processing

Week 1

(a) (b) (c)

Fig 6 Shading correction (a) ndash Shaded image of a tungsten filament magnified 130 times (b) - shading pattern (c) corrected image

Digital Image Processing

Week 1

Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1rsquos (white) in the ROI and 0rsquos elsewhere There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

(a) (b) (c)

Fig 7 (a) ndash digital dental X-ray image (b) - ROI mask for teeth with fillings (c) product of (a) and (b)

Digital Image Processing

Week 1

In practice most images are displayed using 8 bits the image values are in the range [0255] TIFF JPEG images ndash conversion to this range is automatic The conversion depends on the system used Difference of two images can produce image with values in the range [-255255] Addition of two images ndash range [0510] Many software packages simply set the negative values to 0 and set to 255 all values greater than 255 A more appropriate procedure compute

min( )mf f f

0 ( 255)max( )

ms

m

ff K K K

f

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Digital Image Processing

Week 1

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

( )

1( ) ( )xyr c S

g x y f r cm n

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation ( ) [( )]x y T v w

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image

Digital Image Processing

Week 1

[( )] ( )2 2v wT v w shrinks the original image half its size in both spatial directions

Affine transform

11 1211 21 31

21 2212 22 33

31 32

0[ 1] [ 1] [ 1] 0

1

t tx t v t w t

x y v w T v w t ty t v t w t

t t

(AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

Digital Image Processing

Week 1

inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image (v w) 1( ) ( )v w T x y

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 64: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image ProcessingDigital Image Processing

Week 1Week 1

For example light emitted from a source operating in the far infraredregion of the spectrum could have significant energy (radiance) butan observer would hardly perceive it its luminance would be almostzero

brightnessa subjective descriptor of light perception that is practicallyimpossible to measure It embodies the achromatic notion ofintensity and is one of the key factors in describing color sensation

Digital Image ProcessingDigital Image Processing

Week 1Week 1

the physical meaning is determined by the source of the image

( )f D f x y

Image generated from a physical process f(xy) ndash proportional to the energy radiated by the physical source

f(xy) ndash characterized by two components

i(xy) = illumination component the amount of source illumination incident on the scene being viewed

r(xy) = reflectance component the amount of illumination reflected by the objects in the scene

( ) ( ) ( )

0 ( ) 0 ( ) 1

f x y i x y r x y

i x y r x y

Digital Image ProcessingDigital Image Processing

Week 1Week 1

r(xy)=0 - total absorption r(xy)=1 - total reflectance

i(xy) ndash determined by the illumination source

r(xy) ndash determined by the characteristics of the imaged objects

is called gray (or intensity) scale

In practice

min 0 0 max min min min max max max( ) L l f x y L L i r L i r

indoor values without additional illuminationmin max10 1000L L

black whitemin max0 1 0 1 0 1L L L L l l L

min maxL L

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image Processing

Week 1

Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed

scene

converting a continuous image f to digital form

- digitizing (x y) is called sampling

- digitizing f(x y) is called quantization

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Continuous image projected onto a sensor array Result of image sampling and quantization

Digital Image Processing

Week 1

Representing Digital Images (xy) x = 01hellipM-1 y = 01hellipN-1 ndash spatial variables or spatial coordinates

(00) (01) (0 1)(10) (11) (1 1)

( )

( 10) ( 11) ( 1 1)

f f f Nf f f N

f x y

f M f M f M N

image element pixel

00 01 0 1

10 11 1 1

10 11 1 1

( ) ( )

N

i jN M N

i j

M M M N

a a aa f x i y j f i ja a a

Aa

a a a

f(00) ndash the upper left corner of the image

Digital Image Processing

Week 1

M N ge 0 L=2k

[0 1]i j i ja a L

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system Upper limit ndash determined by saturation lower limit - noise

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Number of bits required to store a digitized image

for 2 b M N k M N b N k

When an image can have 2k intensity levels the image is referred as a k-bit image 256 discrete intensity values ndash 8-bit image

Digital Image Processing

Week 1

Spatial and Intensity Resolution Spatial resolution ndash the smallest discernible detail in an image

Measures line pairs per unit distance dots (pixels) per unit distance

Image resolution = the largest number of discernible line pairs per unit distance

(eg 100 line pairs per mm)

Dots per unit distance are commonly used in printing and publishing

In US the measure is expressed in dots per inch (dpi)

(newspapers are printed with 75 dpi glossy brochures at 175 dpi)

Intensity resolution ndash the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations

L=2k ndash most common k = 8

Intensity resolution in practice is given by k (number of bits used to quantize intensity)

Digital Image Processing

Week 1

Fig1 Reducing spatial resolution 1250 dpi(upper left) 300 dpi (upper right)

150 dpi (lower left) 72 dpi (lower right)

Digital Image Processing

Week 1

Reducing the number of gray levels 256 128 64 32

Digital Image Processing

Week 1

Reducing the number of gray levels 16 8 4 2

Digital Image Processing

Week 1

Image Interpolation - used in zooming shrinking rotating and geometric corrections

Shrinking zooming ndash image resizing ndash image resampling methods

Interpolation is the process of using known data to estimate values at unknown locations

Suppose we have an image of size 500 times 500 pixels that has to be enlarged 15 times to

750times750 pixels One way to do this is to create an imaginary 750 times 750 grid with the

same spacing as the original and then shrink it so that it fits exactly over the original

image The pixel spacing in the 750 times 750 grid will be less than in the original image

Problem assignment of intensity-level in the new 750 times 750 grid

Nearest neighbor interpolation assign for every point in the new grid (750 times 750) the

intensity of the closest pixel (nearest neighbor) from the oldoriginal grid (500 times 500)

This technique has the tendency to produce undesirable effects like severe distortion of

straight edges

Digital Image Processing

Week 1

Bilinear interpolation ndash assign for the new (x y) location the following intensity ( )v x y a x b y c x y d

where the four coefficients are determined from the 4 equations in 4 unknowns that can

be written using the 4 nearest neighbors of point (x y)

Bilinear interpolation gives much better results than nearest neighbor interpolation with a

modest increase in computational effort

Bicubic interpolation ndash assign for the new (x y) location an intensity that involves the 16

nearest neighbors of the point 3 3

0 0

( ) i ji j

i jv x y c x y

The coefficients cij are obtained solving a 16x16 linear system

intensity levels of the 16 nearest neighbors of 3 3

0 0

( )i ji j

i jc x y x y

Digital Image Processing

Week 1

Generally bicubic interpolation does a better job of preserving fine detail than the

bilinear technique Bicubic interpolation is the standard used in commercial image editing

programs such as Adobe Photoshop and Corel Photopaint

Figure 2 (a) is the same as Fig 1 (d) which was obtained by reducing the resolution of

the 1250 dpi in Fig 1(a) to 72 dpi (the size shrank from 3692 times 2812 to 213 times 162) and

then zooming the reduced image back to its original size To generate Fig 1(d) nearest

neighbor interpolation was used (both for shrinking and zooming)

Figures 2(b) and (c) were generated using the same steps but using bilinear and bicubic

interpolation respectively Figures 2(d)+(e)+(f) were obtained by reducing the resolution

from 1250 dpi to 150 dpi (instead of 72 dpi)

Digital Image Processing

Week 1

Fig 2 ndash Interpolation examples for zooming and shrinking (nearest neighbor linear bicubic)

Digital Image Processing

Week 1

Neighbors of a Pixel

A pixel p at coordinates (x y) has 4 horizontal and vertical neighbors horizontal vertical( 1 ) ( 1 ) ( 1) ( 1)x y x y x y x y

This set of pixels called the 4-neighbors of p denoted by N4 (p)

The 4 diagonal neighbors of p have coordinates ( 1 1) ( 1 1) ( 1 1) ( 1 1)x y x y x y x y

and are denoted ND(p)

The horizontal vertical and diagonal neighbors are called the 8-neighbors of p denoted

N8 (p)

If (xy) is on the border of the image some of the neighbor locations in ND(p) and N8(p)

fall outside the image

Digital Image Processing

Week 1

Adjacency Connectivity Regions Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image V 01 (V=0 V=1)

- in a gray-scale image with 256 possible gray-levels V can be any subset of 0255

We consider 3 types of adjacency

(a) 4-adjacency two pixels p and q with values from V are 4-adjacent if 4( )q N p

(b) 8-adjacency two pixels p and q with values from V are 8-adjacent if 8( )q N p

(c) m-adjacency (mixed adjacency) two pixels p and q with values from V are

m-adjacent if

4( )q N p or

( )Dq N p and the set 4 4( ) ( )N p N q has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency It is introduced to eliminate the

ambiguities that often arise when 8-adjacency is used Consider the example

Digital Image Processing

Week 1

binary image

0 1 1 0 1 1 0 1 1

1 0 1 0 0 1 0 0 1 0

0 0 1 0 0 1 0 0 1

V

The three pixels at the top (first line) in the above example show multiple (ambiguous)

8-adjacency as indicated by the dashed lines This ambiguity is removed by using

m-adjacency

Digital Image Processing

Week 1

A (digital) path (or curve) from pixel p with coordinates (xy) to q with coordinates (st)

is a sequence of distinct pixels with coordinates

and are adjacent 0 0 1 1

1 1

( ) ( ) ( ) ( ) ( )( ) ( ) 12

n n

i i i i

x y x y x y x y s tx y x y i n

The length of the path is n If 0 0( ) ( )n nx y x y the path is closed

Depending on the type of adjacency considered the paths are 4- 8- or m-paths

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image R is a region of the image if R is a connected set

Two regions R1 and R2 are said to be adjacent if 1 2R R form a connected set Regions

that are not adjacent are said to be disjoint When referring to regions only 4- and

8-adjacency are considered

Digital Image Processing

Week 1

Suppose that an image contains K disjoint regions 1 kR k K none of which

touches the image border

the complement of 1

( )K

cu k u u

k

R R R R

We call all the points in Ru the foreground of the image and the points in ( )cuR the

background of the image

The boundary (border or contour) of a region R is the set of points that are adjacent to

points in the complement of R (R)c The border of an image is the set of pixels in the

region that have at least one background neighbor This definition is referred to as the

inner border to distinguish it from the notion of outer border which is the corresponding

border in the background

Digital Image Processing

Week 1

Distance measures

For pixels p q and z with coordinates (xy) (st) and (vw) respectively D is a distance

function or metric if

(a) D(p q) ge 0 D(p q) = 0 iff p=q

(b) D(p q) = D(q p)

(c) D(p z) le D(p q) + D(q z)

The Euclidean distance between p and q is defined as 1

2 2 2 22( ) ( ) ( ) ( ) ( )eD p q x s y t x s y t

The pixels q for which ( )eD p q r are the points contained in a disk of radius r

centered at (x y)

Digital Image Processing

Week 1

The D4 distance (also called city-block distance) between p and q is defined as

4( ) | | | |D p q x s y t

The pixels q for which 4( )D p q r form a diamond centered at (xy)

4

22 1 2

2 2 1 0 1 22 1 2

2

D

The pixels with D4 = 1 are the 4-neighbors of (x y)

The D8 distance (called the chessboard distance) between p and q is defined as

8( ) max| | | |D p q x s y t

The pixels q for which 8( )D p q r form a square centered at (x y)

Digital Image Processing

Week 1

8

2 2 2 2 22 1 1 1 2

2 2 1 0 1 22 1 1 1 22 2 2 2 2

D

The pixels with D8 = 1 are the 8-neighbors of (x y)

D4 and D8 distances are independent of any paths that might exist between p and q

because these distances involve only the coordinates of the point


Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

Consider the 2×2 images (arrays)

A = [a11 a12; a21 a22]   and   B = [b11 b12; b21 b22]

Array product:

A .* B = [a11·b11  a12·b12; a21·b21  a22·b22]

Matrix product:

A·B = [a11·b11 + a12·b21   a11·b12 + a12·b22; a21·b11 + a22·b21   a21·b12 + a22·b22]

We assume array operations unless stated otherwise
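In NumPy, for example, the two products are distinct operators (an illustrative aside: `*` is the elementwise array product, `@` is the matrix product):

import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A * B)   # array (elementwise) product: [[ 5 12] [21 32]]
print(A @ B)   # matrix product:              [[19 22] [43 50]]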


Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether it is

linear or nonlinear

H[f(x, y)] = g(x, y)

H is said to be a linear operator if

H[a·f1(x, y) + b·f2(x, y)] = a·H[f1(x, y)] + b·H[f2(x, y)]

for any scalars a, b and any images f1, f2.

Example of a nonlinear operator: H[f] = max{f(x, y)} – the maximum value of the pixels

of image f.

Let f1 = [0 2; 2 3], f2 = [6 5; 4 7], a = 1, b = −1. Then

max[a·f1 + b·f2] = max[ [−6 −3; −2 −4] ] = −2

while

a·max[f1] + b·max[f2] = 1·3 + (−1)·7 = −4 ≠ −2

so the max operator does not satisfy the linearity condition.
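The same check can be scripted; a short NumPy sketch reproducing the computation above:

import numpy as np

f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 5], [4, 7]])
a, b = 1, -1

lhs = np.max(a * f1 + b * f2)            # max of the combined image: -2
rhs = a * np.max(f1) + b * np.max(f2)    # combination of the maxima: -4
print(lhs, rhs, lhs == rhs)              # -2 -4 False -> max is nonlinear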

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise: g(x, y) = f(x, y) + η(x, y)

f(x, y) – the noiseless image; η(x, y) – the noise, uncorrelated and with 0 average value.

For a random variable z with mean m, E[(z − m)²] is the variance (E(·) is the expected

value). The covariance of two random variables z1 and z2 is defined as E[(z1 − m1)(z2 − m2)].

The two random variables are uncorrelated when their covariance is 0.


Objective: reduce noise by adding (averaging) a set of noisy images g_i(x, y) (a technique

frequently used in image enhancement):

ḡ(x, y) = (1/K)·Σ_{i=1}^{K} g_i(x, y)

If the noise satisfies the properties stated above we have

E[ḡ(x, y)] = f(x, y)   and   σ²_ḡ(x,y) = (1/K)·σ²_η(x,y)

E[ḡ(x, y)] is the expected value of ḡ, and σ²_ḡ(x,y) and σ²_η(x,y) are the variances of ḡ

and η, respectively. The standard deviation (square root of the variance) at any point in

the average image is

σ_ḡ(x,y) = (1/√K)·σ_η(x,y)


As K increases, the variability (as measured by the variance or the standard deviation) of

the pixel values at each location (x, y) decreases. Because E[ḡ(x, y)] = f(x, y), this

means that ḡ(x, y) approaches f(x, y) as the number of noisy images used in the

averaging process increases.

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis. Figure 3(a) shows an 8-bit image in which corruption

was simulated by adding to it Gaussian noise with zero mean and a standard deviation of

64 intensity levels. Figures 3(b)-(f) show the result of averaging 5, 10, 20, 50 and 100

images, respectively.


Fig. 3 – (a) Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise; (b)–(f) results of averaging 5, 10, 20, 50 and 100 noisy images
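A small simulation of this effect (a sketch, not the original experiment: the "image" is a constant synthetic array, and the noise is Gaussian with standard deviation 64, as in the example above):

import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 100.0)     # synthetic noiseless image

def averaged(K):
    # average K independently corrupted copies g_i = f + noise
    return np.mean([f + rng.normal(0.0, 64.0, f.shape) for _ in range(K)], axis=0)

for K in (1, 5, 10, 20, 50, 100):
    print(K, round(float(np.std(averaged(K) - f)), 1))   # residual noise ~ 64 / sqrt(K)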


A frequent application of image subtraction is in the enhancement of differences between

images

Fig. 4 – (a) Infrared image of the Washington, D.C. area; (b) image obtained from (a) by setting to zero the least

significant bit of each pixel; (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between

images (a) and (b). Black (0) values in Figure 4(c) indicate locations where there is no

difference between images (a) and (b).


Mask mode radiography: g(x, y) = f(x, y) − h(x, y)

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source. The procedure consists of injecting an X-ray contrast medium into the patient's

bloodstream, taking a series of images called live images (denoted f(x, y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed


Fig. 5 – Angiography subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c), enhanced


An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form g(x, y) = f(x, y)·h(x, y)

f(x, y) – the "perfect image"; h(x, y) – the shading function.

When the shading function is known:

f(x, y) = g(x, y) / h(x, y)

If h(x, y) is unknown, but we have access to the imaging system, we can obtain an

approximation to the shading function by imaging a target of constant intensity. When the

sensor is not available, the shading pattern can often be estimated from the image itself.
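A minimal sketch of shading correction when h(x, y) is known (the image and shading pattern below are synthetic, for illustration only):

import numpy as np

rng = np.random.default_rng(1)
f = rng.uniform(0.2, 1.0, (64, 64))       # hypothetical "perfect" image
yy, xx = np.mgrid[0:64, 0:64]
h = 0.5 + 0.5 * xx / 63.0                 # synthetic shading: darker on the left
g = f * h                                 # what the sensor actually delivers

f_hat = g / h                             # correction with the known shading function
print(np.allclose(f_hat, f))              # True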


Fig. 6 – Shading correction: (a) shaded image of a tungsten filament, magnified approximately 130 times; (b) shading pattern; (c) corrected image


Another use of image multiplication is in masking, also called region of interest (ROI),

operations. The process consists of multiplying a given image by a mask image that has

1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask

image, and the shape of the ROI can be arbitrary, but usually it is rectangular.

Fig. 7 – (a) digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b)


In practice, most images are displayed using 8 bits; the image values are in the range

[0, 255]. For TIFF or JPEG images, conversion to this range is automatic, and depends on

the system used. The difference of two images can produce an image with values in the

range [−255, 255]; the addition of two images produces values in the range [0, 510].

Many software packages simply set the negative values to 0 and set to 255 all values

greater than 255. A more appropriate procedure: compute

f_m = f − min(f)   (the minimum value of f_m is 0)

f_s = K·f_m / max(f_m)   (values in the range [0, K]; K = 255 for an 8-bit display)
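The two-step rescaling is direct to implement; a sketch (the input here is a hypothetical image difference with negative values):

import numpy as np

def rescale(f, K=255):
    fm = f - f.min()                        # f_m = f - min(f): minimum becomes 0
    return (K * fm / fm.max()).astype(np.uint8)   # f_s in [0, K]

d = np.array([[-255.0, 0.0], [100.0, 255.0]])     # e.g. a difference of two images
print(rescale(d))   # [[  0 127] [177 255]]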


Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels: s = T(z)

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image


Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

g(x, y) = (1/(m·n)) · Σ_{(r,c) ∈ S_xy} f(r, c)

The net effect is to perform local blurring in the original image. This type of process is

used, for example, to eliminate small details and thus render "blobs" corresponding to the

largest regions of an image.
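A direct (unoptimized) sketch of this neighborhood average; the replicated-edge border handling is an assumption, since the notes do not specify a border policy:

import numpy as np

def mean_filter(f, m=3, n=3):
    M, N = f.shape
    pm, pn = m // 2, n // 2
    fp = np.pad(f.astype(float), ((pm, pm), (pn, pn)), mode='edge')
    g = np.empty((M, N))
    for x in range(M):
        for y in range(N):
            g[x, y] = fp[x:x + m, y:y + n].mean()   # average over S_xy
    return g

img = np.arange(25, dtype=float).reshape(5, 5)
print(mean_filter(img).round(1))   # local blurring of the 5x5 test image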


Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules).

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation: (x, y) = T[(v, w)]

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image


T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform:

[x  y  1] = [v  w  1]·T = [v  w  1]·[t11  t12  0; t21  t22  0; t31  t32  1]      (AT)

that is, x = t11·v + t21·w + t31 and y = t12·v + t22·w + t32.

This transform can scale, rotate, translate, or shear a set of coordinate points, depending

on the elements of the matrix T. If we want to resize an image, rotate it, and move the

result to some location, we simply form a 3×3 matrix equal to the matrix product of the

scaling, rotation, and translation matrices from Table 1.

Table 1 – Affine transformations (scaling, rotation, translation, shear matrices)

The preceding transformations relocate pixels of an image to new locations. To complete

the process, we have to assign intensity values to those locations. This task is done

using intensity interpolation (nearest neighbor, bilinear, or bicubic interpolation).

In practice we can use equation (AT) in two basic ways

forward mapping: scan the pixels of the input image (v, w); compute the new spatial

location (x, y) of the corresponding pixel in the new image using (AT) directly.

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)


inverse mapping: scan the output pixel locations and, at each location (x, y),

compute the corresponding location in the input image: (v, w) = T^(−1)[(x, y)].

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (e.g., MATLAB).
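A compact sketch of inverse mapping for the affine model (AT), using the row-vector convention [x y 1] = [v w 1]·T from above and nearest-neighbor interpolation; the scaling matrix is an arbitrary example:

import numpy as np

def affine_warp(img, T):
    # Inverse mapping: for each output pixel (x, y), sample the input at
    # (v, w) obtained by mapping (x, y) through T^(-1), rounded to the nearest pixel.
    H, W = img.shape
    Tinv = np.linalg.inv(T)
    xs, ys = np.meshgrid(np.arange(H), np.arange(W), indexing='ij')
    xy1 = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)], axis=1)
    vw1 = xy1 @ Tinv                          # rows are [v, w, 1]
    v = np.rint(vw1[:, 0]).astype(int)
    w = np.rint(vw1[:, 1]).astype(int)
    ok = (v >= 0) & (v < H) & (w >= 0) & (w < W)
    out = np.zeros_like(img)
    out.ravel()[ok] = img[v[ok], w[ok]]       # locations outside the input stay 0
    return out

scale2 = np.array([[2.0, 0.0, 0.0],
                   [0.0, 2.0, 0.0],
                   [0.0, 0.0, 1.0]])          # example: zoom by 2 in both directions
img = np.arange(16.0).reshape(4, 4)
print(affine_warp(img, scale2))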


Image registration – align two or more images of the same scene.

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more images taken at approximately the

same time, but using different imaging systems (e.g., an MRI scanner and a PET scanner);

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image


How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling. Suppose we have a set

of 4 tie points, both on the input image and on the reference image. A simple model based on

a bilinear approximation is given by

x = c1·v + c2·w + c3·v·w + c4

y = c5·v + c6·w + c7·v·w + c8

where (v, w) and (x, y) are the coordinates of the tie points (we get an 8×8 linear system for the c_i).
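With 4 tie points, the 8×8 system decouples into two 4×4 systems (one for c1–c4, one for c5–c8); a sketch with made-up tie-point coordinates:

import numpy as np

# (v, w): tie points in the input image; (x, y): tie points in the reference image.
vw = np.array([[10.0, 10.0], [10.0, 200.0], [200.0, 10.0], [200.0, 200.0]])
xy = np.array([[12.0, 15.0], [14.0, 210.0], [205.0, 12.0], [212.0, 208.0]])

A = np.column_stack([vw[:, 0], vw[:, 1], vw[:, 0] * vw[:, 1], np.ones(4)])
c1_4 = np.linalg.solve(A, xy[:, 0])   # x = c1*v + c2*w + c3*v*w + c4
c5_8 = np.linalg.solve(A, xy[:, 1])   # y = c5*v + c6*w + c7*v*w + c8
print(c1_4, c5_8)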


When 4 tie points are insufficient to obtain satisfactory registration, an approach used

frequently is to select a larger number of tie points and, using this new set of tie points,

subdivide the image into rectangular regions marked by groups of 4 tie points. On the

subregions marked by 4 tie points, we apply the transformation model described above.

The number of tie points and the sophistication of the model required to solve the

registration problem depend on the severity of the geometric distortion.


(a) – reference image; (b) – geometrically distorted image; (c) – registered image; (d) – difference between (a) and (c)


Probabilistic Methods

z_i, i = 0, 1, …, L−1 – the values of all possible intensities in an M×N digital image

p(zk) = the probability that the intensity level zk occurs in the given image

p(z_k) = n_k / (M·N)

where n_k = the number of times that intensity z_k occurs in the image (MN is the total

number of pixels in the image). Note that Σ_{k=0}^{L−1} p(z_k) = 1.

The mean (average) intensity of an image is given by

m = Σ_{k=0}^{L−1} z_k·p(z_k)


The variance of the intensities is

σ² = Σ_{k=0}^{L−1} (z_k − m)²·p(z_k)

The variance is a measure of the spread of the values of z about the mean, so it is a

measure of image contrast. Usually, for measuring image contrast, the standard deviation

(σ) is used.

The n-th moment of a random variable z about the mean is defined as

μ_n(z) = Σ_{k=0}^{L−1} (z_k − m)^n·p(z_k)

(Note that μ_0(z) = 1, μ_1(z) = 0 and μ_2(z) = σ².)

μ_3(z) > 0 – the intensities are biased to values higher than the mean

μ_3(z) < 0 – the intensities are biased to values lower than the mean

μ_3(z) = 0 – the intensities are distributed approximately equally on both sides of the mean

Fig. 1 – (a) low contrast; (b) medium contrast; (c) high contrast

Figure 1(a) – standard deviation 14.3 (variance = 204.5)

Figure 1(b) – standard deviation 31.6 (variance = 998.6)

Figure 1(c) – standard deviation 49.2 (variance = 2420.6)
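These statistics follow directly from the normalized histogram; a short sketch (the image here is random, purely for illustration):

import numpy as np

img = np.random.default_rng(2).integers(0, 256, (100, 100))
L = 256
p = np.bincount(img.ravel(), minlength=L) / img.size   # p(z_k) = n_k / MN

z = np.arange(L)
m = np.sum(z * p)                      # mean intensity
var = np.sum((z - m) ** 2 * p)         # variance, mu_2(z)
mu3 = np.sum((z - m) ** 3 * p)         # third moment about the mean
print(m, np.sqrt(var), mu3)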


Intensity Transformations and Spatial Filtering

g(x, y) = T[f(x, y)]

f(x, y) – input image; g(x, y) – output image; T – an operator on f defined over a

neighborhood of (x, y).

- the neighborhood S_xy of the point (x, y) usually is rectangular, centered on (x, y),

and much smaller in size than the image


- spatial filtering: the operator T (the neighborhood and the operation applied on it) is

called a spatial filter (also: spatial mask, kernel, template, or window).

When S_xy has size 1×1 (a single pixel), T becomes an intensity (gray-level or mapping)

transformation function

s = T(r)

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2 (left) – T produces an output image of higher contrast than the original, by

darkening the intensity levels below k and brightening the levels above k – this technique

is called contrast stretching.

Fig. 2 – Intensity transformation functions: left – contrast stretching; right – thresholding function


Figure 2 (right) – T produces a binary output image. A mapping of this form is called a

thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L−1] is obtained using the function s = T(r) = L − 1 − r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image


Original image (left) – negative image (right)


Log Transformations: s = T(r) = c·log(1 + r), c – constant, r ≥ 0

Some basic intensity transformation functions


This transformation maps a narrow range of low intensity values in the input into a wider

range. An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values. The opposite is true for the inverse log

transformation. The log functions compress the dynamic range of images with large

variations in pixel values.

Figure 4(a) – intensity values in the range 0 to 1.5 × 10^6

Figure 4(b) – log transformation of Figure 4(a) with c = 1 – range 0 to 6.2

Fig. 4 – (a) Fourier spectrum; (b) log transformation applied to (a), c = 1


Power-Law (Gamma) Transformations

s = T(r) = c·r^γ, with c, γ – positive constants (sometimes written s = c·(r + ε)^γ, to account for an offset when the input is 0)

Plots of gamma transformation for different values of γ (c=1)


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range

of output values, with the opposite being true for higher values of input values. The

curves with γ > 1 have the opposite effect of those generated with values of γ < 1.

c = γ = 1 – the identity transformation.

A variety of devices used for image capture, printing, and display respond according to a

power law. The process used to correct these power-law response phenomena is called

gamma correction.


(a) aerial image; (b)–(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0 and 5.0, respectively
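The negative, log, and gamma transformations are all simple look-up operations on the intensity levels; a sketch for an 8-bit range (the constant for the log is chosen so the output spans [0, 255], and γ = 0.4 is an arbitrary example value):

import numpy as np

r = np.arange(256, dtype=float)               # all input intensities, L = 256

negative = 255.0 - r                          # s = L - 1 - r
log_t = 255.0 / np.log(256.0) * np.log(1.0 + r)   # s = c*log(1 + r)
gamma_t = 255.0 * (r / 255.0) ** 0.4          # s = c*r^gamma with gamma = 0.4

print(negative[[0, 255]])                     # [255.   0.]
print(log_t[[0, 255]].round(1))               # [  0. 255.]
print(gamma_t[[0, 255]])                      # [  0. 255.]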


Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

Fig. 5 – (a) form of the transformation function; (b) low-contrast image; (c) result of contrast stretching; (d) result of thresholding


The transformation is defined by the points (r1, s1) and (r2, s2):

T(r) = (s1/r1)·r                                  for r ∈ [0, r1]

T(r) = s1 + ((s2 − s1)/(r2 − r1))·(r − r1)        for r ∈ (r1, r2]

T(r) = s2 + ((L − 1 − s2)/(L − 1 − r2))·(r − r2)  for r ∈ (r2, L − 1]


r1 = s1 and r2 = s2: identity transformation (no change)

r1 = r2, s1 = 0 and s2 = L − 1: thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) – contrast stretching obtained by setting the parameters (r1, s1) = (r_min, 0) and

(r2, s2) = (r_max, L − 1), where r_min and r_max denote the minimum and maximum gray levels

in the image, respectively. Thus, the transformation function stretched the levels linearly

from their original range to the full range [0, L−1].

Figure 5(d) – the thresholding function was used, with (r1, s1) = (m, 0) and

(r2, s2) = (m, L − 1), where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times
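A sketch of the piecewise-linear stretch above (assuming 0 < r1 ≤ r2 < L − 1; the sample image values are invented):

import numpy as np

def contrast_stretch(f, r1, s1, r2, s2, L=256):
    f = f.astype(float)
    low = s1 / r1 * f                                  # segment for r in [0, r1]
    mid = s1 + (s2 - s1) / (r2 - r1) * (f - r1)        # segment for r in (r1, r2]
    high = s2 + (L - 1 - s2) / (L - 1 - r2) * (f - r2) # segment for r in (r2, L-1]
    return np.where(f <= r1, low, np.where(f <= r2, mid, high))

img = np.array([[30.0, 90.0], [150.0, 220.0]])
out = contrast_stretch(img, r1=img.min(), s1=0, r2=img.max(), s2=255)
print(out)   # levels stretched linearly to the full range [0, 255]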


Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing:

1. display in one value (white, for example) all the values in the range of interest, and in

another (say, black) all other intensities (Figure 3.11(a));

2. brighten (or darken) the desired range of intensities, but leave unchanged all other

intensities in the image (Figure 3.11(b)).


Figure 6 (left) – aortic angiogram near the kidney. The purpose of intensity slicing is to

highlight the major blood vessels, which appear brighter as a result of injecting a contrast

medium. Figure 6 (middle) shows the result of applying the first technique for a band near

the top of the scale of intensities. This type of enhancement produces a binary image.

(The first slicing transformation highlights intensity range [A, B] and reduces all other

intensities to a lower level; the second highlights range [A, B] and preserves all other intensities.)


This binary image is useful for studying the shape of the flow of the contrast substance (to detect

blockages…).

In Figure 6 (right), the second technique was used: a band of intensities in the mid-gray

range around the mean intensity was set to black; the other intensities remain unchanged.

Fig. 6 – Aortic angiogram and intensity-sliced versions


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2.

This technique highlights the contribution made to the whole image appearance by each

of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 – the lowest-order bit, plane 8 – the highest-order bit).


The binary image for the 8th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1. The bit-slicing technique is useful for analyzing the relative importance of each bit in the image – it helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
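A sketch of bit-plane slicing (planes indexed 0–7 here, so the slides' plane k corresponds to index k − 1; the test image is random):

import numpy as np

img = np.random.default_rng(3).integers(0, 256, (4, 4), dtype=np.uint8)

planes = [(img >> k) & 1 for k in range(8)]        # plane 0 = lowest-order bit

# The highest-order plane is exactly the 0..127 -> 0, 128..255 -> 1 threshold:
print(np.array_equal(planes[7], (img >= 128).astype(np.uint8)))   # True

# Reconstruction from the top 4 planes only (coarse requantization):
approx = sum(planes[k].astype(np.uint16) << k for k in range(4, 8))
print(img)
print(approx)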

  • DIP 1 2017
  • DIP 02 (2017)
Page 65: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image ProcessingDigital Image Processing

Week 1Week 1

the physical meaning is determined by the source of the image

( )f D f x y

Image generated from a physical process f(xy) ndash proportional to the energy radiated by the physical source

f(xy) ndash characterized by two components

i(xy) = illumination component the amount of source illumination incident on the scene being viewed

r(xy) = reflectance component the amount of illumination reflected by the objects in the scene

( ) ( ) ( )

0 ( ) 0 ( ) 1

f x y i x y r x y

i x y r x y

Digital Image ProcessingDigital Image Processing

Week 1Week 1

r(xy)=0 - total absorption r(xy)=1 - total reflectance

i(xy) ndash determined by the illumination source

r(xy) ndash determined by the characteristics of the imaged objects

is called gray (or intensity) scale

In practice

min 0 0 max min min min max max max( ) L l f x y L L i r L i r

indoor values without additional illuminationmin max10 1000L L

black whitemin max0 1 0 1 0 1L L L L l l L

min maxL L

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image Processing

Week 1

Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed

scene

converting a continuous image f to digital form

- digitizing (x y) is called sampling

- digitizing f(x y) is called quantization

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Continuous image projected onto a sensor array Result of image sampling and quantization

Digital Image Processing

Week 1

Representing Digital Images (xy) x = 01hellipM-1 y = 01hellipN-1 ndash spatial variables or spatial coordinates

(00) (01) (0 1)(10) (11) (1 1)

( )

( 10) ( 11) ( 1 1)

f f f Nf f f N

f x y

f M f M f M N

image element pixel

00 01 0 1

10 11 1 1

10 11 1 1

( ) ( )

N

i jN M N

i j

M M M N

a a aa f x i y j f i ja a a

Aa

a a a

f(00) ndash the upper left corner of the image

Digital Image Processing

Week 1

M N ge 0 L=2k

[0 1]i j i ja a L

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system Upper limit ndash determined by saturation lower limit - noise

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Number of bits required to store a digitized image

for 2 b M N k M N b N k

When an image can have 2k intensity levels the image is referred as a k-bit image 256 discrete intensity values ndash 8-bit image

Digital Image Processing

Week 1

Spatial and Intensity Resolution Spatial resolution ndash the smallest discernible detail in an image

Measures line pairs per unit distance dots (pixels) per unit distance

Image resolution = the largest number of discernible line pairs per unit distance

(eg 100 line pairs per mm)

Dots per unit distance are commonly used in printing and publishing

In US the measure is expressed in dots per inch (dpi)

(newspapers are printed with 75 dpi glossy brochures at 175 dpi)

Intensity resolution ndash the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations

L=2k ndash most common k = 8

Intensity resolution in practice is given by k (number of bits used to quantize intensity)

Digital Image Processing

Week 1

Fig1 Reducing spatial resolution 1250 dpi(upper left) 300 dpi (upper right)

150 dpi (lower left) 72 dpi (lower right)

Digital Image Processing

Week 1

Reducing the number of gray levels 256 128 64 32

Digital Image Processing

Week 1

Reducing the number of gray levels 16 8 4 2

Digital Image Processing

Week 1

Image Interpolation - used in zooming shrinking rotating and geometric corrections

Shrinking zooming ndash image resizing ndash image resampling methods

Interpolation is the process of using known data to estimate values at unknown locations

Suppose we have an image of size 500 times 500 pixels that has to be enlarged 15 times to

750times750 pixels One way to do this is to create an imaginary 750 times 750 grid with the

same spacing as the original and then shrink it so that it fits exactly over the original

image The pixel spacing in the 750 times 750 grid will be less than in the original image

Problem assignment of intensity-level in the new 750 times 750 grid

Nearest neighbor interpolation assign for every point in the new grid (750 times 750) the

intensity of the closest pixel (nearest neighbor) from the oldoriginal grid (500 times 500)

This technique has the tendency to produce undesirable effects like severe distortion of

straight edges

Digital Image Processing

Week 1

Bilinear interpolation ndash assign for the new (x y) location the following intensity ( )v x y a x b y c x y d

where the four coefficients are determined from the 4 equations in 4 unknowns that can

be written using the 4 nearest neighbors of point (x y)

Bilinear interpolation gives much better results than nearest neighbor interpolation with a

modest increase in computational effort

Bicubic interpolation ndash assign for the new (x y) location an intensity that involves the 16

nearest neighbors of the point 3 3

0 0

( ) i ji j

i jv x y c x y

The coefficients cij are obtained solving a 16x16 linear system

intensity levels of the 16 nearest neighbors of 3 3

0 0

( )i ji j

i jc x y x y

Digital Image Processing

Week 1

Generally bicubic interpolation does a better job of preserving fine detail than the

bilinear technique Bicubic interpolation is the standard used in commercial image editing

programs such as Adobe Photoshop and Corel Photopaint

Figure 2 (a) is the same as Fig 1 (d) which was obtained by reducing the resolution of

the 1250 dpi in Fig 1(a) to 72 dpi (the size shrank from 3692 times 2812 to 213 times 162) and

then zooming the reduced image back to its original size To generate Fig 1(d) nearest

neighbor interpolation was used (both for shrinking and zooming)

Figures 2(b) and (c) were generated using the same steps but using bilinear and bicubic

interpolation respectively Figures 2(d)+(e)+(f) were obtained by reducing the resolution

from 1250 dpi to 150 dpi (instead of 72 dpi)

Digital Image Processing

Week 1

Fig 2 ndash Interpolation examples for zooming and shrinking (nearest neighbor linear bicubic)

Digital Image Processing

Week 1

Neighbors of a Pixel

A pixel p at coordinates (x y) has 4 horizontal and vertical neighbors horizontal vertical( 1 ) ( 1 ) ( 1) ( 1)x y x y x y x y

This set of pixels called the 4-neighbors of p denoted by N4 (p)

The 4 diagonal neighbors of p have coordinates ( 1 1) ( 1 1) ( 1 1) ( 1 1)x y x y x y x y

and are denoted ND(p)

The horizontal vertical and diagonal neighbors are called the 8-neighbors of p denoted

N8 (p)

If (xy) is on the border of the image some of the neighbor locations in ND(p) and N8(p)

fall outside the image

Digital Image Processing

Week 1

Adjacency Connectivity Regions Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image V 01 (V=0 V=1)

- in a gray-scale image with 256 possible gray-levels V can be any subset of 0255

We consider 3 types of adjacency

(a) 4-adjacency two pixels p and q with values from V are 4-adjacent if 4( )q N p

(b) 8-adjacency two pixels p and q with values from V are 8-adjacent if 8( )q N p

(c) m-adjacency (mixed adjacency) two pixels p and q with values from V are

m-adjacent if

4( )q N p or

( )Dq N p and the set 4 4( ) ( )N p N q has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency It is introduced to eliminate the

ambiguities that often arise when 8-adjacency is used Consider the example

Digital Image Processing

Week 1

binary image

0 1 1 0 1 1 0 1 1

1 0 1 0 0 1 0 0 1 0

0 0 1 0 0 1 0 0 1

V

The three pixels at the top (first line) in the above example show multiple (ambiguous)

8-adjacency as indicated by the dashed lines This ambiguity is removed by using

m-adjacency

Digital Image Processing

Week 1

A (digital) path (or curve) from pixel p with coordinates (xy) to q with coordinates (st)

is a sequence of distinct pixels with coordinates

and are adjacent 0 0 1 1

1 1

( ) ( ) ( ) ( ) ( )( ) ( ) 12

n n

i i i i

x y x y x y x y s tx y x y i n

The length of the path is n If 0 0( ) ( )n nx y x y the path is closed

Depending on the type of adjacency considered the paths are 4- 8- or m-paths

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image R is a region of the image if R is a connected set

Two regions R1 and R2 are said to be adjacent if 1 2R R form a connected set Regions

that are not adjacent are said to be disjoint When referring to regions only 4- and

8-adjacency are considered

Digital Image Processing

Week 1

Suppose that an image contains K disjoint regions 1 kR k K none of which

touches the image border

the complement of 1

( )K

cu k u u

k

R R R R

We call all the points in Ru the foreground of the image and the points in ( )cuR the

background of the image

The boundary (border or contour) of a region R is the set of points that are adjacent to

points in the complement of R (R)c The border of an image is the set of pixels in the

region that have at least one background neighbor This definition is referred to as the

inner border to distinguish it from the notion of outer border which is the corresponding

border in the background

Digital Image Processing

Week 1

Distance measures

For pixels p q and z with coordinates (xy) (st) and (vw) respectively D is a distance

function or metric if

(a) D(p q) ge 0 D(p q) = 0 iff p=q

(b) D(p q) = D(q p)

(c) D(p z) le D(p q) + D(q z)

The Euclidean distance between p and q is defined as 1

2 2 2 22( ) ( ) ( ) ( ) ( )eD p q x s y t x s y t

The pixels q for which ( )eD p q r are the points contained in a disk of radius r

centered at (x y)

Digital Image Processing

Week 1

The D4 distance (also called city-block distance) between p and q is defined as

4( ) | | | |D p q x s y t

The pixels q for which 4( )D p q r form a diamond centered at (xy)

4

22 1 2

2 2 1 0 1 22 1 2

2

D

The pixels with D4 = 1 are the 4-neighbors of (x y)

The D8 distance (called the chessboard distance) between p and q is defined as

8( ) max| | | |D p q x s y t

The pixels q for which 8( )D p q r form a square centered at (x y)

Digital Image Processing

Week 1

8

2 2 2 2 22 1 1 1 2

2 2 1 0 1 22 1 1 1 22 2 2 2 2

D

The pixels with D8 = 1 are the 8-neighbors of (x y)

D4 and D8 distances are independent of any paths that might exist between p and q

because these distances involve only the coordinates of the point

Digital Image Processing

Week 1

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

11 12 11 12

21 22 21 22

a a b ba a b b

Array product

11 12 11 12 11 11 12 12

21 22 21 22 21 21 22 21

a a b b a b a ba a b b a b a b

Matrix product

11 12 11 12 11 11 12 21 11 12 12 21

21 22 21 22 21 11 22 21 21 12 22 22

a a b b a b a b a b a ba a b b a b a b a b a b

We assume array operations unless stated otherwise

Digital Image Processing

Week 1

Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether it is

linear or nonlinear

( ) ( )H f x y g x y

H is said to be a linear operator if

images1 2 1 2

1 2

( ) ( ) ( ) ( )

H a f x y b f x y a H f x y b H f x y

a b f f

Example of nonlinear operator

the maximum value of the pixels of image max ( )H f f x y f

1 2

0 2 6 5 1 1

2 3 4 7f f a b

Digital Image Processing

Week 1

1 2

0 2 6 5 6 3max max 1 ( 1) max 2

2 3 4 7 2 4a f b f

0 2 6 51 max ( 1) max 3 ( 1)7 4

2 3 4 7

Arithmetic Operations in Image Processing

Let g(xy) denote a corrupted image formed by the addition of noise ( ) ( ) ( )g x y f x y x y

f(xy) ndash the noiseless image η(xy) the noise uncorrelated and has 0 average value

For a random variable z with mean m E[(z-m)2] is the variance (E( ∙ ) is the expected

value) The covariance of two random variables z1 and z2 is defined as E[(z1-m1) (z2-m2)]

The two random variables are uncorrelated when their covariance is 0

Digital Image Processing

Week 1

Objective reduce noise by adding a set of noisy images ( )ig x y (technique frequently

used in image enhancement)

1

1( ) ( )K

ii

g x y g x yK

If the noise satisfies the properties stated above we have

2 2( ) ( )

1( ) ( ) g x y x yE g x y f x yK

( ( ))E g x y is the expected value of g and and 2 2( ) ( )g x y x y are the variances of

and g respectively The standard deviation (square root of the variance) at any point in

the average image is

( ) ( )1

g x y x yK

Digital Image Processing

Week 1

As K increases the variability (as measured by the variance or the standard deviation) of

the pixel values at each location (x y) decreases Because ( ) ( )E g x y f x y this

means that ( )g x y approaches f(x y) as the number of noisy images used in the

averaging process increases

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis Figure 226(a) shows an 8-bit image in which corruption

was simulated by adding to it Gaussian noise with zero mean and a standard deviation of

64 intensity levels Figures 226(b)-(f) show the result of averaging 5 10 20 50 and 100

images respectively

Digital Image Processing

Week 1

Fig 3 Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner) Results of averaging 5 10 20 50

100 noisy images

a b c d e f

Digital Image Processing

Week 1

A frequent application of image subtraction is in the enhancement of differences between

images

(a) (b) (c)

Fig 4 (a) Infrared image of Washington DC area (b) Image obtained from (a) by setting to zero the least

significant bit of each pixel (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a) The two images seem almost the same Figure 4(c) is the difference between

images (a) and (b) Black (0) values in Figure (c) indicate locations where there is no

difference between images (a) and (b)

Digital Image Processing

Week 1

Mask mode radiography ( ) ( ) ( )g x y f x y h x y

h(x y) the mask is an X-ray image of a region of a patientrsquos body captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source The procedure consists of injecting an X-ray contrast medium into the patientrsquos

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed

Digital Image Processing

Week 1

a b c d Fig 5 ndash Angiography ndash subtraction example (a) ndash mask image (b) ndash live image (c) ndash difference between (a) and (b) (d) - image (c) enhanced

Digital Image Processing

Week 1

An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form ( ) ( ) ( )g x y f x y h x y

f(x y) ndash the ldquoperfect imagerdquo h(x y) ndash the shading function

When the shading function is known

( )( )( )

g x yf x yh x y

h(x y) is unknown but we have access to the imaging system we can obtain an

approximation to the shading function by imaging a target of constant intensity When the

sensor is not available often the shading pattern can be estimated from the image

Digital Image Processing

Week 1

(a) (b) (c)

Fig 6 Shading correction (a) ndash Shaded image of a tungsten filament magnified 130 times (b) - shading pattern (c) corrected image

Digital Image Processing

Week 1

Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1rsquos (white) in the ROI and 0rsquos elsewhere There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

(a) (b) (c)

Fig 7 (a) ndash digital dental X-ray image (b) - ROI mask for teeth with fillings (c) product of (a) and (b)

Digital Image Processing

Week 1

In practice most images are displayed using 8 bits the image values are in the range [0255] TIFF JPEG images ndash conversion to this range is automatic The conversion depends on the system used Difference of two images can produce image with values in the range [-255255] Addition of two images ndash range [0510] Many software packages simply set the negative values to 0 and set to 255 all values greater than 255 A more appropriate procedure compute

min( )mf f f

0 ( 255)max( )

ms

m

ff K K K

f

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Digital Image Processing

Week 1

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

( )

1( ) ( )xyr c S

g x y f r cm n

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation ( ) [( )]x y T v w

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image

Digital Image Processing

Week 1

[( )] ( )2 2v wT v w shrinks the original image half its size in both spatial directions

Affine transform

11 1211 21 31

21 2212 22 33

31 32

0[ 1] [ 1] [ 1] 0

1

t tx t v t w t

x y v w T v w t ty t v t w t

t t

(AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

Digital Image Processing

Week 1

inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image (v w) 1( ) ( )v w T x y

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 66: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image ProcessingDigital Image Processing

Week 1Week 1

r(xy)=0 - total absorption r(xy)=1 - total reflectance

i(xy) ndash determined by the illumination source

r(xy) ndash determined by the characteristics of the imaged objects

is called gray (or intensity) scale

In practice

min 0 0 max min min min max max max( ) L l f x y L L i r L i r

indoor values without additional illuminationmin max10 1000L L

black whitemin max0 1 0 1 0 1L L L L l l L

min maxL L

Digital Image ProcessingDigital Image Processing

Week 1Week 1

Digital Image Processing

Week 1

Image Sampling and Quantization

- the output of the sensors is a continuous voltage waveform related to the sensed

scene

converting a continuous image f to digital form

- digitizing (x y) is called sampling

- digitizing f(x y) is called quantization

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Continuous image projected onto a sensor array Result of image sampling and quantization

Digital Image Processing

Week 1

Representing Digital Images (xy) x = 01hellipM-1 y = 01hellipN-1 ndash spatial variables or spatial coordinates

(00) (01) (0 1)(10) (11) (1 1)

( )

( 10) ( 11) ( 1 1)

f f f Nf f f N

f x y

f M f M f M N

image element pixel

00 01 0 1

10 11 1 1

10 11 1 1

( ) ( )

N

i jN M N

i j

M M M N

a a aa f x i y j f i ja a a

Aa

a a a

f(00) ndash the upper left corner of the image

Digital Image Processing

Week 1

M N ge 0 L=2k

[0 1]i j i ja a L

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system Upper limit ndash determined by saturation lower limit - noise

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Number of bits required to store a digitized image

for 2 b M N k M N b N k

When an image can have 2k intensity levels the image is referred as a k-bit image 256 discrete intensity values ndash 8-bit image

Digital Image Processing

Week 1

Spatial and Intensity Resolution Spatial resolution ndash the smallest discernible detail in an image

Measures line pairs per unit distance dots (pixels) per unit distance

Image resolution = the largest number of discernible line pairs per unit distance

(eg 100 line pairs per mm)

Dots per unit distance are commonly used in printing and publishing

In US the measure is expressed in dots per inch (dpi)

(newspapers are printed with 75 dpi glossy brochures at 175 dpi)

Intensity resolution ndash the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations

L=2k ndash most common k = 8

Intensity resolution in practice is given by k (number of bits used to quantize intensity)

Digital Image Processing

Week 1

Fig1 Reducing spatial resolution 1250 dpi(upper left) 300 dpi (upper right)

150 dpi (lower left) 72 dpi (lower right)

Digital Image Processing

Week 1

Reducing the number of gray levels 256 128 64 32

Digital Image Processing

Week 1

Reducing the number of gray levels 16 8 4 2

Digital Image Processing

Week 1

Image Interpolation - used in zooming shrinking rotating and geometric corrections

Shrinking zooming ndash image resizing ndash image resampling methods

Interpolation is the process of using known data to estimate values at unknown locations

Suppose we have an image of size 500 times 500 pixels that has to be enlarged 15 times to

750times750 pixels One way to do this is to create an imaginary 750 times 750 grid with the

same spacing as the original and then shrink it so that it fits exactly over the original

image The pixel spacing in the 750 times 750 grid will be less than in the original image

Problem assignment of intensity-level in the new 750 times 750 grid

Nearest neighbor interpolation assign for every point in the new grid (750 times 750) the

intensity of the closest pixel (nearest neighbor) from the oldoriginal grid (500 times 500)

This technique has the tendency to produce undesirable effects like severe distortion of

straight edges

Digital Image Processing

Week 1

Bilinear interpolation ndash assign for the new (x y) location the following intensity ( )v x y a x b y c x y d

where the four coefficients are determined from the 4 equations in 4 unknowns that can

be written using the 4 nearest neighbors of point (x y)

Bilinear interpolation gives much better results than nearest neighbor interpolation with a

modest increase in computational effort
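The closed form below is a common way to evaluate the bilinear result directly from the 4 nearest neighbors; it is equivalent to solving the 4×4 system for a, b, c, d (a sketch, assuming (x, y) are row/column coordinates):

```python
import numpy as np

def bilinear_at(img: np.ndarray, x: float, y: float) -> float:
    """Bilinear interpolation at fractional (x, y); equivalent to fitting
    v(x, y) = a*x + b*y + c*x*y + d to the 4 nearest neighbors."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, img.shape[0] - 1)
    y1 = min(y0 + 1, img.shape[1] - 1)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[x0, y0] + dx * (1 - dy) * img[x1, y0]
            + (1 - dx) * dy * img[x0, y1] + dx * dy * img[x1, y1])

img = np.array([[0., 10.], [20., 30.]])
print(bilinear_at(img, 0.5, 0.5))   # 15.0, the average of the 4 neighbors
```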

Bicubic interpolation – assign to the new (x, y) location an intensity that involves the 16 nearest neighbors of the point:

$v(x, y) = \sum_{i=0}^{3} \sum_{j=0}^{3} c_{ij}\, x^i y^j$

The 16 coefficients $c_{ij}$ are obtained by solving a 16×16 linear system written using the intensity levels of the 16 nearest neighbors of (x, y).

Digital Image Processing

Week 1

Generally bicubic interpolation does a better job of preserving fine detail than the

bilinear technique Bicubic interpolation is the standard used in commercial image editing

programs such as Adobe Photoshop and Corel Photopaint

Figure 2(a) is the same as Fig. 1(d), which was obtained by reducing the resolution of the 1250 dpi image in Fig. 1(a) to 72 dpi (the size shrank from 3692 × 2812 to 213 × 162) and

then zooming the reduced image back to its original size To generate Fig 1(d) nearest

neighbor interpolation was used (both for shrinking and zooming)

Figures 2(b) and (c) were generated using the same steps but using bilinear and bicubic

interpolation respectively Figures 2(d)+(e)+(f) were obtained by reducing the resolution

from 1250 dpi to 150 dpi (instead of 72 dpi)

Digital Image Processing

Week 1

Fig. 2 – Interpolation examples for zooming and shrinking (nearest neighbor, bilinear, bicubic)

Digital Image Processing

Week 1

Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors: $(x+1, y),\ (x-1, y),\ (x, y+1),\ (x, y-1)$

This set of pixels, called the 4-neighbors of p, is denoted by $N_4(p)$.

The 4 diagonal neighbors of p have coordinates $(x+1, y+1),\ (x+1, y-1),\ (x-1, y+1),\ (x-1, y-1)$

and are denoted ND(p)

The horizontal vertical and diagonal neighbors are called the 8-neighbors of p denoted

N8 (p)

If (x, y) is on the border of the image, some of the neighbor locations in $N_D(p)$ and $N_8(p)$ fall outside the image.
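These neighborhoods are easy to enumerate in code; a small sketch (with a helper, inside, to discard the out-of-image locations just mentioned):

```python
def N4(x, y):
    """4-neighbors: horizontal and vertical."""
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def ND(x, y):
    """Diagonal neighbors."""
    return [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]

def N8(x, y):
    """8-neighbors: union of N4 and ND."""
    return N4(x, y) + ND(x, y)

def inside(coords, M, N):
    """Keep only the neighbor locations that fall inside an M x N image."""
    return [(i, j) for (i, j) in coords if 0 <= i < M and 0 <= j < N]

print(inside(N8(0, 0), 4, 4))   # a corner pixel keeps only 3 of its 8 neighbors
```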

Digital Image Processing

Week 1

Adjacency Connectivity Regions Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image, V is a subset of {0, 1} (e.g. V = {1} when adjacency of pixels with value 1 is considered)

- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, …, 255}

We consider 3 types of adjacency:

(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if $q \in N_4(p)$

(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if $q \in N_8(p)$

(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if

$q \in N_4(p)$, or

$q \in N_D(p)$ and the set $N_4(p) \cap N_4(q)$ has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency. It is introduced to eliminate the ambiguities that often arise when 8-adjacency is used. Consider the example:

Digital Image Processing

Week 1

binary image, V = {1}; the same 3×3 arrangement is shown three times in the original figure (pixel values, its 8-adjacency paths, its m-adjacency paths):

0 1 1
0 1 0
0 0 1

The three pixels at the top (first line) in the above example show multiple (ambiguous) 8-adjacency, as indicated by the dashed lines. This ambiguity is removed by using m-adjacency.

Digital Image Processing

Week 1

A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

$(x_0, y_0), (x_1, y_1), \ldots, (x_n, y_n)$

where $(x_0, y_0) = (x, y)$, $(x_n, y_n) = (s, t)$, and pixels $(x_{i-1}, y_{i-1})$ and $(x_i, y_i)$ are adjacent for $i = 1, 2, \ldots, n$.

The length of the path is n. If $(x_0, y_0) = (x_n, y_n)$, the path is closed.

Depending on the type of adjacency considered the paths are 4- 8- or m-paths

Let S denote a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting only of pixels from S.

S is a connected set if there is a path in S between any 2 pixels in S.

Let R be a subset of pixels in an image. R is a region of the image if R is a connected set.

Two regions $R_1$ and $R_2$ are said to be adjacent if $R_1 \cup R_2$ forms a connected set. Regions that are not adjacent are said to be disjoint. When referring to regions, only 4- and 8-adjacency are considered.

Digital Image Processing

Week 1

Suppose that an image contains K disjoint regions $R_k$, $k = 1, \ldots, K$, none of which touches the image border. Let $R_u = \bigcup_{k=1}^{K} R_k$ and let $(R_u)^c$ denote its complement.

We call all the points in $R_u$ the foreground of the image, and the points in $(R_u)^c$ the background of the image.

The boundary (border or contour) of a region R is the set of points that are adjacent to points in the complement of R, $(R)^c$. The border of an image is the set of pixels in the region that have at least one background neighbor. This definition is referred to as the inner border, to distinguish it from the notion of outer border, which is the corresponding border in the background.

Digital Image Processing

Week 1

Distance measures

For pixels p, q and z with coordinates (x, y), (s, t) and (v, w) respectively, D is a distance function or metric if:

(a) $D(p, q) \ge 0$, with $D(p, q) = 0$ iff $p = q$

(b) $D(p, q) = D(q, p)$

(c) $D(p, z) \le D(p, q) + D(q, z)$

The Euclidean distance between p and q is defined as

$D_e(p, q) = \left[ (x - s)^2 + (y - t)^2 \right]^{1/2}$

The pixels q for which $D_e(p, q) \le r$ are the points contained in a disk of radius r centered at (x, y).

Digital Image Processing

Week 1

The D4 distance (also called city-block distance) between p and q is defined as

$D_4(p, q) = |x - s| + |y - t|$

The pixels q for which $D_4(p, q) \le r$ form a diamond centered at (x, y). For example, the pixels with $D_4 \le 2$:

    2
  2 1 2
2 1 0 1 2
  2 1 2
    2

The pixels with $D_4 = 1$ are the 4-neighbors of (x, y).

The D8 distance (called the chessboard distance) between p and q is defined as

$D_8(p, q) = \max(|x - s|,\ |y - t|)$

The pixels q for which $D_8(p, q) \le r$ form a square centered at (x, y). For example, the pixels with $D_8 \le 2$:

2 2 2 2 2
2 1 1 1 2
2 1 0 1 2
2 1 1 1 2
2 2 2 2 2

The pixels with $D_8 = 1$ are the 8-neighbors of (x, y).

D4 and D8 distances are independent of any paths that might exist between p and q, because these distances involve only the coordinates of the points.
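The three metrics in a few lines of Python (coordinate pairs only, as the remark above says):

```python
from math import hypot

def D_e(p, q):   # Euclidean distance
    return hypot(p[0] - q[0], p[1] - q[1])

def D_4(p, q):   # city-block distance
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def D_8(p, q):   # chessboard distance
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (2, 1)
print(D_e(p, q), D_4(p, q), D_8(p, q))   # 2.236..., 3, 2
```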

Digital Image Processing

Week 1

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis. Consider the 2×2 images

$\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$ and $\begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix}$

Array product:

$\begin{bmatrix} a_{11} b_{11} & a_{12} b_{12} \\ a_{21} b_{21} & a_{22} b_{22} \end{bmatrix}$

Matrix product:

$\begin{bmatrix} a_{11} b_{11} + a_{12} b_{21} & a_{11} b_{12} + a_{12} b_{22} \\ a_{21} b_{11} + a_{22} b_{21} & a_{21} b_{12} + a_{22} b_{22} \end{bmatrix}$

We assume array operations unless stated otherwise.
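In NumPy this is exactly the distinction between the * and @ operators (a two-line check):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A * B)   # array (element-wise) product: [[ 5 12] [21 32]]
print(A @ B)   # matrix product:               [[19 22] [43 50]]
```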

Digital Image Processing

Week 1

Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether the method is linear or nonlinear. Consider an operator H that produces an output image g from an input image f:

$H[f(x, y)] = g(x, y)$

H is said to be a linear operator if

$H[a f_1(x, y) + b f_2(x, y)] = a\, H[f_1(x, y)] + b\, H[f_2(x, y)]$

for any two images $f_1, f_2$ and any two scalars a, b.

Example of a nonlinear operator: the maximum value of the pixels of an image, $H[f] = \max_{(x,y)} f(x, y)$.

Take

$f_1 = \begin{bmatrix} 0 & 2 \\ 2 & 3 \end{bmatrix}, \quad f_2 = \begin{bmatrix} 6 & 5 \\ 4 & 7 \end{bmatrix}, \quad a = 1, \quad b = -1$

Digital Image Processing

Week 1

$\max(a f_1 + b f_2) = \max \begin{bmatrix} -6 & -3 \\ -2 & -4 \end{bmatrix} = -2$

while

$a \max(f_1) + b \max(f_2) = 1 \cdot 3 + (-1) \cdot 7 = -4 \neq -2$

so the max operator is nonlinear.
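The same computation in NumPy (a numerical confirmation of the counterexample above):

```python
import numpy as np

f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 5], [4, 7]])
a, b = 1, -1

lhs = np.max(a * f1 + b * f2)            # max of the combined image: -2
rhs = a * np.max(f1) + b * np.max(f2)    # 1*3 + (-1)*7 = -4
print(lhs, rhs, lhs == rhs)              # -2 -4 False -> max is nonlinear
```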

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise: $g(x, y) = f(x, y) + \eta(x, y)$, where f(x, y) is the noiseless image and η(x, y) is the noise, uncorrelated and with zero average value.

For a random variable z with mean m, $E[(z - m)^2]$ is the variance ($E(\cdot)$ is the expected value). The covariance of two random variables $z_1$ and $z_2$ is defined as $E[(z_1 - m_1)(z_2 - m_2)]$. The two random variables are uncorrelated when their covariance is 0.

Digital Image Processing

Week 1

Objective: reduce noise by averaging a set of noisy images $g_i(x, y)$ (a technique frequently used in image enhancement):

$\bar{g}(x, y) = \frac{1}{K} \sum_{i=1}^{K} g_i(x, y)$

If the noise satisfies the properties stated above, we have

$E\{\bar{g}(x, y)\} = f(x, y) \quad \text{and} \quad \sigma^2_{\bar{g}(x, y)} = \frac{1}{K}\, \sigma^2_{\eta(x, y)}$

where $E\{\bar{g}(x, y)\}$ is the expected value of $\bar{g}$, and $\sigma^2_{\bar{g}(x, y)}$ and $\sigma^2_{\eta(x, y)}$ are the variances of $\bar{g}$ and η, respectively. The standard deviation (square root of the variance) at any point in the average image is

$\sigma_{\bar{g}(x, y)} = \frac{1}{\sqrt{K}}\, \sigma_{\eta(x, y)}$

Digital Image Processing

Week 1

As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because $E\{\bar{g}(x, y)\} = f(x, y)$, this means that $\bar{g}(x, y)$ approaches f(x, y) as the number of noisy images used in the averaging process increases.
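A quick simulation of this 1/√K law on synthetic data (a sketch; the constant image and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 100.0)       # noiseless "image"
sigma = 64.0                       # noise standard deviation

for K in (1, 5, 10, 20, 50, 100):
    noisy = [f + rng.normal(0.0, sigma, f.shape) for _ in range(K)]
    g_bar = np.mean(noisy, axis=0)
    # measured residual std should approach sigma / sqrt(K)
    print(K, round(float(np.std(g_bar - f)), 2), round(sigma / np.sqrt(K), 2))
```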

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis. Figure 3(a) shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels. Figures 3(b)-(f) show the result of averaging 5, 10, 20, 50 and 100 images, respectively.

Digital Image Processing

Week 1

Fig. 3 – Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner); results of averaging 5, 10, 20, 50 and 100 noisy images

a b c d e f

Digital Image Processing

Week 1

A frequent application of image subtraction is in the enhancement of differences between

images

(a) (b) (c)

Fig 4 (a) Infrared image of Washington DC area (b) Image obtained from (a) by setting to zero the least

significant bit of each pixel (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between images (a) and (b); black (0) values in Figure 4(c) indicate locations where there is no difference between images (a) and (b).

Digital Image Processing

Week 1

Mask mode radiography: $g(x, y) = f(x, y) - h(x, y)$

h(x y) the mask is an X-ray image of a region of a patientrsquos body captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source The procedure consists of injecting an X-ray contrast medium into the patientrsquos

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x, y) we can find the differences between h and f as enhanced detail.

Because images are captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries in the area being observed.

Digital Image Processing

Week 1

a b c d Fig 5 ndash Angiography ndash subtraction example (a) ndash mask image (b) ndash live image (c) ndash difference between (a) and (b) (d) - image (c) enhanced

Digital Image Processing

Week 1

An important application of image multiplication (and division) is shading correction. Suppose that an imaging sensor produces images in the form $g(x, y) = f(x, y)\, h(x, y)$, where f(x, y) is the "perfect image" and h(x, y) is the shading function.

When the shading function is known:

$f(x, y) = \frac{g(x, y)}{h(x, y)}$

When h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.
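A synthetic sketch of correction by division, assuming the shading function is known (all data below is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 64, 64
_, xx = np.mgrid[0:M, 0:N]

f = rng.uniform(50.0, 200.0, (M, N))   # "perfect" image (synthetic)
h = 0.5 + 0.5 * xx / (N - 1)           # shading function: darker on the left
g = f * h                              # what the sensor delivers: g = f * h

f_hat = g / h                          # shading correction by division
print(np.allclose(f_hat, f))           # True
```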

Digital Image Processing

Week 1

(a) (b) (c)

Fig 6 Shading correction (a) ndash Shaded image of a tungsten filament magnified 130 times (b) - shading pattern (c) corrected image

Digital Image Processing

Week 1

Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1rsquos (white) in the ROI and 0rsquos elsewhere There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

(a) (b) (c)

Fig 7 (a) ndash digital dental X-ray image (b) - ROI mask for teeth with fillings (c) product of (a) and (b)

Digital Image Processing

Week 1

In practice, most images are displayed using 8 bits, so the image values are in the range [0, 255]. For TIFF and JPEG images, conversion to this range is automatic; the conversion depends on the system used. The difference of two images can produce an image with values in the range [-255, 255]; the addition of two images, values in the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure: compute

$f_m = f - \min(f)$

$f_s = K \dfrac{f_m}{\max(f_m)}$

which yields an image $f_s$ with values in the full range [0, K] (K = 255 for 8-bit displays).
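The same rescaling as a small NumPy helper (a sketch; it assumes f is not constant, so max(f_m) > 0):

```python
import numpy as np

def rescale(f: np.ndarray, K: float = 255.0) -> np.ndarray:
    """Map an arbitrary-range image into [0, K] as described above."""
    fm = f - f.min()            # f_m = f - min(f)
    return K * fm / fm.max()    # f_s = K * f_m / max(f_m)

diff = np.array([[-255.0, 0.0], [100.0, 255.0]])   # e.g. an image difference
print(rescale(diff))            # values now span [0, 255]
```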

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels: $s = T(z)$,

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Digital Image Processing

Week 1

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m × n centered at (x, y), we can assign the new value of intensity by computing the average value of the pixels in $S_{xy}$:

$g(x, y) = \frac{1}{mn} \sum_{(r, c) \in S_{xy}} f(r, c)$

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image
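A plain-loop sketch of this local averaging (zero padding at the borders is one common choice, assumed here):

```python
import numpy as np

def local_mean(f: np.ndarray, m: int, n: int) -> np.ndarray:
    """Replace each pixel by the average of its m x n neighborhood S_xy."""
    pm, pn = m // 2, n // 2
    padded = np.pad(f.astype(float), ((pm, pm), (pn, pn)))  # zero padding
    g = np.empty(f.shape, dtype=float)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            g[x, y] = padded[x:x + m, y:y + n].mean()
    return g

img = np.eye(5) * 9.0
print(local_mean(img, 3, 3).round(1))   # the diagonal is blurred into its neighbors
```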

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules)

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation: $(x, y) = T[(v, w)]$

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image

Digital Image Processing

Week 1

$T[(v, w)] = (v/2,\ w/2)$ shrinks the original image to half its size in both spatial directions.

Affine transform:

$[x \;\; y \;\; 1] = [v \;\; w \;\; 1]\, T = [v \;\; w \;\; 1] \begin{bmatrix} t_{11} & t_{12} & 0 \\ t_{21} & t_{22} & 0 \\ t_{31} & t_{32} & 1 \end{bmatrix}$   (AT)

that is, $x = t_{11} v + t_{21} w + t_{31}$ and $y = t_{12} v + t_{22} w + t_{32}$.

This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3×3 matrix equal to the matrix product of the scaling, rotation, and translation matrices from Table 1.
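Table 1 is not reproduced here; the sketch below assumes the usual textbook forms of the scaling, rotation and translation matrices in the row-vector convention of (AT) (the rotation sign convention varies between sources):

```python
import numpy as np

def scale(cx, cy):
    return np.array([[cx, 0, 0], [0, cy, 0], [0, 0, 1.0]])

def rotate(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1.0]])

def translate(tx, ty):
    return np.array([[1, 0, 0], [0, 1, 0], [tx, ty, 1.0]])

# Resize, rotate, then move: one combined matrix, applied as [x y 1] = [v w 1] T
T = scale(2, 2) @ rotate(np.pi / 2) @ translate(10, 5)
x, y, _ = np.array([3, 4, 1.0]) @ T
print(round(x, 6), round(y, 6))   # the point (3, 4) after all three operations
```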

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

Digital Image Processing

Week 1

inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image: $(v, w) = T^{-1}[(x, y)]$

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in numerous commercial implementations of spatial transformations (e.g. MATLAB).
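A sketch of inverse mapping with nearest-neighbor intensity assignment (loop-based for clarity; T is the 3×3 matrix of (AT)):

```python
import numpy as np

def warp_inverse(src: np.ndarray, T: np.ndarray, out_shape) -> np.ndarray:
    """For each output pixel (x, y), look up (v, w) = T^{-1}[(x, y)] in the input."""
    Tinv = np.linalg.inv(T)
    out = np.zeros(out_shape, dtype=src.dtype)
    for x in range(out_shape[0]):
        for y in range(out_shape[1]):
            v, w, _ = np.array([x, y, 1.0]) @ Tinv
            vi, wi = int(round(v)), int(round(w))
            if 0 <= vi < src.shape[0] and 0 <= wi < src.shape[1]:
                out[x, y] = src[vi, wi]          # nearest input pixel
    return out

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
T = np.array([[2, 0, 0], [0, 2, 0], [0, 0, 1.0]])   # zoom by a factor of 2
print(warp_inverse(img, T, (8, 8)))
```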

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more images taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points, both on the input image and on the reference image. A simple model based on a bilinear approximation is given by

$x = c_1 v + c_2 w + c_3 v w + c_4$

$y = c_5 v + c_6 w + c_7 v w + c_8$

where (v, w) and (x, y) are the coordinates of the tie points (we get an 8×8 linear system for the $c_i$).
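A sketch of setting up and solving that 8×8 system (the tie-point coordinates below are made up for illustration):

```python
import numpy as np

# Hypothetical tie points: (v, w) in the input image, (x, y) in the reference image.
vw = np.array([[10, 10], [10, 50], [50, 10], [50, 50]], dtype=float)
xy = np.array([[12, 11], [13, 52], [53, 12], [55, 54]], dtype=float)

# Rows for x = c1*v + c2*w + c3*v*w + c4 and y = c5*v + c6*w + c7*v*w + c8.
A = np.zeros((8, 8))
b = np.zeros(8)
for k, ((v, w), (x, y)) in enumerate(zip(vw, xy)):
    A[2 * k, 0:4] = [v, w, v * w, 1.0]; b[2 * k] = x
    A[2 * k + 1, 4:8] = [v, w, v * w, 1.0]; b[2 * k + 1] = y

c = np.linalg.solve(A, b)   # c1..c8
print(c.round(4))
```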

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration, an approach used frequently is to select a larger number of tie points and, using this new set of tie points, subdivide the image into rectangular regions marked by groups of 4 tie points. On the subregions marked by 4 tie points we apply the transformation model described above.

The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometrical distortion.

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

$z_i$ = the values of all possible intensities in an M×N digital image, i = 0, 1, …, L-1

$p(z_k)$ = the probability that the intensity level $z_k$ occurs in the given image:

$p(z_k) = \frac{n_k}{MN}$

where $n_k$ is the number of times that intensity $z_k$ occurs in the image (MN is the total number of pixels in the image). Note that

$\sum_{k=0}^{L-1} p(z_k) = 1$

The mean (average) intensity of an image is given by

$m = \sum_{k=0}^{L-1} z_k\, p(z_k)$

Digital Image Processing

Week 1

The variance of the intensities is

$\sigma^2 = \sum_{k=0}^{L-1} (z_k - m)^2\, p(z_k)$

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

$\mu_n(z) = \sum_{k=0}^{L-1} (z_k - m)^n\, p(z_k)$

(note that $\mu_0(z) = 1$, $\mu_1(z) = 0$, $\mu_2(z) = \sigma^2$)

$\mu_3(z) > 0$: the intensities are biased to values higher than the mean

$\mu_3(z) < 0$: the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

$\mu_3(z) = 0$: the intensities are distributed approximately equally on both sides of the mean

Fig. 1 – (a) Low contrast, (b) medium contrast, (c) high contrast

Figure 1(a) – standard deviation 14.3 (variance = 204.5)

Figure 1(b) – standard deviation 31.6 (variance = 998.6)

Figure 1(c) – standard deviation 49.2 (variance = 2420.6)
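These statistics come straight from the normalized histogram; a short sketch (random test image, names illustrative):

```python
import numpy as np

def moments(img: np.ndarray, L: int = 256):
    """Mean, variance and third central moment from the intensity histogram."""
    z = np.arange(L)
    nk = np.bincount(img.ravel(), minlength=L)
    p = nk / img.size                    # p(z_k) = n_k / (M*N)
    m = np.sum(z * p)                    # mean intensity
    mu2 = np.sum((z - m) ** 2 * p)       # variance: contrast measure
    mu3 = np.sum((z - m) ** 3 * p)       # third moment: skew about the mean
    return m, mu2, mu3

img = np.random.default_rng(2).integers(0, 256, (100, 100), dtype=np.uint8)
print(moments(img))
```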

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

$g(x, y) = T[f(x, y)]$

f(x, y) – input image, g(x, y) – output image, T – an operator on f defined over a neighborhood of (x, y)

- the neighborhood of the point (x, y), $S_{xy}$, usually is rectangular, centered on (x, y), and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering: the operator T (the neighborhood and the operation applied on it) is called a spatial filter (spatial mask, kernel, template, or window)

When $S_{xy}$ reduces to the single point (x, y), T becomes an intensity (gray-level or mapping) transformation function:

$s = T(r)$

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig. 2 – Intensity transformation functions: left – contrast stretching; right – thresholding function

Digital Image Processing

Week 1

Figure 2, right: T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L-1] is obtained using the function $s = T(r) = L - 1 - r$

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations: $s = T(r) = c \log(1 + r)$, c – constant, $r \ge 0$

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) – intensity values in the range 0 to $1.5 \times 10^6$

Figure 4(b) – log transformation of Figure 4(a) with c = 1; range 0 to 6.2

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

$s = T(r) = c\, r^{\gamma}$, with c, γ positive constants (sometimes written as $s = c (r + \varepsilon)^{\gamma}$)

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. The curves with γ > 1 have the opposite effect of those generated with γ < 1.

c = γ = 1: identity transformation.

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction
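A sketch of the gamma transform on intensities normalized to [0, 1] (the normalization choice is ours, not from the slides):

```python
import numpy as np

def gamma_transform(img: np.ndarray, c: float = 1.0, gamma: float = 1.0) -> np.ndarray:
    """s = c * r**gamma on an 8-bit image, computed in normalized [0, 1] range."""
    r = img.astype(float) / 255.0
    s = c * r ** gamma
    return np.clip(255.0 * s, 0, 255).astype(np.uint8)

img = np.arange(256, dtype=np.uint8).reshape(16, 16)
brighter = gamma_transform(img, gamma=0.4)   # gamma < 1: expands dark values
darker = gamma_transform(img, gamma=3.0)     # gamma > 1: compresses dark values
```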

Digital Image Processing

Week 1

a b c d (a) – aerial image; (b)-(d) – results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0 and 5.0, respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

$T(r) = \begin{cases} \dfrac{s_1}{r_1}\, r, & r \in [0, r_1] \\[2mm] s_1 + \dfrac{s_2 - s_1}{r_2 - r_1}\, (r - r_1), & r \in [r_1, r_2] \\[2mm] s_2 + \dfrac{L - 1 - s_2}{L - 1 - r_2}\, (r - r_2), & r \in [r_2, L - 1] \end{cases}$

Digital Image Processing

Week 1

$r_1 = s_1$, $r_2 = s_2$: identity transformation (no change)

$r_1 = r_2$, $s_1 = 0$, $s_2 = L - 1$: thresholding function

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c) – contrast stretching, obtained by setting the parameters $(r_1, s_1) = (r_{\min}, 0)$ and $(r_2, s_2) = (r_{\max}, L-1)$, where $r_{\min}$ and $r_{\max}$ denote the minimum and maximum gray levels in the image, respectively. Thus, the transformation function stretched the levels linearly from their original range to the full range [0, L-1].

Figure 5(d) – the thresholding function was used, with $(r_1, s_1) = (m, 0)$ and $(r_2, s_2) = (m, L-1)$, where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times
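Both settings are easy to reproduce with a piecewise-linear mapping; a sketch (np.interp builds the three linear segments; the test image is synthetic):

```python
import numpy as np

def contrast_stretch(img, r1, s1, r2, s2, L=256):
    """Piecewise-linear transform through (0, 0), (r1, s1), (r2, s2), (L-1, L-1)."""
    s = np.interp(img.astype(float), [0, r1, r2, L - 1], [0, s1, s2, L - 1])
    return s.astype(np.uint8)

img = np.random.default_rng(3).integers(90, 160, (64, 64), dtype=np.uint8)

# Fig. 5(c) setting: stretch [r_min, r_max] linearly to the full range [0, 255]
stretched = contrast_stretch(img, img.min(), 0, img.max(), 255)

# Fig. 5(d) limiting case r1 = r2 = m reduces to a thresholding function:
binary = np.where(img > img.mean(), 255, 0).astype(np.uint8)
```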

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities (Figure 3.11(a));

2. brighten (or darken) the desired range of intensities but leave unchanged all other intensities in the image (Figure 3.11(b)); see the sketch after this list.
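Both approaches in NumPy (a sketch; A, B and the highlight value are parameters to choose):

```python
import numpy as np

def slice_binary(img, A, B):
    """Approach 1: white inside [A, B], black everywhere else."""
    return np.where((img >= A) & (img <= B), 255, 0).astype(np.uint8)

def slice_preserve(img, A, B, value=255):
    """Approach 2: highlight [A, B], leave all other intensities unchanged."""
    out = img.copy()
    out[(img >= A) & (img <= B)] = value
    return out
```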

Digital Image Processing

Week 1

Figure 6 (left) – aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the scale of intensities. This type of enhancement produces a binary image,

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 6 (right), the second technique was used: a band of intensities in the mid-gray range around the mean intensity was set to black; the other intensities remain unchanged.

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2.

This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit, plane 8 – the highest-order bit).

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all the intensities between 0 and 127 to 0, and all levels between 128 and 255 to 1.

The bit-plane slicing technique is useful for analyzing the relative importance of each bit in the image, which helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
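A sketch of extracting bit planes, confirming that plane 8 equals the 128-threshold image described above:

```python
import numpy as np

def bit_plane(img: np.ndarray, k: int) -> np.ndarray:
    """Bit plane k of an 8-bit image (k = 1: lowest-order bit, k = 8: highest)."""
    return (img >> (k - 1)) & 1

img = np.random.default_rng(4).integers(0, 256, (4, 4), dtype=np.uint8)
plane8 = bit_plane(img, 8)
print(np.array_equal(plane8, (img >= 128).astype(np.uint8)))   # True
```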


This technique has the tendency to produce undesirable effects like severe distortion of

straight edges

Digital Image Processing

Week 1

Bilinear interpolation ndash assign for the new (x y) location the following intensity ( )v x y a x b y c x y d

where the four coefficients are determined from the 4 equations in 4 unknowns that can

be written using the 4 nearest neighbors of point (x y)

Bilinear interpolation gives much better results than nearest neighbor interpolation with a

modest increase in computational effort

Bicubic interpolation ndash assign for the new (x y) location an intensity that involves the 16

nearest neighbors of the point 3 3

0 0

( ) i ji j

i jv x y c x y

The coefficients cij are obtained solving a 16x16 linear system

intensity levels of the 16 nearest neighbors of 3 3

0 0

( )i ji j

i jc x y x y

Digital Image Processing

Week 1

Generally bicubic interpolation does a better job of preserving fine detail than the

bilinear technique Bicubic interpolation is the standard used in commercial image editing

programs such as Adobe Photoshop and Corel Photopaint

Figure 2 (a) is the same as Fig 1 (d) which was obtained by reducing the resolution of

the 1250 dpi in Fig 1(a) to 72 dpi (the size shrank from 3692 times 2812 to 213 times 162) and

then zooming the reduced image back to its original size To generate Fig 1(d) nearest

neighbor interpolation was used (both for shrinking and zooming)

Figures 2(b) and (c) were generated using the same steps but using bilinear and bicubic

interpolation respectively Figures 2(d)+(e)+(f) were obtained by reducing the resolution

from 1250 dpi to 150 dpi (instead of 72 dpi)

Digital Image Processing

Week 1

Fig 2 ndash Interpolation examples for zooming and shrinking (nearest neighbor linear bicubic)

Digital Image Processing

Week 1

Neighbors of a Pixel

A pixel p at coordinates (x y) has 4 horizontal and vertical neighbors horizontal vertical( 1 ) ( 1 ) ( 1) ( 1)x y x y x y x y

This set of pixels called the 4-neighbors of p denoted by N4 (p)

The 4 diagonal neighbors of p have coordinates ( 1 1) ( 1 1) ( 1 1) ( 1 1)x y x y x y x y

and are denoted ND(p)

The horizontal vertical and diagonal neighbors are called the 8-neighbors of p denoted

N8 (p)

If (xy) is on the border of the image some of the neighbor locations in ND(p) and N8(p)

fall outside the image

Digital Image Processing

Week 1

Adjacency Connectivity Regions Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image V 01 (V=0 V=1)

- in a gray-scale image with 256 possible gray-levels V can be any subset of 0255

We consider 3 types of adjacency

(a) 4-adjacency two pixels p and q with values from V are 4-adjacent if 4( )q N p

(b) 8-adjacency two pixels p and q with values from V are 8-adjacent if 8( )q N p

(c) m-adjacency (mixed adjacency) two pixels p and q with values from V are

m-adjacent if

4( )q N p or

( )Dq N p and the set 4 4( ) ( )N p N q has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency It is introduced to eliminate the

ambiguities that often arise when 8-adjacency is used Consider the example

Digital Image Processing

Week 1

binary image

0 1 1 0 1 1 0 1 1

1 0 1 0 0 1 0 0 1 0

0 0 1 0 0 1 0 0 1

V

The three pixels at the top (first line) in the above example show multiple (ambiguous)

8-adjacency as indicated by the dashed lines This ambiguity is removed by using

m-adjacency

Digital Image Processing

Week 1

A (digital) path (or curve) from pixel p with coordinates (xy) to q with coordinates (st)

is a sequence of distinct pixels with coordinates

and are adjacent 0 0 1 1

1 1

( ) ( ) ( ) ( ) ( )( ) ( ) 12

n n

i i i i

x y x y x y x y s tx y x y i n

The length of the path is n If 0 0( ) ( )n nx y x y the path is closed

Depending on the type of adjacency considered the paths are 4- 8- or m-paths

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image R is a region of the image if R is a connected set

Two regions R1 and R2 are said to be adjacent if 1 2R R form a connected set Regions

that are not adjacent are said to be disjoint When referring to regions only 4- and

8-adjacency are considered

Digital Image Processing

Week 1

Suppose that an image contains K disjoint regions 1 kR k K none of which

touches the image border

the complement of 1

( )K

cu k u u

k

R R R R

We call all the points in Ru the foreground of the image and the points in ( )cuR the

background of the image

The boundary (border or contour) of a region R is the set of points that are adjacent to

points in the complement of R (R)c The border of an image is the set of pixels in the

region that have at least one background neighbor This definition is referred to as the

inner border to distinguish it from the notion of outer border which is the corresponding

border in the background

Digital Image Processing

Week 1

Distance measures

For pixels p q and z with coordinates (xy) (st) and (vw) respectively D is a distance

function or metric if

(a) D(p q) ge 0 D(p q) = 0 iff p=q

(b) D(p q) = D(q p)

(c) D(p z) le D(p q) + D(q z)

The Euclidean distance between p and q is defined as 1

2 2 2 22( ) ( ) ( ) ( ) ( )eD p q x s y t x s y t

The pixels q for which ( )eD p q r are the points contained in a disk of radius r

centered at (x y)

Digital Image Processing

Week 1

The D4 distance (also called city-block distance) between p and q is defined as

4( ) | | | |D p q x s y t

The pixels q for which 4( )D p q r form a diamond centered at (xy)

4

22 1 2

2 2 1 0 1 22 1 2

2

D

The pixels with D4 = 1 are the 4-neighbors of (x y)

The D8 distance (called the chessboard distance) between p and q is defined as

8( ) max| | | |D p q x s y t

The pixels q for which 8( )D p q r form a square centered at (x y)

Digital Image Processing

Week 1

8

2 2 2 2 22 1 1 1 2

2 2 1 0 1 22 1 1 1 22 2 2 2 2

D

The pixels with D8 = 1 are the 8-neighbors of (x y)

D4 and D8 distances are independent of any paths that might exist between p and q

because these distances involve only the coordinates of the point

Digital Image Processing

Week 1

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

11 12 11 12

21 22 21 22

a a b ba a b b

Array product

11 12 11 12 11 11 12 12

21 22 21 22 21 21 22 21

a a b b a b a ba a b b a b a b

Matrix product

11 12 11 12 11 11 12 21 11 12 12 21

21 22 21 22 21 11 22 21 21 12 22 22

a a b b a b a b a b a ba a b b a b a b a b a b

We assume array operations unless stated otherwise

Digital Image Processing

Week 1

Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether it is

linear or nonlinear

( ) ( )H f x y g x y

H is said to be a linear operator if

images1 2 1 2

1 2

( ) ( ) ( ) ( )

H a f x y b f x y a H f x y b H f x y

a b f f

Example of nonlinear operator

the maximum value of the pixels of image max ( )H f f x y f

1 2

0 2 6 5 1 1

2 3 4 7f f a b

Digital Image Processing

Week 1

1 2

0 2 6 5 6 3max max 1 ( 1) max 2

2 3 4 7 2 4a f b f

0 2 6 51 max ( 1) max 3 ( 1)7 4

2 3 4 7

Arithmetic Operations in Image Processing

Let g(xy) denote a corrupted image formed by the addition of noise ( ) ( ) ( )g x y f x y x y

f(xy) ndash the noiseless image η(xy) the noise uncorrelated and has 0 average value

For a random variable z with mean m E[(z-m)2] is the variance (E( ∙ ) is the expected

value) The covariance of two random variables z1 and z2 is defined as E[(z1-m1) (z2-m2)]

The two random variables are uncorrelated when their covariance is 0

Digital Image Processing

Week 1

Objective reduce noise by adding a set of noisy images ( )ig x y (technique frequently

used in image enhancement)

1

1( ) ( )K

ii

g x y g x yK

If the noise satisfies the properties stated above we have

2 2( ) ( )

1( ) ( ) g x y x yE g x y f x yK

( ( ))E g x y is the expected value of g and and 2 2( ) ( )g x y x y are the variances of

and g respectively The standard deviation (square root of the variance) at any point in

the average image is

( ) ( )1

g x y x yK

Digital Image Processing

Week 1

As K increases the variability (as measured by the variance or the standard deviation) of

the pixel values at each location (x y) decreases Because ( ) ( )E g x y f x y this

means that ( )g x y approaches f(x y) as the number of noisy images used in the

averaging process increases

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis Figure 226(a) shows an 8-bit image in which corruption

was simulated by adding to it Gaussian noise with zero mean and a standard deviation of

64 intensity levels Figures 226(b)-(f) show the result of averaging 5 10 20 50 and 100

images respectively

Digital Image Processing

Week 1

Fig 3 Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner) Results of averaging 5 10 20 50

100 noisy images

a b c d e f

Digital Image Processing

Week 1

A frequent application of image subtraction is in the enhancement of differences between

images

(a) (b) (c)

Fig 4 (a) Infrared image of Washington DC area (b) Image obtained from (a) by setting to zero the least

significant bit of each pixel (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a) The two images seem almost the same Figure 4(c) is the difference between

images (a) and (b) Black (0) values in Figure (c) indicate locations where there is no

difference between images (a) and (b)

Digital Image Processing

Week 1

Mask mode radiography ( ) ( ) ( )g x y f x y h x y

h(x y) the mask is an X-ray image of a region of a patientrsquos body captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source The procedure consists of injecting an X-ray contrast medium into the patientrsquos

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed

Digital Image Processing

Week 1

a b c d Fig 5 ndash Angiography ndash subtraction example (a) ndash mask image (b) ndash live image (c) ndash difference between (a) and (b) (d) - image (c) enhanced

Digital Image Processing

Week 1

An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form ( ) ( ) ( )g x y f x y h x y

f(x y) ndash the ldquoperfect imagerdquo h(x y) ndash the shading function

When the shading function is known

( )( )( )

g x yf x yh x y

h(x y) is unknown but we have access to the imaging system we can obtain an

approximation to the shading function by imaging a target of constant intensity When the

sensor is not available often the shading pattern can be estimated from the image

Digital Image Processing

Week 1

(a) (b) (c)

Fig 6 Shading correction (a) ndash Shaded image of a tungsten filament magnified 130 times (b) - shading pattern (c) corrected image

Digital Image Processing

Week 1

Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1rsquos (white) in the ROI and 0rsquos elsewhere There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

(a) (b) (c)

Fig 7 (a) ndash digital dental X-ray image (b) - ROI mask for teeth with fillings (c) product of (a) and (b)

Digital Image Processing

Week 1

In practice most images are displayed using 8 bits the image values are in the range [0255] TIFF JPEG images ndash conversion to this range is automatic The conversion depends on the system used Difference of two images can produce image with values in the range [-255255] Addition of two images ndash range [0510] Many software packages simply set the negative values to 0 and set to 255 all values greater than 255 A more appropriate procedure compute

min( )mf f f

0 ( 255)max( )

ms

m

ff K K K

f

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Digital Image Processing

Week 1

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

( )

1( ) ( )xyr c S

g x y f r cm n

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation ( ) [( )]x y T v w

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image

Digital Image Processing

Week 1

[( )] ( )2 2v wT v w shrinks the original image half its size in both spatial directions

Affine transform

11 1211 21 31

21 2212 22 33

31 32

0[ 1] [ 1] [ 1] 0

1

t tx t v t w t

x y v w T v w t ty t v t w t

t t

(AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

Digital Image Processing

Week 1

inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image (v w) 1( ) ( )v w T x y

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 69: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Continuous image projected onto a sensor array Result of image sampling and quantization

Digital Image Processing

Week 1

Representing Digital Images (xy) x = 01hellipM-1 y = 01hellipN-1 ndash spatial variables or spatial coordinates

(00) (01) (0 1)(10) (11) (1 1)

( )

( 10) ( 11) ( 1 1)

f f f Nf f f N

f x y

f M f M f M N

image element pixel

00 01 0 1

10 11 1 1

10 11 1 1

( ) ( )

N

i jN M N

i j

M M M N

a a aa f x i y j f i ja a a

Aa

a a a

f(00) ndash the upper left corner of the image

Digital Image Processing

Week 1

M N ge 0 L=2k

[0 1]i j i ja a L

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system Upper limit ndash determined by saturation lower limit - noise

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Number of bits required to store a digitized image

for 2 b M N k M N b N k

When an image can have 2k intensity levels the image is referred as a k-bit image 256 discrete intensity values ndash 8-bit image

Digital Image Processing

Week 1

Spatial and Intensity Resolution Spatial resolution ndash the smallest discernible detail in an image

Measures line pairs per unit distance dots (pixels) per unit distance

Image resolution = the largest number of discernible line pairs per unit distance

(eg 100 line pairs per mm)

Dots per unit distance are commonly used in printing and publishing

In US the measure is expressed in dots per inch (dpi)

(newspapers are printed with 75 dpi glossy brochures at 175 dpi)

Intensity resolution ndash the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations

L=2k ndash most common k = 8

Intensity resolution in practice is given by k (number of bits used to quantize intensity)

Digital Image Processing

Week 1

Fig1 Reducing spatial resolution 1250 dpi(upper left) 300 dpi (upper right)

150 dpi (lower left) 72 dpi (lower right)

Digital Image Processing

Week 1

Reducing the number of gray levels 256 128 64 32

Digital Image Processing

Week 1

Reducing the number of gray levels 16 8 4 2

Digital Image Processing

Week 1

Image Interpolation - used in zooming shrinking rotating and geometric corrections

Shrinking zooming ndash image resizing ndash image resampling methods

Interpolation is the process of using known data to estimate values at unknown locations

Suppose we have an image of size 500 times 500 pixels that has to be enlarged 15 times to

750times750 pixels One way to do this is to create an imaginary 750 times 750 grid with the

same spacing as the original and then shrink it so that it fits exactly over the original

image The pixel spacing in the 750 times 750 grid will be less than in the original image

Problem assignment of intensity-level in the new 750 times 750 grid

Nearest neighbor interpolation assign for every point in the new grid (750 times 750) the

intensity of the closest pixel (nearest neighbor) from the oldoriginal grid (500 times 500)

This technique has the tendency to produce undesirable effects like severe distortion of

straight edges

Digital Image Processing

Week 1

Bilinear interpolation ndash assign for the new (x y) location the following intensity ( )v x y a x b y c x y d

where the four coefficients are determined from the 4 equations in 4 unknowns that can

be written using the 4 nearest neighbors of point (x y)

Bilinear interpolation gives much better results than nearest neighbor interpolation with a

modest increase in computational effort

Bicubic interpolation ndash assign for the new (x y) location an intensity that involves the 16

nearest neighbors of the point 3 3

0 0

( ) i ji j

i jv x y c x y

The coefficients cij are obtained solving a 16x16 linear system

intensity levels of the 16 nearest neighbors of 3 3

0 0

( )i ji j

i jc x y x y

Digital Image Processing

Week 1

Generally bicubic interpolation does a better job of preserving fine detail than the

bilinear technique Bicubic interpolation is the standard used in commercial image editing

programs such as Adobe Photoshop and Corel Photopaint

Figure 2 (a) is the same as Fig 1 (d) which was obtained by reducing the resolution of

the 1250 dpi in Fig 1(a) to 72 dpi (the size shrank from 3692 times 2812 to 213 times 162) and

then zooming the reduced image back to its original size To generate Fig 1(d) nearest

neighbor interpolation was used (both for shrinking and zooming)

Figures 2(b) and (c) were generated using the same steps but using bilinear and bicubic

interpolation respectively Figures 2(d)+(e)+(f) were obtained by reducing the resolution

from 1250 dpi to 150 dpi (instead of 72 dpi)

Digital Image Processing

Week 1

Fig 2 ndash Interpolation examples for zooming and shrinking (nearest neighbor linear bicubic)

Digital Image Processing

Week 1

Neighbors of a Pixel

A pixel p at coordinates (x y) has 4 horizontal and vertical neighbors horizontal vertical( 1 ) ( 1 ) ( 1) ( 1)x y x y x y x y

This set of pixels called the 4-neighbors of p denoted by N4 (p)

The 4 diagonal neighbors of p have coordinates ( 1 1) ( 1 1) ( 1 1) ( 1 1)x y x y x y x y

and are denoted ND(p)

The horizontal vertical and diagonal neighbors are called the 8-neighbors of p denoted

N8 (p)

If (xy) is on the border of the image some of the neighbor locations in ND(p) and N8(p)

fall outside the image

Digital Image Processing

Week 1

Adjacency Connectivity Regions Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image V 01 (V=0 V=1)

- in a gray-scale image with 256 possible gray-levels V can be any subset of 0255

We consider 3 types of adjacency

(a) 4-adjacency two pixels p and q with values from V are 4-adjacent if 4( )q N p

(b) 8-adjacency two pixels p and q with values from V are 8-adjacent if 8( )q N p

(c) m-adjacency (mixed adjacency) two pixels p and q with values from V are

m-adjacent if

4( )q N p or

( )Dq N p and the set 4 4( ) ( )N p N q has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency It is introduced to eliminate the

ambiguities that often arise when 8-adjacency is used Consider the example

Digital Image Processing

Week 1

binary image

0 1 1 0 1 1 0 1 1

1 0 1 0 0 1 0 0 1 0

0 0 1 0 0 1 0 0 1

V

The three pixels at the top (first line) in the above example show multiple (ambiguous)

8-adjacency as indicated by the dashed lines This ambiguity is removed by using

m-adjacency

Digital Image Processing

Week 1

A (digital) path (or curve) from pixel p with coordinates (xy) to q with coordinates (st)

is a sequence of distinct pixels with coordinates

and are adjacent 0 0 1 1

1 1

( ) ( ) ( ) ( ) ( )( ) ( ) 12

n n

i i i i

x y x y x y x y s tx y x y i n

The length of the path is n If 0 0( ) ( )n nx y x y the path is closed

Depending on the type of adjacency considered the paths are 4- 8- or m-paths

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image R is a region of the image if R is a connected set

Two regions R1 and R2 are said to be adjacent if 1 2R R form a connected set Regions

that are not adjacent are said to be disjoint When referring to regions only 4- and

8-adjacency are considered

Digital Image Processing

Week 1

Suppose that an image contains K disjoint regions 1 kR k K none of which

touches the image border

the complement of 1

( )K

cu k u u

k

R R R R

We call all the points in Ru the foreground of the image and the points in ( )cuR the

background of the image

The boundary (border or contour) of a region R is the set of points that are adjacent to

points in the complement of R (R)c The border of an image is the set of pixels in the

region that have at least one background neighbor This definition is referred to as the

inner border to distinguish it from the notion of outer border which is the corresponding

border in the background

Digital Image Processing

Week 1

Distance measures

For pixels p q and z with coordinates (xy) (st) and (vw) respectively D is a distance

function or metric if

(a) D(p q) ge 0 D(p q) = 0 iff p=q

(b) D(p q) = D(q p)

(c) D(p z) le D(p q) + D(q z)

The Euclidean distance between p and q is defined as 1

2 2 2 22( ) ( ) ( ) ( ) ( )eD p q x s y t x s y t

The pixels q for which ( )eD p q r are the points contained in a disk of radius r

centered at (x y)

Digital Image Processing

Week 1

The D4 distance (also called city-block distance) between p and q is defined as

4( ) | | | |D p q x s y t

The pixels q for which 4( )D p q r form a diamond centered at (xy)

4

22 1 2

2 2 1 0 1 22 1 2

2

D

The pixels with D4 = 1 are the 4-neighbors of (x y)

The D8 distance (called the chessboard distance) between p and q is defined as

8( ) max| | | |D p q x s y t

The pixels q for which 8( )D p q r form a square centered at (x y)

Digital Image Processing

Week 1

8

2 2 2 2 22 1 1 1 2

2 2 1 0 1 22 1 1 1 22 2 2 2 2

D

The pixels with D8 = 1 are the 8-neighbors of (x y)

D4 and D8 distances are independent of any paths that might exist between p and q

because these distances involve only the coordinates of the point

Digital Image Processing

Week 1

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

11 12 11 12

21 22 21 22

a a b ba a b b

Array product

11 12 11 12 11 11 12 12

21 22 21 22 21 21 22 21

a a b b a b a ba a b b a b a b

Matrix product

11 12 11 12 11 11 12 21 11 12 12 21

21 22 21 22 21 11 22 21 21 12 22 22

a a b b a b a b a b a ba a b b a b a b a b a b

We assume array operations unless stated otherwise

Digital Image Processing

Week 1

Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether it is

linear or nonlinear

( ) ( )H f x y g x y

H is said to be a linear operator if

images1 2 1 2

1 2

( ) ( ) ( ) ( )

H a f x y b f x y a H f x y b H f x y

a b f f

Example of nonlinear operator

the maximum value of the pixels of image max ( )H f f x y f

1 2

0 2 6 5 1 1

2 3 4 7f f a b

Digital Image Processing

Week 1

1 2

0 2 6 5 6 3max max 1 ( 1) max 2

2 3 4 7 2 4a f b f

0 2 6 51 max ( 1) max 3 ( 1)7 4

2 3 4 7

Arithmetic Operations in Image Processing

Let g(xy) denote a corrupted image formed by the addition of noise ( ) ( ) ( )g x y f x y x y

f(xy) ndash the noiseless image η(xy) the noise uncorrelated and has 0 average value

For a random variable z with mean m E[(z-m)2] is the variance (E( ∙ ) is the expected

value) The covariance of two random variables z1 and z2 is defined as E[(z1-m1) (z2-m2)]

The two random variables are uncorrelated when their covariance is 0

Digital Image Processing

Week 1

Objective reduce noise by adding a set of noisy images ( )ig x y (technique frequently

used in image enhancement)

1

1( ) ( )K

ii

g x y g x yK

If the noise satisfies the properties stated above we have

2 2( ) ( )

1( ) ( ) g x y x yE g x y f x yK

( ( ))E g x y is the expected value of g and and 2 2( ) ( )g x y x y are the variances of

and g respectively The standard deviation (square root of the variance) at any point in

the average image is

( ) ( )1

g x y x yK

Digital Image Processing

Week 1

As K increases the variability (as measured by the variance or the standard deviation) of

the pixel values at each location (x y) decreases Because ( ) ( )E g x y f x y this

means that ( )g x y approaches f(x y) as the number of noisy images used in the

averaging process increases

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis Figure 226(a) shows an 8-bit image in which corruption

was simulated by adding to it Gaussian noise with zero mean and a standard deviation of

64 intensity levels Figures 226(b)-(f) show the result of averaging 5 10 20 50 and 100

images respectively

Digital Image Processing

Week 1

Fig 3 Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner) Results of averaging 5 10 20 50

100 noisy images

a b c d e f

Digital Image Processing

Week 1

A frequent application of image subtraction is in the enhancement of differences between

images

(a) (b) (c)

Fig 4 (a) Infrared image of Washington DC area (b) Image obtained from (a) by setting to zero the least

significant bit of each pixel (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a) The two images seem almost the same Figure 4(c) is the difference between

images (a) and (b) Black (0) values in Figure (c) indicate locations where there is no

difference between images (a) and (b)

Digital Image Processing

Week 1

Mask mode radiography ( ) ( ) ( )g x y f x y h x y

h(x y) the mask is an X-ray image of a region of a patientrsquos body captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source The procedure consists of injecting an X-ray contrast medium into the patientrsquos

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed

Digital Image Processing

Week 1

a b c d Fig 5 ndash Angiography ndash subtraction example (a) ndash mask image (b) ndash live image (c) ndash difference between (a) and (b) (d) - image (c) enhanced

Digital Image Processing

Week 1

An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form ( ) ( ) ( )g x y f x y h x y

f(x y) ndash the ldquoperfect imagerdquo h(x y) ndash the shading function

When the shading function is known

( )( )( )

g x yf x yh x y

h(x y) is unknown but we have access to the imaging system we can obtain an

approximation to the shading function by imaging a target of constant intensity When the

sensor is not available often the shading pattern can be estimated from the image

Digital Image Processing

Week 1

(a) (b) (c)

Fig 6 Shading correction (a) ndash Shaded image of a tungsten filament magnified 130 times (b) - shading pattern (c) corrected image

Digital Image Processing

Week 1

Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1rsquos (white) in the ROI and 0rsquos elsewhere There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

(a) (b) (c)

Fig 7 (a) ndash digital dental X-ray image (b) - ROI mask for teeth with fillings (c) product of (a) and (b)

Digital Image Processing

Week 1

In practice most images are displayed using 8 bits the image values are in the range [0255] TIFF JPEG images ndash conversion to this range is automatic The conversion depends on the system used Difference of two images can produce image with values in the range [-255255] Addition of two images ndash range [0510] Many software packages simply set the negative values to 0 and set to 255 all values greater than 255 A more appropriate procedure compute

min( )mf f f

0 ( 255)max( )

ms

m

ff K K K

f

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Digital Image Processing

Week 1

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

( )

1( ) ( )xyr c S

g x y f r cm n

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation ( ) [( )]x y T v w

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image

Digital Image Processing

Week 1

[( )] ( )2 2v wT v w shrinks the original image half its size in both spatial directions

Affine transform

11 1211 21 31

21 2212 22 33

31 32

0[ 1] [ 1] [ 1] 0

1

t tx t v t w t

x y v w T v w t ty t v t w t

t t

(AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

Digital Image Processing

Week 1

inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image (v w) 1( ) ( )v w T x y

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with γ < 1 map a narrow range of dark input values into a wider range
of output values, with the opposite being true for higher input values. The curves with
γ > 1 have the opposite effect of those generated with γ < 1.

c = γ = 1 – the identity transformation.

A variety of devices used for image capture, printing, and display respond according to a
power law. The process used to correct these power-law response phenomena is called
gamma correction.


a b c d  (a) – aerial image; (b)–(d) – results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively
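A corresponding sketch (Python/NumPy; the input is assumed 8-bit and is normalized to [0, 1] before applying the power law):

    import numpy as np

    def gamma_transform(img, gamma, c=1.0):
        r = img.astype(np.float64) / 255.0          # normalize to [0, 1]
        s = c * np.power(r, gamma)                  # s = c r^gamma
        return np.clip(255.0 * s, 0, 255).astype(np.uint8)

    # gamma < 1 brightens dark regions, gamma > 1 darkens the image;
    # e.g. gamma_transform(img, 1/2.2) pre-corrects for a display with gamma 2.2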


Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so that it spans the full
intensity range of the recording medium or display device.

a b c d  Fig. 5


    T(r) = (s₁/r₁) · r,                                  r ∈ [0, r₁]
           s₁ + ((s₂ − s₁)/(r₂ − r₁)) · (r − r₁),        r ∈ (r₁, r₂]
           s₂ + ((L − 1 − s₂)/(L − 1 − r₂)) · (r − r₂),  r ∈ (r₂, L − 1]


r₁ = s₁ and r₂ = s₂ : identity transformation (no change)

r₁ = r₂, s₁ = 0, s₂ = L − 1 : thresholding function

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c) – contrast stretching, obtained by setting the parameters (r₁, s₁) = (r_min, 0)
and (r₂, s₂) = (r_max, L − 1), where r_min and r_max denote the minimum and maximum gray
levels in the image, respectively. Thus, the transformation function stretched the levels
linearly from their original range to the full range [0, L−1].

Figure 5(d) – the thresholding function was used, with (r₁, s₁) = (m, 0) and
(r₂, s₂) = (m, L − 1), where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope
image of pollen, magnified approximately 700 times.
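The min–max stretch of Fig. 5(c), as a sketch (Python/NumPy; assumes the image is not constant):

    import numpy as np

    def contrast_stretch(img, L=256):
        r = img.astype(np.float64)
        # linear stretch from [r_min, r_max] to the full range [0, L-1]
        s = (r - r.min()) * (L - 1) / (r.max() - r.min())
        return s.astype(np.uint8)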


Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing:

1. display in one value (white, for example) all the values in the range of interest, and in
another (say, black) all other intensities (Figure 3.11(a));

2. brighten (or darken) the desired range of intensities but leave unchanged all other
intensities in the image (Figure 3.11(b)).
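Both variants in a short sketch (Python/NumPy; the band [A, B] and the output levels are illustrative parameters):

    import numpy as np

    def slice_binary(img, A, B):
        # variant 1: white inside [A, B], black elsewhere
        return np.where((img >= A) & (img <= B), 255, 0).astype(np.uint8)

    def slice_preserve(img, A, B, level=255):
        # variant 2: set [A, B] to a chosen level, keep the other intensities
        out = img.copy()
        out[(img >= A) & (img <= B)] = level
        return out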


Figure 6 (left) – aortic angiogram near the kidney. The purpose of intensity slicing is to
highlight the major blood vessels, which appear brighter as a result of injecting a contrast
medium. Figure 6 (middle) shows the result of applying the first technique for a band near
the top of the scale of intensities. This type of enhancement produces a binary image,
which is useful for studying the shape of the flow of the contrast substance (to detect
blockages, for example).

The two mapping functions used: one highlights the intensity range [A, B] and reduces all
other intensities to a lower level; the other highlights the range [A, B] and preserves all
other intensities.

In Figure 6 (right), the second technique was used: a band of intensities in the mid-gray
range around the mean intensity was set to black; the other intensities remain unchanged.

Fig. 6 – Aortic angiogram and intensity-sliced versions


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2.

This technique highlights the contribution made to the whole image appearance by each
of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes
(plane 1 – the lowest-order bit; plane 8 – the highest-order bit).


The binary image for the 8th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all intensities between 0 and 127 to 0 and all levels between 128 and 255 to 1. Bit-plane slicing is useful for analyzing the relative importance of each bit in the image – it helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
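A sketch extracting bit planes (Python/NumPy):

    import numpy as np

    def bit_plane(img, k):
        """Bit plane k of an 8-bit image (k = 1: lowest order, ...,
        k = 8: highest order), returned as a binary 0/1 array."""
        return (img >> (k - 1)) & 1

    # plane 8 coincides with thresholding at 128: ((img >> 7) & 1) == (img >= 128)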



Continuous image projected onto a sensor array; result of image sampling and quantization


Representing Digital Images

f(x, y), x = 0, 1, …, M−1, y = 0, 1, …, N−1 – spatial variables or spatial coordinates.

An M × N digital image can be written as the array

    f(x, y) = [ f(0, 0)      f(0, 1)      …  f(0, N−1)
                f(1, 0)      f(1, 1)      …  f(1, N−1)
                …
                f(M−1, 0)    f(M−1, 1)    …  f(M−1, N−1) ]

Each element of this array is an image element (pixel). In matrix notation,

    A = [a_{i,j}],  where a_{i,j} = f(x = i, y = j) = f(i, j)

f(0, 0) – the upper left corner of the image.


M, N – positive integers; the number of intensity levels is an integer power of 2,
L = 2^k, and a_{i,j} ∈ [0, L−1].

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum
detectable intensity level in the system. Upper limit – determined by saturation; lower
limit – noise.


Number of bits required to store a digitized image:

    b = M × N × k    (for M = N: b = N^2 · k)

For example, a 1024 × 1024 image with k = 8 bits per pixel requires
b = 1024 · 1024 · 8 = 8,388,608 bits (1 megabyte).

When an image can have 2^k intensity levels, the image is referred to as a k-bit image;
an image with 256 discrete intensity values is an 8-bit image.


Spatial and Intensity Resolution

Spatial resolution – the smallest discernible detail in an image.

Measures: line pairs per unit distance; dots (pixels) per unit distance.

Image resolution = the largest number of discernible line pairs per unit distance
(e.g., 100 line pairs per mm).

Dots per unit distance is the measure commonly used in printing and publishing.
In the US, it is expressed in dots per inch (dpi)
(newspapers are printed at 75 dpi, glossy brochures at 175 dpi).

Intensity resolution – the smallest discernible change in intensity level.

The number of intensity levels (L) is determined by hardware considerations:
L = 2^k, most commonly with k = 8.

In practice, intensity resolution is given by k (the number of bits used to quantize intensity).


Fig. 1 – Reducing spatial resolution: 1250 dpi (upper left), 300 dpi (upper right),
150 dpi (lower left), 72 dpi (lower right)


Reducing the number of gray levels: 256, 128, 64, 32


Reducing the number of gray levels: 16, 8, 4, 2


Image Interpolation – used in zooming, shrinking, rotating, and geometric corrections.

Shrinking and zooming (image resizing) are image resampling methods.

Interpolation is the process of using known data to estimate values at unknown locations.

Suppose we have an image of size 500 × 500 pixels that has to be enlarged 1.5 times, to
750 × 750 pixels. One way to do this is to create an imaginary 750 × 750 grid with the
same spacing as the original and then shrink it so that it fits exactly over the original
image. The pixel spacing in the 750 × 750 grid will be less than in the original image.

Problem: assignment of intensity levels in the new 750 × 750 grid.

Nearest neighbor interpolation: assign to every point in the new grid (750 × 750) the
intensity of the closest pixel (nearest neighbor) from the old/original grid (500 × 500).

This technique has the tendency to produce undesirable effects, like severe distortion of
straight edges.


Bilinear interpolation – assign to the new (x, y) location the intensity

    v(x, y) = a·x + b·y + c·x·y + d

where the four coefficients are determined from the 4 equations in 4 unknowns that can
be written using the 4 nearest neighbors of point (x, y).

Bilinear interpolation gives much better results than nearest neighbor interpolation, with a
modest increase in computational effort.
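A sketch of zooming by bilinear interpolation, implemented by mapping each output pixel back into the input grid (Python/NumPy; clamping at the borders is an implementation choice):

    import numpy as np

    def zoom_bilinear(img, out_h, out_w):
        in_h, in_w = img.shape
        ys = np.arange(out_h) * (in_h - 1) / (out_h - 1)   # output rows -> input
        xs = np.arange(out_w) * (in_w - 1) / (out_w - 1)   # output cols -> input
        y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
        y1, x1 = np.minimum(y0 + 1, in_h - 1), np.minimum(x0 + 1, in_w - 1)
        dy, dx = (ys - y0)[:, None], (xs - x0)[None, :]
        f = img.astype(np.float64)
        top = f[y0][:, x0] * (1 - dx) + f[y0][:, x1] * dx  # blend along x
        bot = f[y1][:, x0] * (1 - dx) + f[y1][:, x1] * dx
        return ((1 - dy) * top + dy * bot).astype(img.dtype)

    # usage: enlarged = zoom_bilinear(img, 750, 750) for a 500 x 500 input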

Bicubic interpolation – assign to the new (x, y) location an intensity that involves the 16
nearest neighbors of the point:

    v(x, y) = Σ_{i=0}^{3} Σ_{j=0}^{3} c_{ij} · x^i · y^j

The 16 coefficients c_{ij} are obtained by solving a 16×16 linear system written using the
intensity levels of the 16 nearest neighbors of (x, y).


Generally, bicubic interpolation does a better job of preserving fine detail than the
bilinear technique. Bicubic interpolation is the standard used in commercial image editing
programs, such as Adobe Photoshop and Corel Photo-Paint.

Figure 2(a) is the same as Fig. 1(d), which was obtained by reducing the resolution of
the 1250 dpi image in Fig. 1(a) to 72 dpi (the size shrank from 3692 × 2812 to 213 × 162)
and then zooming the reduced image back to its original size. To generate Fig. 1(d),
nearest neighbor interpolation was used (both for shrinking and zooming).

Figures 2(b) and (c) were generated using the same steps, but using bilinear and bicubic
interpolation, respectively. Figures 2(d)–(f) were obtained by reducing the resolution
from 1250 dpi to 150 dpi (instead of 72 dpi).


Fig. 2 – Interpolation examples for zooming and shrinking (nearest neighbor, bilinear, bicubic)


Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors:

    (x+1, y), (x−1, y), (x, y+1), (x, y−1)

This set of pixels, called the 4-neighbors of p, is denoted by N4(p).

The 4 diagonal neighbors of p have coordinates

    (x+1, y+1), (x+1, y−1), (x−1, y+1), (x−1, y−1)

and are denoted ND(p).

The horizontal, vertical, and diagonal neighbors together are called the 8-neighbors of p,
denoted N8(p).

If (x, y) is on the border of the image, some of the neighbor locations in ND(p) and N8(p)
fall outside the image.


Adjacency, Connectivity, Regions, Boundaries

Denote by V the set of intensity levels used to define adjacency:

- in a binary image, V ⊆ {0, 1} (V = {1} or V = {0});
- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, …, 255}.

We consider 3 types of adjacency:

(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if q ∈ N4(p);

(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if q ∈ N8(p);

(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are
m-adjacent if

    q ∈ N4(p), or
    q ∈ ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.

Mixed adjacency is a modification of 8-adjacency. It is introduced to eliminate the
ambiguities that often arise when 8-adjacency is used. Consider the following binary
image, with V = {1} (shown three times: pixel arrangement, 8-adjacency, m-adjacency):

    0 1 1        0 1 1        0 1 1
    0 1 0        0 1 0        0 1 0
    0 0 1        0 0 1        0 0 1

The three pixels at the top (first line) in the above example show multiple (ambiguous)
8-adjacency, as indicated by the dashed lines. This ambiguity is removed by using
m-adjacency.


A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with
coordinates (s, t) is a sequence of distinct pixels with coordinates

    (x0, y0), (x1, y1), …, (xn, yn)

where (x0, y0) = (x, y), (xn, yn) = (s, t), and the pixels (xi, yi) and (xi−1, yi−1) are
adjacent for i = 1, 2, …, n.

The length of the path is n. If (x0, y0) = (xn, yn), the path is closed.

Depending on the type of adjacency considered, the paths are 4-, 8-, or m-paths.

Let S denote a subset of pixels in an image. Two pixels p and q are said to be connected
in S if there exists a path between them consisting only of pixels from S.

S is a connected set if there is a path in S between any 2 pixels in S.

Let R be a subset of pixels in an image. R is a region of the image if R is a connected set.

Two regions R1 and R2 are said to be adjacent if R1 ∪ R2 forms a connected set. Regions
that are not adjacent are said to be disjoint. When referring to regions, only 4- and
8-adjacency are considered.


Suppose that an image contains K disjoint regions Rk, k = 1, …, K, none of which
touches the image border. Let

    Ru = ∪_{k=1}^{K} Rk

and let (Ru)^c denote its complement. We call all the points in Ru the foreground of the
image, and the points in (Ru)^c the background of the image.

The boundary (border or contour) of a region R is the set of points that are adjacent to
points in the complement of R, (R)^c. The border of an image is the set of pixels in the
region that have at least one background neighbor. This definition is referred to as the
inner border, to distinguish it from the notion of outer border, which is the corresponding
border in the background.


Distance measures

For pixels p, q, and z, with coordinates (x, y), (s, t), and (v, w), respectively, D is a
distance function, or metric, if

(a) D(p, q) ≥ 0, and D(p, q) = 0 iff p = q;
(b) D(p, q) = D(q, p);
(c) D(p, z) ≤ D(p, q) + D(q, z).

The Euclidean distance between p and q is defined as

    De(p, q) = [(x − s)² + (y − t)²]^{1/2}

The pixels q for which De(p, q) ≤ r are the points contained in a disk of radius r
centered at (x, y).


The D4 distance (also called city-block distance) between p and q is defined as

    D4(p, q) = |x − s| + |y − t|

The pixels q for which D4(p, q) ≤ r form a diamond centered at (x, y). For example, the
pixels with D4 ≤ 2:

        2
      2 1 2
    2 1 0 1 2
      2 1 2
        2

The pixels with D4 = 1 are the 4-neighbors of (x, y).

The D8 distance (also called chessboard distance) between p and q is defined as

    D8(p, q) = max(|x − s|, |y − t|)

The pixels q for which D8(p, q) ≤ r form a square centered at (x, y). For example, the
pixels with D8 ≤ 2:

    2 2 2 2 2
    2 1 1 1 2
    2 1 0 1 2
    2 1 1 1 2
    2 2 2 2 2

The pixels with D8 = 1 are the 8-neighbors of (x, y).

The D4 and D8 distances are independent of any paths that might exist between p and q,
because these distances involve only the coordinates of the points.
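The three metrics in code (Python sketch; p and q are (x, y) tuples):

    def d_e(p, q):                     # Euclidean distance
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    def d4(p, q):                      # city-block distance
        return abs(p[0] - q[0]) + abs(p[1] - q[1])

    def d8(p, q):                      # chessboard distance
        return max(abs(p[0] - q[0]), abs(p[1] - q[1]))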


Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis.
Consider the 2 × 2 images

    A = [ a11  a12     and     B = [ b11  b12
          a21  a22 ]                 b21  b22 ]

Array product:

    [ a11·b11   a12·b12
      a21·b21   a22·b22 ]

Matrix product:

    [ a11·b11 + a12·b21   a11·b12 + a12·b22
      a21·b11 + a22·b21   a21·b12 + a22·b22 ]

We assume array operations unless stated otherwise.
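In NumPy terms (illustrative values):

    import numpy as np

    A = np.array([[1, 2], [3, 4]])
    B = np.array([[5, 6], [7, 8]])

    print(A * B)    # array (elementwise) product
    print(A @ B)    # matrix product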


Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether the
method is linear or nonlinear. Consider an operator H that produces an output image g
from an input image f:

    H[f(x, y)] = g(x, y)

H is said to be a linear operator if

    H[a·f1(x, y) + b·f2(x, y)] = a·H[f1(x, y)] + b·H[f2(x, y)]

for any scalars a, b and any images f1, f2.

Example of a nonlinear operator:

    H[f] = max{ f(x, y) }   (the maximum value of the pixels of image f)

Take

    f1 = [ 0  2      f2 = [ 6  5      a = 1,  b = −1
           2  3 ]           4  7 ]

Then

    max{ a·f1 + b·f2 } = max{ [ −6  −3
                                −2  −4 ] } = −2

while

    a·max{f1} + b·max{f2} = (1)·3 + (−1)·7 = −4 ≠ −2

so the max operator is nonlinear.

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise:

    g(x, y) = f(x, y) + η(x, y)

f(x, y) – the noiseless image; η(x, y) – the noise, uncorrelated and with 0 average value.

For a random variable z with mean m, E[(z − m)²] is the variance (E(∙) is the expected
value). The covariance of two random variables z1 and z2 is defined as E[(z1 − m1)(z2 − m2)].
The two random variables are uncorrelated when their covariance is 0.


Objective: reduce noise by averaging a set of noisy images gi(x, y) (a technique frequently
used in image enhancement):

    ḡ(x, y) = (1/K) Σ_{i=1}^{K} gi(x, y)

If the noise satisfies the properties stated above, we have

    E{ḡ(x, y)} = f(x, y)    and    σ²_ḡ(x, y) = (1/K) · σ²_η(x, y)

E{ḡ(x, y)} is the expected value of ḡ, and σ²_ḡ(x, y) and σ²_η(x, y) are the variances of
ḡ and η, respectively. The standard deviation (square root of the variance) at any point in
the average image is

    σ_ḡ(x, y) = (1/√K) · σ_η(x, y)


As K increases, the variability (as measured by the variance or the standard deviation) of
the pixel values at each location (x, y) decreases. Because E{ḡ(x, y)} = f(x, y), this
means that ḡ(x, y) approaches f(x, y) as the number of noisy images used in the
averaging process increases.

An important application of image averaging is in the field of astronomy, where imaging
under very low light levels frequently causes sensor noise to render single images
virtually useless for analysis. Figure 3(a) shows an 8-bit image in which corruption
was simulated by adding to it Gaussian noise with zero mean and a standard deviation of
64 intensity levels. Figures 3(b)–(f) show the result of averaging 5, 10, 20, 50, and 100
images, respectively.
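A quick simulation of this effect (Python/NumPy sketch; the constant "clean" image and the noise level are invented for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    f = np.full((64, 64), 100.0)                   # noiseless image (constant)

    for K in (1, 5, 10, 50, 100):
        noisy = [f + rng.normal(0, 64, f.shape) for _ in range(K)]
        g_bar = np.mean(noisy, axis=0)             # averaged image
        print(K, g_bar.std())                      # shrinks roughly as 64 / sqrt(K)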


Fig. 3 – Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (panel (a), top left);
(b)–(f): results of averaging 5, 10, 20, 50, and 100 noisy images


A frequent application of image subtraction is in the enhancement of differences between
images.

Fig. 4 – (a) Infrared image of the Washington, D.C. area; (b) image obtained from (a) by
setting to zero the least significant bit of each pixel; (c) the difference between the two
images.

Figure 4(b) was obtained by setting to zero the least significant bit of every pixel in
Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between
images (a) and (b). Black (0) values in Figure 4(c) indicate locations where there is no
difference between images (a) and (b).


Mask mode radiography:

    g(x, y) = f(x, y) − h(x, y)

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an
intensified TV camera (instead of traditional X-ray film) located opposite an X-ray
source. The procedure consists of injecting an X-ray contrast medium into the patient's
bloodstream, taking a series of images, called live images (denoted f(x, y)), of the same
anatomical region as h(x, y), and subtracting the mask from the series of incoming live
images after injection of the contrast medium.

In g(x, y), the differences between h and f appear as enhanced detail.

Since the images are captured at TV rates, we obtain a movie showing how the contrast
medium propagates through the various arteries in the area being observed.


a b c d  Fig. 5 – Angiography – subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c) enhanced


An important application of image multiplication (and division) is shading correction.
Suppose that an imaging sensor produces images in the form

    g(x, y) = f(x, y) · h(x, y)

f(x, y) – the "perfect image"; h(x, y) – the shading function.

When the shading function is known:

    f(x, y) = g(x, y) / h(x, y)

If h(x, y) is unknown but we have access to the imaging system, we can obtain an
approximation to the shading function by imaging a target of constant intensity. When the
sensor is not available, the shading pattern can often be estimated from the image itself.
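A sketch of the correction (Python/NumPy; the epsilon guarding against division by zero is an implementation detail):

    import numpy as np

    def shading_correction(g, h, eps=1e-6):
        """Estimate the 'perfect' image f from g = f * h, given the shading
        pattern h (e.g., the image of a target of constant intensity)."""
        f = g.astype(np.float64) / (h.astype(np.float64) + eps)
        return (255.0 * f / f.max()).astype(np.uint8)   # rescale for display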


Fig. 6 – Shading correction: (a) shaded image of a tungsten filament, magnified 130 times; (b) shading pattern; (c) corrected image


Another use of image multiplication is in masking, also called region of interest (ROI)
operations. The process consists of multiplying a given image by a mask image that has
1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask
image, and the shape of the ROI can be arbitrary, but usually it is rectangular.

(a) (b) (c)

Fig. 7 – (a) digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b)
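Masking is a single elementwise product (Python/NumPy; the mask is assumed to be a 0/1 image of the same shape):

    def apply_roi(img, mask):
        # keep pixels where mask == 1, set everything else to 0
        return img * (mask > 0)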


In practice, most images are displayed using 8 bits, so image values are expected in the
range [0, 255] (for TIFF and JPEG images, conversion to this range is automatic; the
conversion depends on the system used). The difference of two images can produce an image
with values in the range [−255, 255], and the addition of two images can produce values in
the range [0, 510]. Many software packages simply set the negative values to 0 and set to
255 all values greater than 255. A more appropriate procedure: compute

    f_m = f − min(f)

and then

    f_s = K · f_m / max(f_m)

which yields an image f_s with values in the full range [0, K] (K = 255 for 8-bit images).
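As a sketch (Python/NumPy; img1 and img2 stand for two 8-bit images assumed already loaded):

    import numpy as np

    def rescale(f, K=255):
        fm = f.astype(np.float64) - f.min()   # f_m = f - min(f)
        fs = K * fm / fm.max()                # f_s = K f_m / max(f_m)
        return fs.astype(np.uint8)

    diff = rescale(img1.astype(np.int16) - img2.astype(np.int16))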


Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations:

- single-pixel operations
- neighborhood operations
- geometric spatial transformations

Single-pixel operations

- change the intensity values of individual pixels:

    s = T(z)

where z is the intensity of a pixel in the original image and s is the intensity of the
corresponding pixel in the processed image.


Neighborhood operations

Let S_xy denote the set of coordinates of a neighborhood centered on an arbitrary point
(x, y) in an image f. Neighborhood processing generates a new intensity level at point
(x, y), based on the values of the intensities of the points in S_xy. For example, if S_xy
is a rectangular neighborhood of size m × n centered at (x, y), we can assign the new value
of intensity by computing the average value of the pixels in S_xy:

    g(x, y) = (1/(m·n)) · Σ_{(r,c) ∈ S_xy} f(r, c)

The net effect is to perform local blurring in the original image. This type of process is
used, for example, to eliminate small details and thus render "blobs" corresponding to the
largest regions of an image.
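A direct (unoptimized) sketch of this neighborhood average (Python/NumPy; clipping the window at the image borders is an implementation choice):

    import numpy as np

    def mean_filter(f, m=3, n=3):
        M, N = f.shape
        g = np.empty((M, N))
        for x in range(M):
            for y in range(N):
                r0, r1 = max(0, x - m // 2), min(M, x + m // 2 + 1)
                c0, c1 = max(0, y - n // 2), min(N, y + n // 2 + 1)
                g[x, y] = f[r0:r1, c0:c1].mean()   # average over S_xy
        return g.astype(f.dtype)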


Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image;
- these transformations are often called rubber-sheet transformations (analogous to
printing an image on a sheet of rubber and then stretching the sheet according to a
predefined set of rules).

A geometric transformation consists of 2 basic operations:

1. a spatial transformation of coordinates;
2. intensity interpolation, which assigns intensity values to the spatially transformed
pixels.

The coordinate system transformation:

    (x, y) = T[(v, w)]

(v, w) – pixel coordinates in the original image;
(x, y) – pixel coordinates in the transformed image.


For example, T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both
spatial directions.

Affine transform:

    [x y 1] = [v w 1] · T = [v w 1] · [ t11  t12  0
                                        t21  t22  0
                                        t31  t32  1 ]

that is,

    x = t11·v + t21·w + t31
    y = t12·v + t22·w + t32        (AT)

This transform can scale, rotate, translate, or shear a set of coordinate points, depending
on the elements of the matrix T. If we want to resize an image, rotate it, and move the
result to some location, we simply form a 3×3 matrix equal to the matrix product of the
scaling, rotation, and translation matrices from Table 1.


Table 1 – Affine transformations
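A sketch of composing such matrices under the row-vector convention of (AT) (Python/NumPy; the particular entries follow the standard scaling/rotation/translation forms, and the parameter values are invented):

    import numpy as np

    def scale(cx, cy):
        return np.array([[cx, 0, 0], [0, cy, 0], [0, 0, 1.0]])

    def rotate(theta):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1.0]])

    def translate(tx, ty):
        return np.array([[1.0, 0, 0], [0, 1.0, 0], [tx, ty, 1.0]])

    T = scale(1.5, 1.5) @ rotate(np.pi / 4) @ translate(100, 30)
    x, y, _ = np.array([10.0, 20.0, 1.0]) @ T   # map the point (v, w) = (10, 20)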


The preceding transformations relocate pixels on an image to new locations. To complete
the process, we have to assign intensity values to those locations. This task is done
using intensity interpolation (nearest neighbor, bilinear, or bicubic interpolation).

In practice, we can use equation (AT) in two basic ways.

Forward mapping: scan the pixels of the input image (v, w) and compute the new spatial
location (x, y) of the corresponding pixel in the new image using (AT) directly.

Problems:

- intensity assignment when 2 or more pixels in the original image are transformed to
the same location in the output image;
- some output locations have no correspondent in the original image (no intensity
assignment).


Inverse mapping: scan the output pixel locations and, at each location (x, y), compute the
corresponding location in the input image:

    (v, w) = T⁻¹[(x, y)]

Then interpolate among the nearest input pixels to determine the intensity of the output
pixel value.

Inverse mappings are more efficient to implement than forward mappings and are used in
numerous commercial implementations of spatial transformations (e.g., MATLAB).
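A sketch of inverse mapping with nearest neighbor interpolation (Python/NumPy; output locations that map outside the input are left black, an implementation choice):

    import numpy as np

    def warp_inverse(f, T):
        """Apply the affine transform T (row-vector convention, as in (AT))
        by scanning the output and sampling the input at T^(-1)[(x, y)]."""
        Tinv = np.linalg.inv(T)
        M, N = f.shape
        g = np.zeros_like(f)
        for x in range(M):
            for y in range(N):
                v, w, _ = np.array([x, y, 1.0]) @ Tinv
                vi, wi = int(round(v)), int(round(w))   # nearest input pixel
                if 0 <= vi < M and 0 <= wi < N:
                    g[x, y] = f[vi, wi]
        return g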


Image registration – align two or more images of the same scene.

In image registration, we have available the input and output images, but the specific
transformation that produced the output image from the input is generally unknown.
The problem is to estimate the transformation function and then use it to register the
two images. For example:

- it may be of interest to align (register) two or more images taken at approximately the
same time, but using different imaging systems (e.g., an MRI scanner and a PET scanner);
- or to align images of a given location taken by the same instrument at different moments
in time (satellite images).

The problem can be solved using tie points (also called control points), which are
corresponding points whose locations are known precisely in the input and reference
images.


How to select tie points:

- by interactively selecting them;
- by using algorithms that try to detect these points automatically;
- some imaging systems have physical artifacts (small metallic objects) embedded in
the imaging sensors. These objects produce a set of known points (called reseau
marks) directly on all images captured by the system, which can be used as guides
for establishing tie points.

The problem of estimating the transformation is one of modeling. Suppose we have a set
of 4 tie points, both on the input image and on the reference image. A simple model, based
on a bilinear approximation, is given by

    x = c1·v + c2·w + c3·v·w + c4
    y = c5·v + c6·w + c7·v·w + c8

where (v, w) and (x, y) are the coordinates of the tie points (we get an 8×8 linear system
for the ci).
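Estimating the eight coefficients from 4 tie-point pairs (Python/NumPy sketch; the tie-point coordinates are invented for illustration):

    import numpy as np

    # 4 tie points in the input (v, w) and reference (x, y) images
    vw = np.array([[10, 10], [10, 200], [200, 10], [200, 200]], dtype=float)
    xy = np.array([[12, 15], [14, 210], [205, 12], [210, 208]], dtype=float)

    v, w = vw[:, 0], vw[:, 1]
    A = np.column_stack([v, w, v * w, np.ones(4)])   # rows: [v, w, v*w, 1]
    cx = np.linalg.solve(A, xy[:, 0])                # c1, c2, c3, c4
    cy = np.linalg.solve(A, xy[:, 1])                # c5, c6, c7, c8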


When 4 tie points are insufficient to obtain satisfactory registration, an approach used
frequently is to select a larger number of tie points and, using this new set of tie points,
subdivide the image into rectangular regions marked by groups of 4 tie points. On each
subregion marked by 4 tie points, we apply the transformation model described above.

The number of tie points and the sophistication of the model required to solve the
registration problem depend on the severity of the geometric distortion.


a b c d  (a) – reference image; (b) – geometrically distorted image; (c) – registered image; (d) – difference between (a) and (c)


Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 71: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image Processing

Week 1

Representing Digital Images (xy) x = 01hellipM-1 y = 01hellipN-1 ndash spatial variables or spatial coordinates

(00) (01) (0 1)(10) (11) (1 1)

( )

( 10) ( 11) ( 1 1)

f f f Nf f f N

f x y

f M f M f M N

image element pixel

00 01 0 1

10 11 1 1

10 11 1 1

( ) ( )

N

i jN M N

i j

M M M N

a a aa f x i y j f i ja a a

Aa

a a a

f(00) ndash the upper left corner of the image

Digital Image Processing

Week 1

M N ge 0 L=2k

[0 1]i j i ja a L

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system Upper limit ndash determined by saturation lower limit - noise

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Number of bits required to store a digitized image

for 2 b M N k M N b N k

When an image can have 2k intensity levels the image is referred as a k-bit image 256 discrete intensity values ndash 8-bit image

Digital Image Processing

Week 1

Spatial and Intensity Resolution Spatial resolution ndash the smallest discernible detail in an image

Measures line pairs per unit distance dots (pixels) per unit distance

Image resolution = the largest number of discernible line pairs per unit distance

(eg 100 line pairs per mm)

Dots per unit distance are commonly used in printing and publishing

In US the measure is expressed in dots per inch (dpi)

(newspapers are printed with 75 dpi glossy brochures at 175 dpi)

Intensity resolution ndash the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations

L=2k ndash most common k = 8

Intensity resolution in practice is given by k (number of bits used to quantize intensity)

Digital Image Processing

Week 1

Fig1 Reducing spatial resolution 1250 dpi(upper left) 300 dpi (upper right)

150 dpi (lower left) 72 dpi (lower right)

Digital Image Processing

Week 1

Reducing the number of gray levels 256 128 64 32

Digital Image Processing

Week 1

Reducing the number of gray levels 16 8 4 2

Digital Image Processing

Week 1

Image Interpolation - used in zooming shrinking rotating and geometric corrections

Shrinking zooming ndash image resizing ndash image resampling methods

Interpolation is the process of using known data to estimate values at unknown locations

Suppose we have an image of size 500 times 500 pixels that has to be enlarged 15 times to

750times750 pixels One way to do this is to create an imaginary 750 times 750 grid with the

same spacing as the original and then shrink it so that it fits exactly over the original

image The pixel spacing in the 750 times 750 grid will be less than in the original image

Problem assignment of intensity-level in the new 750 times 750 grid

Nearest neighbor interpolation assign for every point in the new grid (750 times 750) the

intensity of the closest pixel (nearest neighbor) from the oldoriginal grid (500 times 500)

This technique has the tendency to produce undesirable effects like severe distortion of

straight edges

Digital Image Processing

Week 1

Bilinear interpolation ndash assign for the new (x y) location the following intensity ( )v x y a x b y c x y d

where the four coefficients are determined from the 4 equations in 4 unknowns that can

be written using the 4 nearest neighbors of point (x y)

Bilinear interpolation gives much better results than nearest neighbor interpolation with a

modest increase in computational effort

Bicubic interpolation ndash assign for the new (x y) location an intensity that involves the 16

nearest neighbors of the point 3 3

0 0

( ) i ji j

i jv x y c x y

The coefficients cij are obtained solving a 16x16 linear system

intensity levels of the 16 nearest neighbors of 3 3

0 0

( )i ji j

i jc x y x y

Digital Image Processing

Week 1

Generally bicubic interpolation does a better job of preserving fine detail than the

bilinear technique Bicubic interpolation is the standard used in commercial image editing

programs such as Adobe Photoshop and Corel Photopaint

Figure 2 (a) is the same as Fig 1 (d) which was obtained by reducing the resolution of

the 1250 dpi in Fig 1(a) to 72 dpi (the size shrank from 3692 times 2812 to 213 times 162) and

then zooming the reduced image back to its original size To generate Fig 1(d) nearest

neighbor interpolation was used (both for shrinking and zooming)

Figures 2(b) and (c) were generated using the same steps but using bilinear and bicubic

interpolation respectively Figures 2(d)+(e)+(f) were obtained by reducing the resolution

from 1250 dpi to 150 dpi (instead of 72 dpi)

Digital Image Processing

Week 1

Fig 2 ndash Interpolation examples for zooming and shrinking (nearest neighbor linear bicubic)

Digital Image Processing

Week 1

Neighbors of a Pixel

A pixel p at coordinates (x y) has 4 horizontal and vertical neighbors horizontal vertical( 1 ) ( 1 ) ( 1) ( 1)x y x y x y x y

This set of pixels called the 4-neighbors of p denoted by N4 (p)

The 4 diagonal neighbors of p have coordinates ( 1 1) ( 1 1) ( 1 1) ( 1 1)x y x y x y x y

and are denoted ND(p)

The horizontal vertical and diagonal neighbors are called the 8-neighbors of p denoted

N8 (p)

If (xy) is on the border of the image some of the neighbor locations in ND(p) and N8(p)

fall outside the image

Digital Image Processing

Week 1

Adjacency Connectivity Regions Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image V 01 (V=0 V=1)

- in a gray-scale image with 256 possible gray-levels V can be any subset of 0255

We consider 3 types of adjacency

(a) 4-adjacency two pixels p and q with values from V are 4-adjacent if 4( )q N p

(b) 8-adjacency two pixels p and q with values from V are 8-adjacent if 8( )q N p

(c) m-adjacency (mixed adjacency) two pixels p and q with values from V are

m-adjacent if

4( )q N p or

( )Dq N p and the set 4 4( ) ( )N p N q has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency It is introduced to eliminate the

ambiguities that often arise when 8-adjacency is used Consider the example

Digital Image Processing

Week 1

binary image

0 1 1 0 1 1 0 1 1

1 0 1 0 0 1 0 0 1 0

0 0 1 0 0 1 0 0 1

V

The three pixels at the top (first line) in the above example show multiple (ambiguous)

8-adjacency as indicated by the dashed lines This ambiguity is removed by using

m-adjacency

Digital Image Processing

Week 1

A (digital) path (or curve) from pixel p with coordinates (xy) to q with coordinates (st)

is a sequence of distinct pixels with coordinates

and are adjacent 0 0 1 1

1 1

( ) ( ) ( ) ( ) ( )( ) ( ) 12

n n

i i i i

x y x y x y x y s tx y x y i n

The length of the path is n If 0 0( ) ( )n nx y x y the path is closed

Depending on the type of adjacency considered the paths are 4- 8- or m-paths

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image R is a region of the image if R is a connected set

Two regions R1 and R2 are said to be adjacent if 1 2R R form a connected set Regions

that are not adjacent are said to be disjoint When referring to regions only 4- and

8-adjacency are considered

Digital Image Processing

Week 1

Suppose that an image contains K disjoint regions 1 kR k K none of which

touches the image border

the complement of 1

( )K

cu k u u

k

R R R R

We call all the points in Ru the foreground of the image and the points in ( )cuR the

background of the image

The boundary (border or contour) of a region R is the set of points that are adjacent to

points in the complement of R (R)c The border of an image is the set of pixels in the

region that have at least one background neighbor This definition is referred to as the

inner border to distinguish it from the notion of outer border which is the corresponding

border in the background

Digital Image Processing

Week 1

Distance measures

For pixels p q and z with coordinates (xy) (st) and (vw) respectively D is a distance

function or metric if

(a) D(p q) ge 0 D(p q) = 0 iff p=q

(b) D(p q) = D(q p)

(c) D(p z) le D(p q) + D(q z)

The Euclidean distance between p and q is defined as 1

2 2 2 22( ) ( ) ( ) ( ) ( )eD p q x s y t x s y t

The pixels q for which ( )eD p q r are the points contained in a disk of radius r

centered at (x y)

Digital Image Processing

Week 1

The D4 distance (also called city-block distance) between p and q is defined as

4( ) | | | |D p q x s y t

The pixels q for which 4( )D p q r form a diamond centered at (xy)

4

22 1 2

2 2 1 0 1 22 1 2

2

D

The pixels with D4 = 1 are the 4-neighbors of (x y)

The D8 distance (called the chessboard distance) between p and q is defined as

8( ) max| | | |D p q x s y t

The pixels q for which 8( )D p q r form a square centered at (x y)

Digital Image Processing

Week 1

8

2 2 2 2 22 1 1 1 2

2 2 1 0 1 22 1 1 1 22 2 2 2 2

D

The pixels with D8 = 1 are the 8-neighbors of (x y)

D4 and D8 distances are independent of any paths that might exist between p and q

because these distances involve only the coordinates of the point

Digital Image Processing

Week 1

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

11 12 11 12

21 22 21 22

a a b ba a b b

Array product

11 12 11 12 11 11 12 12

21 22 21 22 21 21 22 21

a a b b a b a ba a b b a b a b

Matrix product

11 12 11 12 11 11 12 21 11 12 12 21

21 22 21 22 21 11 22 21 21 12 22 22

a a b b a b a b a b a ba a b b a b a b a b a b

We assume array operations unless stated otherwise

Digital Image Processing

Week 1

Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether it is

linear or nonlinear

( ) ( )H f x y g x y

H is said to be a linear operator if

images1 2 1 2

1 2

( ) ( ) ( ) ( )

H a f x y b f x y a H f x y b H f x y

a b f f

Example of nonlinear operator

the maximum value of the pixels of image max ( )H f f x y f

1 2

0 2 6 5 1 1

2 3 4 7f f a b

Digital Image Processing

Week 1

1 2

0 2 6 5 6 3max max 1 ( 1) max 2

2 3 4 7 2 4a f b f

0 2 6 51 max ( 1) max 3 ( 1)7 4

2 3 4 7

Arithmetic Operations in Image Processing

Let g(xy) denote a corrupted image formed by the addition of noise ( ) ( ) ( )g x y f x y x y

f(xy) ndash the noiseless image η(xy) the noise uncorrelated and has 0 average value

For a random variable z with mean m E[(z-m)2] is the variance (E( ∙ ) is the expected

value) The covariance of two random variables z1 and z2 is defined as E[(z1-m1) (z2-m2)]

The two random variables are uncorrelated when their covariance is 0

Digital Image Processing

Week 1

Objective reduce noise by adding a set of noisy images ( )ig x y (technique frequently

used in image enhancement)

1

1( ) ( )K

ii

g x y g x yK

If the noise satisfies the properties stated above we have

2 2( ) ( )

1( ) ( ) g x y x yE g x y f x yK

( ( ))E g x y is the expected value of g and and 2 2( ) ( )g x y x y are the variances of

and g respectively The standard deviation (square root of the variance) at any point in

the average image is

( ) ( )1

g x y x yK

Digital Image Processing

Week 1

As K increases the variability (as measured by the variance or the standard deviation) of

the pixel values at each location (x y) decreases Because ( ) ( )E g x y f x y this

means that ( )g x y approaches f(x y) as the number of noisy images used in the

averaging process increases

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis Figure 226(a) shows an 8-bit image in which corruption

was simulated by adding to it Gaussian noise with zero mean and a standard deviation of

64 intensity levels Figures 226(b)-(f) show the result of averaging 5 10 20 50 and 100

images respectively

Digital Image Processing

Week 1

Fig 3 Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner) Results of averaging 5 10 20 50

100 noisy images

a b c d e f

Digital Image Processing

Week 1

A frequent application of image subtraction is in the enhancement of differences between

images

(a) (b) (c)

Fig 4 (a) Infrared image of Washington DC area (b) Image obtained from (a) by setting to zero the least

significant bit of each pixel (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a) The two images seem almost the same Figure 4(c) is the difference between

images (a) and (b) Black (0) values in Figure (c) indicate locations where there is no

difference between images (a) and (b)

Digital Image Processing

Week 1

Mask mode radiography ( ) ( ) ( )g x y f x y h x y

h(x y) the mask is an X-ray image of a region of a patientrsquos body captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source The procedure consists of injecting an X-ray contrast medium into the patientrsquos

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed

Digital Image Processing

Week 1

a b c d Fig 5 ndash Angiography ndash subtraction example (a) ndash mask image (b) ndash live image (c) ndash difference between (a) and (b) (d) - image (c) enhanced

Digital Image Processing

Week 1

An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form ( ) ( ) ( )g x y f x y h x y

f(x y) ndash the ldquoperfect imagerdquo h(x y) ndash the shading function

When the shading function is known

( )( )( )

g x yf x yh x y

h(x y) is unknown but we have access to the imaging system we can obtain an

approximation to the shading function by imaging a target of constant intensity When the

sensor is not available often the shading pattern can be estimated from the image

Digital Image Processing

Week 1

(a) (b) (c)

Fig 6 Shading correction (a) ndash Shaded image of a tungsten filament magnified 130 times (b) - shading pattern (c) corrected image

Digital Image Processing

Week 1

Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1rsquos (white) in the ROI and 0rsquos elsewhere There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

(a) (b) (c)

Fig 7 (a) ndash digital dental X-ray image (b) - ROI mask for teeth with fillings (c) product of (a) and (b)

Digital Image Processing

Week 1

In practice most images are displayed using 8 bits the image values are in the range [0255] TIFF JPEG images ndash conversion to this range is automatic The conversion depends on the system used Difference of two images can produce image with values in the range [-255255] Addition of two images ndash range [0510] Many software packages simply set the negative values to 0 and set to 255 all values greater than 255 A more appropriate procedure compute

min( )mf f f

0 ( 255)max( )

ms

m

ff K K K

f

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Digital Image Processing

Week 1

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

( )

1( ) ( )xyr c S

g x y f r cm n

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation ( ) [( )]x y T v w

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image

Digital Image Processing

Week 1

[( )] ( )2 2v wT v w shrinks the original image half its size in both spatial directions

Affine transform

11 1211 21 31

21 2212 22 33

31 32

0[ 1] [ 1] [ 1] 0

1

t tx t v t w t

x y v w T v w t ty t v t w t

t t

(AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

Digital Image Processing

Week 1

inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image (v w) 1( ) ( )v w T x y

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is

σ² = Σ_{k=0}^{L−1} (z_k − m)² p(z_k)

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

μ_n(z) = Σ_{k=0}^{L−1} (z_k − m)^n p(z_k)

(so μ_0(z) = 1, μ_1(z) = 0, μ_2(z) = σ²)

μ_3(z) > 0: the intensities are biased to values higher than the mean;
μ_3(z) < 0: the intensities are biased to values lower than the mean;


μ_3(z) ≈ 0: the intensities are distributed approximately equally on both sides of the mean.

Fig 1. (a) Low contrast; (b) medium contrast; (c) high contrast.
Figure 1(a) – standard deviation 14.3 (variance = 204.5)
Figure 1(b) – standard deviation 31.6 (variance = 998.6)
Figure 1(c) – standard deviation 49.2 (variance = 2420.6)
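A short sketch of computing these statistics from an image histogram, assuming an 8-bit NumPy image (the function name is illustrative):

```python
import numpy as np

def intensity_stats(img, L=256):
    """Histogram-based mean, variance and third central moment."""
    n_k = np.bincount(img.ravel(), minlength=L)  # counts of each level z_k
    p = n_k / img.size                           # p(z_k), sums to 1
    z = np.arange(L)
    m = np.sum(z * p)                            # mean intensity
    var = np.sum((z - m) ** 2 * p)               # variance (contrast)
    mu3 = np.sum((z - m) ** 3 * p)               # bias about the mean
    return m, var, mu3

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
m, var, mu3 = intensity_stats(img)
print(m, np.sqrt(var), mu3)
```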


Intensity Transformations and Spatial Filtering

g(x, y) = T[f(x, y)]

f(x, y) – input image; g(x, y) – output image; T – an operator on f, defined over a neighborhood of (x, y).
- the neighborhood of the point (x, y), S_xy, usually is rectangular, centered on (x, y), and much smaller in size than the image;


- spatial filtering: the operator T (the neighborhood together with the operation applied on it) is called a spatial filter (spatial mask, kernel, template, or window).

When S_xy has size 1×1 (a single pixel), T becomes an intensity (gray-level, or mapping) transformation function

s = T(r)

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2, left: T produces an output image of higher contrast than the original by darkening the intensity levels below k and brightening the levels above k – this technique is called contrast stretching.

Fig 2. Intensity transformation functions: left – contrast stretching; right – thresholding function.


Figure 2, right: T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L−1] is obtained using the function

s = T(r) = L − 1 − r

- the equivalent of a photographic negative;
- a technique suited for enhancing white or gray detail embedded in dark regions of an image.


Left – original image; right – negative image.
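A one-line illustration in NumPy, assuming an 8-bit image (L = 256):

```python
import numpy as np

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)  # stand-in image
negative = 255 - img   # s = L - 1 - r, applied to every pixel
```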


Log Transformations

s = T(r) = c log(1 + r), where c is a positive constant and r ≥ 0.

Some basic intensity transformation functions (plot).


This transformation maps a narrow range of low intensity values in the input into a wider range. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values; the opposite is true of the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a) – intensity values in the range 0 to 1.5 × 10^6
Figure 4(b) – log transformation of Figure 4(a) with c = 1 – range 0 to 6.2

a b. Fig 4 – (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.
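A sketch of displaying a spectrum-like array with the log transform, assuming a base-10 log (consistent with the 0-to-6.2 example) and a rescale to 8 bits for display:

```python
import numpy as np

spectrum = np.abs(np.fft.fft2(np.random.rand(64, 64)))  # stand-in for Fig 4(a)
s = np.log10(1 + spectrum)                # s = c * log(1 + r), with c = 1
display = np.uint8(255 * s / s.max())     # rescale [0, s_max] -> [0, 255]
```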


Power-Law (Gamma) Transformations

s = T(r) = c r^γ,  c, γ – positive constants
(sometimes written s = c (r + ε)^γ to account for an offset)

Plots of the gamma transformation for different values of γ (c = 1).


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. The curves with γ > 1 have the opposite effect of those generated with γ < 1.

c = γ = 1: the identity transformation.

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.


a b c d: (a) – aerial image; (b)–(d) – results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively.
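A sketch of gamma correction on a normalized image (intensities first scaled to [0, 1], as is usual before applying the power law; the γ values below are illustrative):

```python
import numpy as np

def gamma_transform(img, gamma, c=1.0):
    """s = c * r**gamma on an 8-bit image, computed in [0, 1]."""
    r = img.astype(np.float64) / 255.0
    s = c * r ** gamma
    return np.uint8(255 * np.clip(s, 0, 1))

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
darker = gamma_transform(img, 3.0)    # gamma > 1 darkens mid-tones
brighter = gamma_transform(img, 0.4)  # gamma < 1 brightens dark regions
```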


Piecewise-Linear Transformation Functions

Contrast stretching
- a process that expands the range of intensity levels in an image so that it spans the full intensity range of the recording medium or display device.

a b c d, Fig 5: (a) – form of the transformation function; (b) – low-contrast image; (c) – result of contrast stretching; (d) – result of thresholding.


T(r) =
  (s_1 / r_1) · r                                      for r in [0, r_1)
  s_1 + ((s_2 − s_1) / (r_2 − r_1)) · (r − r_1)        for r in [r_1, r_2]
  s_2 + ((L − 1 − s_2) / (L − 1 − r_2)) · (r − r_2)    for r in (r_2, L − 1]


r_1 = s_1, r_2 = s_2: identity transformation (no change);
r_1 = r_2, s_1 = 0, s_2 = L − 1: thresholding function.

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c) – contrast stretching, obtained by setting the parameters r_1 = r_min, s_1 = 0, r_2 = r_max, s_2 = L − 1, where r_min and r_max denote the minimum and maximum gray levels in the image, respectively. Thus, the transformation function stretched the levels linearly from their original range to the full range [0, L−1].

Figure 5(d) – the thresholding function was used, with r_1 = r_2 = m, s_1 = 0, s_2 = L − 1, where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.
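A sketch of the piecewise-linear stretch; with r_1 = r_min, s_1 = 0, r_2 = r_max, s_2 = 255 it reduces to the full-range linear stretch used for Figure 5(c) (NumPy 8-bit input assumed, function name illustrative):

```python
import numpy as np

def contrast_stretch(img, r1, s1, r2, s2, L=256):
    """Piecewise-linear T(r); assumes 0 < r1 <= r2 < L - 1
    (otherwise the unused branches divide by zero)."""
    r = img.astype(np.float64)
    out = np.where(
        r < r1, s1 / r1 * r,
        np.where(r <= r2,
                 s1 + (s2 - s1) / (r2 - r1) * (r - r1),
                 s2 + (L - 1 - s2) / (L - 1 - r2) * (r - r2)))
    return np.uint8(np.round(out))

img = np.random.randint(60, 180, (64, 64), dtype=np.uint8)  # low contrast
stretched = contrast_stretch(img, img.min(), 0, img.max(), 255)
```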


Intensity-level slicing
- highlighting a specific range of intensities in an image.

There are two approaches to intensity-level slicing:
1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities;
2. brighten (or darken) the desired range of intensities but leave unchanged all other intensities in the image.


Figure 6 (left) – aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the scale of intensities. This type of enhancement produces a binary image, which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example).

In Figure 6 (right), the second technique was used: a band of intensities in the mid-gray range around the mean intensity was set to black; the other intensities remain unchanged.

Technique 1: highlights intensity range [A, B] and reduces all other intensities to a lower level.
Technique 2: highlights range [A, B] and preserves all other intensities.

Fig 6 – Aortic angiogram and intensity-sliced versions.


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2. This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit, plane 8 – the highest-order bit).


The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all intensities between 0 and 127 to 0 and all levels between 128 and 255 to 1.

Bit-plane slicing is useful for analyzing the relative importance of each bit in the image, which helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
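A sketch of extracting the planes with bit operations; plane 8 below matches the 0–127 → 0, 128–255 → 1 threshold described above (NumPy assumed):

```python
import numpy as np

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)

# plane k (1 = lowest-order bit, 8 = highest-order bit) as a binary image
planes = [(img >> (k - 1)) & 1 for k in range(1, 9)]

plane8 = planes[7]  # identical to thresholding at 128
assert np.array_equal(plane8, (img >= 128).astype(np.uint8))
```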


M, N – positive integers; L = 2^k; the intensity values a_ij ∈ [0, L−1].

Dynamic range of an image = the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system. Upper limit – determined by saturation; lower limit – determined by noise.


Number of bits required to store a digitized image:

b = M × N × k;  for M = N:  b = N² k

When an image can have 2^k intensity levels, the image is referred to as a k-bit image (256 discrete intensity values – an 8-bit image).


Spatial and Intensity Resolution

Spatial resolution – the smallest discernible detail in an image.
Measures: line pairs per unit distance; dots (pixels) per unit distance.
Image resolution = the largest number of discernible line pairs per unit distance (e.g., 100 line pairs per mm).
Dots per unit distance is the measure commonly used in printing and publishing; in the US it is expressed in dots per inch (dpi). (Newspapers are printed at 75 dpi, glossy brochures at 175 dpi.)

Intensity resolution – the smallest discernible change in intensity level.
The number of intensity levels (L) is determined by hardware considerations: L = 2^k, most commonly with k = 8. In practice, intensity resolution is given by k (the number of bits used to quantize intensity).


Fig 1. Reducing spatial resolution: 1250 dpi (upper left), 300 dpi (upper right), 150 dpi (lower left), 72 dpi (lower right).


Reducing the number of gray levels: 256, 128, 64, 32.


Reducing the number of gray levels: 16, 8, 4, 2.


Image Interpolation – used in zooming, shrinking, rotating, and geometric corrections.
Shrinking, zooming – image resizing – image resampling methods.

Interpolation is the process of using known data to estimate values at unknown locations.

Suppose we have an image of size 500 × 500 pixels that has to be enlarged 1.5 times, to 750 × 750 pixels. One way to do this is to create an imaginary 750 × 750 grid with the same spacing as the original and then shrink it so that it fits exactly over the original image. The pixel spacing in the 750 × 750 grid will be less than in the original image.

Problem: assignment of intensity levels in the new 750 × 750 grid.

Nearest-neighbor interpolation: assign to every point in the new grid (750 × 750) the intensity of the closest pixel (nearest neighbor) from the original grid (500 × 500). This technique has the tendency to produce undesirable effects, like severe distortion of straight edges.


Bilinear interpolation – assign to the new (x, y) location the intensity

v(x, y) = a x + b y + c x y + d

where the four coefficients are determined from the 4 equations in 4 unknowns that can be written using the 4 nearest neighbors of point (x, y). Bilinear interpolation gives much better results than nearest-neighbor interpolation, with a modest increase in computational effort.

Bicubic interpolation – assign to the new (x, y) location an intensity that involves the 16 nearest neighbors of the point:

v(x, y) = Σ_{i=0}^{3} Σ_{j=0}^{3} c_ij x^i y^j

The 16 coefficients c_ij are obtained by solving a 16×16 linear system written using the intensity levels of the 16 nearest neighbors of (x, y).
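A sketch of bilinear interpolation at a single location, solving the 4×4 system for a, b, c, d from the four surrounding pixels (illustrative helper; NumPy assumed):

```python
import numpy as np

def bilinear_at(img, x, y):
    """v(x, y) = a*x + b*y + c*x*y + d from the 4 nearest neighbors;
    assumes 0 <= x < rows - 1 and 0 <= y < cols - 1."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = x0 + 1, y0 + 1
    # one equation v = a*x + b*y + c*x*y + d per neighbor
    A = np.array([[xi, yi, xi * yi, 1.0]
                  for xi in (x0, x1) for yi in (y0, y1)])
    v = np.array([img[xi, yi] for xi in (x0, x1) for yi in (y0, y1)], float)
    a, b, c, d = np.linalg.solve(A, v)
    return a * x + b * y + c * x * y + d

img = np.arange(16, dtype=float).reshape(4, 4)
print(bilinear_at(img, 1.5, 2.25))
```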


Generally, bicubic interpolation does a better job of preserving fine detail than the bilinear technique. Bicubic interpolation is the standard used in commercial image-editing programs, such as Adobe Photoshop and Corel Photo-Paint.

Figure 2(a) is the same as Fig 1(d), which was obtained by reducing the resolution of the 1250 dpi image in Fig 1(a) to 72 dpi (the size shrank from 3692 × 2812 to 213 × 162 pixels) and then zooming the reduced image back to its original size; nearest-neighbor interpolation was used both for shrinking and for zooming. Figures 2(b) and (c) were generated using the same steps but with bilinear and bicubic interpolation, respectively. Figures 2(d), (e), and (f) were obtained by reducing the resolution from 1250 dpi to 150 dpi (instead of 72 dpi).


Fig 2 – Interpolation examples for zooming and shrinking (nearest neighbor, bilinear, bicubic).


Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors:

(x + 1, y), (x − 1, y), (x, y + 1), (x, y − 1)

This set of pixels is called the 4-neighbors of p, denoted N_4(p).

The 4 diagonal neighbors of p have coordinates

(x + 1, y + 1), (x + 1, y − 1), (x − 1, y + 1), (x − 1, y − 1)

and are denoted N_D(p). The horizontal, vertical, and diagonal neighbors together are called the 8-neighbors of p, denoted N_8(p).

If (x, y) is on the border of the image, some of the neighbor locations in N_D(p) and N_8(p) fall outside the image.


Adjacency, Connectivity, Regions, Boundaries

Denote by V the set of intensity levels used to define adjacency:
- in a binary image, V ⊆ {0, 1} (V = {0} or V = {1});
- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, …, 255}.

We consider 3 types of adjacency:
(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if q ∈ N_4(p);
(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if q ∈ N_8(p);
(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if
    q ∈ N_4(p), or
    q ∈ N_D(p) and the set N_4(p) ∩ N_4(q) has no pixels whose values are from V.

Mixed adjacency is a modification of 8-adjacency. It is introduced to eliminate the ambiguities that often arise when 8-adjacency is used. Consider the example:


V = {1}; consider the binary image

0 1 1
0 1 0
0 0 1

(shown three times in the figure: the raw pixel values, the pixels linked by 8-adjacency, and the pixels linked by m-adjacency). The three pixels at the top (first line) show multiple (ambiguous) 8-adjacency paths, as indicated by the dashed lines. This ambiguity is removed by using m-adjacency.


A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

(x_0, y_0), (x_1, y_1), …, (x_n, y_n)

where (x_0, y_0) = (x, y), (x_n, y_n) = (s, t), and the pixels (x_i, y_i) and (x_{i−1}, y_{i−1}) are adjacent for i = 1, 2, …, n.

The length of the path is n. If (x_0, y_0) = (x_n, y_n), the path is closed. Depending on the type of adjacency considered, the paths are 4-, 8-, or m-paths.

Let S denote a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting only of pixels from S. S is a connected set if there is a path in S between any 2 pixels of S.

Let R be a subset of pixels in an image. R is a region of the image if R is a connected set. Two regions R_1 and R_2 are said to be adjacent if R_1 ∪ R_2 forms a connected set; regions that are not adjacent are said to be disjoint. When referring to regions, only 4- and 8-adjacency are considered.


Suppose that an image contains K disjoint regions R_k, k = 1, …, K, none of which touches the image border. Let R_u = ∪_{k=1}^{K} R_k, and let (R_u)^c denote its complement. We call all the points in R_u the foreground of the image, and the points in (R_u)^c the background of the image.

The boundary (border, or contour) of a region R is the set of points that are adjacent to points in the complement of R, (R)^c. In other words, the border of a region is the set of pixels in the region that have at least one background neighbor. This definition is referred to as the inner border, to distinguish it from the notion of outer border, which is the corresponding border in the background.


Distance Measures

For pixels p, q, and z, with coordinates (x, y), (s, t), and (v, w), respectively, D is a distance function (or metric) if:
(a) D(p, q) ≥ 0, and D(p, q) = 0 iff p = q;
(b) D(p, q) = D(q, p);
(c) D(p, z) ≤ D(p, q) + D(q, z).

The Euclidean distance between p and q is defined as

D_e(p, q) = [(x − s)² + (y − t)²]^{1/2}

The pixels q for which D_e(p, q) ≤ r are the points contained in a disk of radius r centered at (x, y).


The D_4 distance (also called city-block distance) between p and q is defined as

D_4(p, q) = |x − s| + |y − t|

The pixels q for which D_4(p, q) ≤ r form a diamond centered at (x, y); e.g., the pixels with D_4 ≤ 2:

        2
      2 1 2
    2 1 0 1 2
      2 1 2
        2

The pixels with D_4 = 1 are the 4-neighbors of (x, y).

The D_8 distance (also called chessboard distance) between p and q is defined as

D_8(p, q) = max(|x − s|, |y − t|)

The pixels q for which D_8(p, q) ≤ r form a square centered at (x, y).


E.g., the pixels with D_8 ≤ 2:

2 2 2 2 2
2 1 1 1 2
2 1 0 1 2
2 1 1 1 2
2 2 2 2 2

The pixels with D_8 = 1 are the 8-neighbors of (x, y).

The D_4 and D_8 distances are independent of any paths that might exist between p and q, because these distances involve only the coordinates of the points.


Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis. Consider the 2×2 images

a_11 a_12        b_11 b_12
a_21 a_22        b_21 b_22

Array product:

a_11·b_11  a_12·b_12
a_21·b_21  a_22·b_22

Matrix product:

a_11·b_11 + a_12·b_21   a_11·b_12 + a_12·b_22
a_21·b_11 + a_22·b_21   a_21·b_12 + a_22·b_22

We assume array operations unless stated otherwise.
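In NumPy the distinction is simply * (array, i.e., elementwise product) versus @ (matrix product); a quick illustration:

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])

print(a * b)   # array (elementwise) product: [[5, 12], [21, 32]]
print(a @ b)   # matrix product: [[19, 22], [43, 50]]
```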


Linear versus Nonlinear Operations

One of the most important classifications of an image-processing method is whether it is linear or nonlinear. Consider an operator H that produces an output image g from an input image f:

H[f(x, y)] = g(x, y)

H is said to be a linear operator if

H[a f_1(x, y) + b f_2(x, y)] = a H[f_1(x, y)] + b H[f_2(x, y)]

for any scalars a, b and images f_1, f_2.

Example of a nonlinear operator: the maximum value of the pixels of an image, H[f] = max{f(x, y)}. Take

f_1 = [0 2; 2 3],  f_2 = [6 5; 4 7],  a = 1,  b = −1


max{1·f_1 + (−1)·f_2} = max{ [−6 −3; −2 −4] } = −2

while

1·max{f_1} + (−1)·max{f_2} = 3 + (−1)·7 = −4 ≠ −2

so H is not linear.

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise to a noiseless image f(x, y):

g(x, y) = f(x, y) + η(x, y)

where the noise η(x, y) is uncorrelated and has zero average value. For a random variable z with mean m, E[(z − m)²] is the variance (E(·) is the expected value). The covariance of two random variables z_1 and z_2 is defined as E[(z_1 − m_1)(z_2 − m_2)]; the two random variables are uncorrelated when their covariance is 0.


Objective: reduce noise by averaging a set of K noisy images g_i(x, y) (a technique frequently used in image enhancement):

ḡ(x, y) = (1/K) Σ_{i=1}^{K} g_i(x, y)

If the noise satisfies the properties stated above, we have

E{ḡ(x, y)} = f(x, y)   and   σ²_ḡ(x, y) = (1/K) σ²_η(x, y)

where E{ḡ(x, y)} is the expected value of ḡ, and σ²_ḡ(x, y) and σ²_η(x, y) are the variances of ḡ and η, respectively. The standard deviation (square root of the variance) at any point in the average image is

σ_ḡ(x, y) = (1/√K) σ_η(x, y)


As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because E{ḡ(x, y)} = f(x, y), this means that ḡ(x, y) approaches f(x, y) as the number of noisy images used in the averaging process increases.

An important application of image averaging is in the field of astronomy, where imaging under very low light levels frequently causes sensor noise to render single images virtually useless for analysis. Figure 3(a) shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels. Figures 3(b)–(f) show the results of averaging 5, 10, 20, 50, and 100 noisy images, respectively.
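A small simulation of this effect, assuming a synthetic clean image and Gaussian noise with σ = 64 as in Figure 3 (all names illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 128.0)   # stand-in clean image

def averaged_noisy(f, K, sigma=64.0):
    """Mean of K independently corrupted copies g_i = f + eta_i."""
    noisy = f + rng.normal(0.0, sigma, size=(K,) + f.shape)
    return noisy.mean(axis=0)

for K in (1, 5, 20, 100):
    g_bar = averaged_noisy(f, K)
    # the residual deviation shrinks roughly like sigma / sqrt(K)
    print(K, round(g_bar.std(), 2), round(64.0 / np.sqrt(K), 2))
```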


a b c d e f. Fig 3 – Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner); results of averaging 5, 10, 20, 50, and 100 noisy images.


A frequent application of image subtraction is the enhancement of differences between images.

a b c. Fig 4 – (a) Infrared image of the Washington, D.C. area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images.

Figure 4(b) was obtained by setting to zero the least significant bit of every pixel in Figure 4(a); the two images seem almost the same. Figure 4(c) is the difference between images (a) and (b): black (0) values in (c) indicate locations where there is no difference between images (a) and (b).


Mask mode radiography:

g(x, y) = f(x, y) − h(x, y)

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images, called live images (denoted f(x, y)), of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium.

In g(x, y) the differences between h and f appear as enhanced detail. Since the images are captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries in the area being observed.


a b c d. Fig 5 – Angiography, subtraction example: (a) – mask image; (b) – live image; (c) – difference between (a) and (b); (d) – image (c), enhanced.


An important application of image multiplication (and division) is shading correction. Suppose that an imaging sensor produces images of the form

g(x, y) = f(x, y) h(x, y)

where f(x, y) is the "perfect image" and h(x, y) is the shading function. When the shading function is known, the perfect image is recovered by division:

f(x, y) = g(x, y) / h(x, y)

If h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.


a b c. Fig 6 – Shading correction: (a) – shaded image of a tungsten filament, magnified 130 times; (b) – shading pattern; (c) – corrected image.


Another use of image multiplication is in masking, also called region-of-interest (ROI) operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of the ROI can be arbitrary, although usually it is rectangular.

a b c. Fig 7 – (a) digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b).


In practice, most images are displayed using 8 bits, so image values are expected in the range [0, 255]; for TIFF and JPEG images, conversion to this range is automatic, but the conversion depends on the system used. The difference of two images can produce values in the range [−255, 255], and the addition of two images values in the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure is to compute

f_m = f − min(f)

f_s = K · f_m / max(f_m)

which maps f into the full range [0, K] (K = 255 for 8-bit displays).
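A sketch of this full-range rescaling for an arbitrary-valued difference image (NumPy assumed; the guard against a constant image is an added detail, not from the slides):

```python
import numpy as np

def rescale_to_range(f, K=255):
    """Shift to non-negative values, then scale into [0, K]."""
    fm = f - f.min()
    if fm.max() == 0:                 # constant image: nothing to scale
        return np.zeros_like(f, dtype=np.uint8)
    return np.uint8(np.round(K * fm / fm.max()))

a = np.random.randint(0, 256, (8, 8)).astype(np.int16)
b = np.random.randint(0, 256, (8, 8)).astype(np.int16)
diff = rescale_to_range(a - b)        # values in [-255, 255] -> [0, 255]
```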


Spatial Operations
- are performed directly on the pixels of a given image.

There are three categories of spatial operations:
- single-pixel operations;
- neighborhood operations;
- geometric spatial transformations.

Single-pixel operations
- change the intensity values of individual pixels:

s = T(z)

where z is the intensity of a pixel in the original image and s is the intensity of the corresponding pixel in the processed image.


Neighborhood operations

Let S_xy denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y) based on the intensity values of the points in S_xy. For example, if S_xy is a rectangular neighborhood of size m × n centered at (x, y), we can assign the new intensity value by computing the average value of the pixels in S_xy:

g(x, y) = (1/(m·n)) Σ_{(r, c) ∈ S_xy} f(r, c)

The net effect is local blurring of the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
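A sketch of this neighborhood average using SciPy's uniform filter, which is equivalent to the m × n mean above (a direct double loop over S_xy would work too, just slower):

```python
import numpy as np
from scipy.ndimage import uniform_filter

img = np.random.randint(0, 256, (128, 128)).astype(np.float64)
blurred = uniform_filter(img, size=(11, 11))  # 11x11 neighborhood average
```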


Geometric spatial transformations and image registration
- modify the spatial relationship between pixels in an image;
- these transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules).

A geometric transformation consists of 2 basic operations:
1. a spatial transformation of coordinates;
2. intensity interpolation, which assigns intensity values to the spatially transformed pixels.

The coordinate system transformation:

(x, y) = T[(v, w)]

(v, w) – pixel coordinates in the original image; (x, y) – pixel coordinates in the transformed image.


For example, T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform:

                         t_11  t_12  0
[x y 1] = [v w 1] T = [v w 1] ·  t_21  t_22  0        (AT)
                         t_31  t_32  1

that is, x = t_11 v + t_21 w + t_31 and y = t_12 v + t_22 w + t_32.

This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3×3 matrix equal to the matrix product of the scaling, rotation, and translation matrices from Table 1.


Table 1 – Affine transformations.
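A sketch of composing such matrices in the row-vector convention of (AT); the specific scaling, rotation, and translation forms below are standard assumptions, not copied from Table 1:

```python
import numpy as np

def scale(cx, cy):
    return np.diag([cx, cy, 1.0])

def rotate(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[ c,   s,  0.0],
                     [-s,   c,  0.0],
                     [0.0, 0.0, 1.0]])

def translate(tx, ty):
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [ tx,  ty, 1.0]])

# resize, then rotate, then move: one combined matrix,
# applied as [x y 1] = [v w 1] @ T
T = scale(1.5, 1.5) @ rotate(np.pi / 4) @ translate(20, 30)
x, y, _ = np.array([10.0, 5.0, 1.0]) @ T
```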


The preceding transformations relocate pixels of an image to new locations. To complete the process, we have to assign intensity values to those locations; this is done using intensity interpolation (nearest neighbor, bilinear, or bicubic interpolation).

In practice, equation (AT) can be used in two basic ways.

Forward mapping: scan the pixels of the input image and, for each (v, w), compute the spatial location (x, y) of the corresponding pixel in the output image directly from (AT). Problems:
- intensity assignment when 2 or more pixels of the original image are transformed to the same location in the output image;
- some output locations may have no correspondent in the original image (no intensity assignment).


Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 74: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image Processing

Week 1

Number of bits required to store a digitized image

For an M × N image quantized with k bits per pixel, the required storage is b = M × N × k bits; when M = N, this becomes b = N²k. For example, a 1024 × 1024 image with 8 bits per pixel requires 1024 · 1024 · 8 = 8,388,608 bits (1 MB).

When an image can have 2^k intensity levels, it is referred to as a k-bit image (an image with 256 discrete intensity values is an 8-bit image).

Spatial and Intensity Resolution

Spatial resolution – the smallest discernible detail in an image.
Measures: line pairs per unit distance; dots (pixels) per unit distance.
Image resolution = the largest number of discernible line pairs per unit distance (e.g., 100 line pairs per mm).
Dots per unit distance is the measure commonly used in printing and publishing; in the US it is expressed in dots per inch (dpi). (Newspapers are printed at 75 dpi, glossy brochures at 175 dpi.)

Intensity resolution – the smallest discernible change in intensity level.
The number of intensity levels, L, is determined by hardware considerations: L = 2^k, most commonly with k = 8. In practice, intensity resolution is given by k, the number of bits used to quantize intensity.

Fig. 1 – Reducing spatial resolution: 1250 dpi (upper left), 300 dpi (upper right), 150 dpi (lower left), 72 dpi (lower right).

Reducing the number of gray levels: 256, 128, 64, 32.

Reducing the number of gray levels: 16, 8, 4, 2.

Image Interpolation – used in zooming, shrinking, rotating, and geometric corrections.

Shrinking and zooming (image resizing) are image resampling methods. Interpolation is the process of using known data to estimate values at unknown locations.

Suppose an image of size 500 × 500 pixels has to be enlarged 1.5 times, to 750 × 750 pixels. One way to do this is to create an imaginary 750 × 750 grid with the same spacing as the original, and then shrink it so that it fits exactly over the original image; the pixel spacing in the 750 × 750 grid will be less than in the original image. The problem is the assignment of intensity levels to the points of the new 750 × 750 grid.

Nearest-neighbor interpolation: assign to every point in the new grid (750 × 750) the intensity of the closest pixel (nearest neighbor) in the original grid (500 × 500). This technique has the tendency to produce undesirable effects, like severe distortion of straight edges.

Bilinear interpolation – assign to the new location (x, y) the intensity

    v(x, y) = a·x + b·y + c·x·y + d

where the four coefficients are determined from the 4 equations in 4 unknowns that can be written using the 4 nearest neighbors of the point (x, y). Bilinear interpolation gives much better results than nearest-neighbor interpolation, with a modest increase in computational effort.

Bicubic interpolation – assign to the new location (x, y) an intensity that involves the 16 nearest neighbors of the point:

    v(x, y) = Σ_{i=0}^{3} Σ_{j=0}^{3} c_ij · x^i · y^j

The 16 coefficients c_ij are obtained by solving a 16×16 linear system written from the intensity levels of the 16 nearest neighbors of (x, y).
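As an illustration, here is a minimal sketch of nearest-neighbor and bilinear resampling on a grayscale NumPy array (the function names and the NumPy-based approach are our own choices, not part of the course material):

```python
import numpy as np

def resize_nearest(img, new_h, new_w):
    """Nearest-neighbor resampling: each output pixel copies the closest input pixel."""
    h, w = img.shape
    rows = (np.arange(new_h) * h / new_h).astype(int)   # map output row -> input row
    cols = (np.arange(new_w) * w / new_w).astype(int)   # map output col -> input col
    return img[rows[:, None], cols[None, :]]

def resize_bilinear(img, new_h, new_w):
    """Bilinear resampling: v(x, y) = a*x + b*y + c*x*y + d over the 4 nearest neighbors."""
    h, w = img.shape
    y = np.arange(new_h) * (h - 1) / (new_h - 1)        # fractional input coordinates
    x = np.arange(new_w) * (w - 1) / (new_w - 1)
    y0, x0 = np.floor(y).astype(int), np.floor(x).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    fy, fx = (y - y0)[:, None], (x - x0)[None, :]       # interpolation weights
    top = img[y0][:, x0] * (1 - fx) + img[y0][:, x1] * fx
    bot = img[y1][:, x0] * (1 - fx) + img[y1][:, x1] * fx
    return top * (1 - fy) + bot * fy

img = np.arange(16, dtype=float).reshape(4, 4)
print(resize_nearest(img, 6, 6))
print(resize_bilinear(img, 6, 6))
```

Note how the nearest-neighbor output repeats whole pixel values, while the bilinear output blends the 4 surrounding samples.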

Generally, bicubic interpolation does a better job of preserving fine detail than the bilinear technique. Bicubic interpolation is the standard used in commercial image-editing programs, such as Adobe Photoshop and Corel Photopaint.

Figure 2(a) is the same as Fig. 1(d), which was obtained by reducing the resolution of the 1250 dpi image in Fig. 1(a) to 72 dpi (the size shrank from 3692 × 2812 to 213 × 162 pixels) and then zooming the reduced image back to its original size; nearest-neighbor interpolation was used both for shrinking and for zooming. Figures 2(b) and (c) were generated using the same steps but with bilinear and bicubic interpolation, respectively. Figures 2(d), (e), and (f) were obtained by reducing the resolution from 1250 dpi to 150 dpi (instead of 72 dpi).

Fig. 2 – Interpolation examples for zooming and shrinking (nearest-neighbor, bilinear, bicubic).

Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors:

    (x+1, y), (x−1, y), (x, y+1), (x, y−1)

This set of pixels, called the 4-neighbors of p, is denoted by N4(p).

The 4 diagonal neighbors of p have coordinates

    (x+1, y+1), (x+1, y−1), (x−1, y+1), (x−1, y−1)

and are denoted ND(p). The horizontal, vertical, and diagonal neighbors together are called the 8-neighbors of p, denoted N8(p).

If (x, y) is on the border of the image, some of the neighbor locations in ND(p) and N8(p) fall outside the image.
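A small sketch of these neighborhoods (our own helper, with bounds checking for border pixels):

```python
def neighbors(x, y, h, w):
    """Return the 4-, diagonal, and 8-neighborhoods of (x, y) in an h x w image."""
    n4 = [(x+1, y), (x-1, y), (x, y+1), (x, y-1)]
    nd = [(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)]
    inside = lambda p: 0 <= p[0] < h and 0 <= p[1] < w   # drop locations outside the image
    return list(filter(inside, n4)), list(filter(inside, nd)), list(filter(inside, n4 + nd))

print(neighbors(0, 0, 5, 5))   # border pixel: some neighbors fall outside and are dropped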


Adjacency, Connectivity, Regions, Boundaries

Denote by V the set of intensity levels used to define adjacency:
- in a binary image, V ⊆ {0, 1} (V = {0} or V = {1});
- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, 1, …, 255}.

We consider 3 types of adjacency:
(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if q ∈ N4(p);
(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if q ∈ N8(p);
(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if
    q ∈ N4(p), or
    q ∈ ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.

Mixed adjacency is a modification of 8-adjacency. It is introduced to eliminate the ambiguities that often arise when 8-adjacency is used. Consider the example:

Binary image (V = {1}):

    0 1 1
    0 1 0
    0 0 1

The three pixels at the top (first line) in the above example show multiple (ambiguous) 8-adjacency, as indicated by the dashed lines in the original figure. This ambiguity is removed by using m-adjacency.

A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

    (x0, y0) = (x, y), (x1, y1), …, (xn, yn) = (s, t)

where (xi, yi) and (xi−1, yi−1) are adjacent for i = 1, 2, …, n.

The length of the path is n. If (x0, y0) = (xn, yn), the path is closed. Depending on the type of adjacency considered, the paths are 4-, 8-, or m-paths.

Let S denote a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting only of pixels from S. S is a connected set if there is a path in S between any 2 pixels of S.

Let R be a subset of pixels in an image. R is a region of the image if R is a connected set. Two regions R1 and R2 are said to be adjacent if R1 ∪ R2 forms a connected set; regions that are not adjacent are said to be disjoint. When referring to regions, only 4- and 8-adjacency are considered.
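For instance, a breadth-first search over 4-neighbors decides whether two pixels are connected in a set S; a minimal sketch (our own illustration, not from the course material):

```python
from collections import deque
import numpy as np

def connected_in_S(S, p, q):
    """True if pixels p and q are 4-connected within the set S (a boolean mask)."""
    if not (S[p] and S[q]):
        return False
    seen, todo = {p}, deque([p])
    while todo:
        x, y = todo.popleft()
        if (x, y) == q:
            return True
        for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):   # N4 neighbors
            if 0 <= nx < S.shape[0] and 0 <= ny < S.shape[1] \
               and S[nx, ny] and (nx, ny) not in seen:
                seen.add((nx, ny))
                todo.append((nx, ny))
    return False

S = np.array([[0, 1, 1],
              [0, 1, 0],
              [0, 0, 1]], dtype=bool)
print(connected_in_S(S, (0, 1), (1, 1)))   # True: a 4-path exists
print(connected_in_S(S, (0, 1), (2, 2)))   # False: (2, 2) is only diagonally adjacent
```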

Suppose that an image contains K disjoint regions R_k, k = 1, …, K, none of which touches the image border. Let R_u denote the union of all the regions, R_u = R_1 ∪ … ∪ R_K, and let (R_u)^c denote its complement. We call all the points in R_u the foreground of the image, and the points in (R_u)^c the background of the image.

The boundary (border or contour) of a region R is the set of points that are adjacent to points in the complement of R, (R)^c; that is, the set of pixels in the region that have at least one background neighbor. This definition is referred to as the inner border, to distinguish it from the notion of outer border, which is the corresponding border in the background.

Distance measures

For pixels p, q, and z, with coordinates (x, y), (s, t), and (v, w) respectively, D is a distance function (or metric) if:

(a) D(p, q) ≥ 0, and D(p, q) = 0 iff p = q;
(b) D(p, q) = D(q, p);
(c) D(p, z) ≤ D(p, q) + D(q, z).

The Euclidean distance between p and q is defined as

    De(p, q) = [(x − s)² + (y − t)²]^(1/2)

The pixels q for which De(p, q) ≤ r are the points contained in a disk of radius r centered at (x, y).

The D4 distance (also called city-block distance) between p and q is defined as

    D4(p, q) = |x − s| + |y − t|

The pixels q for which D4(p, q) ≤ r form a diamond centered at (x, y); for example, the pixels with D4 ≤ 2:

        2
      2 1 2
    2 1 0 1 2
      2 1 2
        2

The pixels with D4 = 1 are the 4-neighbors of (x, y).

The D8 distance (also called chessboard distance) between p and q is defined as

    D8(p, q) = max(|x − s|, |y − t|)

The pixels q for which D8(p, q) ≤ r form a square centered at (x, y); for example, the pixels with D8 ≤ 2:

    2 2 2 2 2
    2 1 1 1 2
    2 1 0 1 2
    2 1 1 1 2
    2 2 2 2 2

The pixels with D8 = 1 are the 8-neighbors of (x, y).

The D4 and D8 distances are independent of any paths that might exist between p and q, because these distances involve only the coordinates of the points.
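A quick numeric check of the three distances (our own sketch):

```python
import math

def d_euclid(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])     # De: disk-shaped neighborhoods

def d4(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])      # city block: diamond-shaped

def d8(p, q):
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))  # chessboard: square-shaped

p, q = (0, 0), (2, 1)
print(d_euclid(p, q), d4(p, q), d8(p, q))   # 2.236..., 3, 2
```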

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis. Consider two 2×2 images:

    [a11 a12]        [b11 b12]
    [a21 a22]        [b21 b22]

Array product:

    [a11 a12]   [b11 b12]   [a11·b11  a12·b12]
    [a21 a22] × [b21 b22] = [a21·b21  a22·b22]

Matrix product:

    [a11 a12]   [b11 b12]   [a11·b11 + a12·b21   a11·b12 + a12·b22]
    [a21 a22] · [b21 b22] = [a21·b11 + a22·b21   a21·b12 + a22·b22]

We assume array operations throughout, unless stated otherwise.
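In NumPy terms, the distinction is between the elementwise `*` operator and the matrix-multiplication `@` operator (a sketch):

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])

print(a * b)   # array (elementwise) product: [[ 5, 12], [21, 32]]
print(a @ b)   # matrix product:              [[19, 22], [43, 50]]
```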

Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether a method is linear or nonlinear. Consider an operator H that produces an output image g from an input image f:

    H[f(x, y)] = g(x, y)

H is said to be a linear operator if

    H[a·f1(x, y) + b·f2(x, y)] = a·H[f1(x, y)] + b·H[f2(x, y)]

for any scalars a, b and any images f1, f2.

Example of a nonlinear operator: the maximum value of the pixels of an image, H[f] = max{f(x, y)}. Take

    f1 = [0 2]    f2 = [6 5]    a = 1,  b = −1
         [2 3]         [4 7]

Then

    max{1·f1 + (−1)·f2} = max{ [−6 −3] } = −2
                               [−2 −4]

while

    1·max{f1} + (−1)·max{f2} = 3 + (−1)·7 = −4

The two results differ, so the max operator is nonlinear.
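The same linearity check in NumPy (our own sketch):

```python
import numpy as np

f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 5], [4, 7]])
a, b = 1, -1

lhs = np.max(a * f1 + b * f2)            # H[a f1 + b f2] = -2
rhs = a * np.max(f1) + b * np.max(f2)    # a H[f1] + b H[f2] = -4
print(lhs, rhs, lhs == rhs)              # -2 -4 False -> max is nonlinear
```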

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise:

    g(x, y) = f(x, y) + η(x, y)

where f(x, y) is the noiseless image and η(x, y) is the noise, assumed uncorrelated and with zero average value.

For a random variable z with mean m, E[(z − m)²] is the variance (E(·) is the expected value). The covariance of two random variables z1 and z2 is defined as E[(z1 − m1)(z2 − m2)]; the two random variables are uncorrelated when their covariance is 0.

Objective: reduce noise by averaging a set of noisy images g_i(x, y) (a technique frequently used in image enhancement):

    ḡ(x, y) = (1/K) Σ_{i=1}^{K} g_i(x, y)

If the noise satisfies the properties stated above, we have

    E{ḡ(x, y)} = f(x, y)    and    σ²_ḡ(x, y) = (1/K) σ²_η(x, y)

where E{ḡ(x, y)} is the expected value of ḡ, and σ²_ḡ(x, y) and σ²_η(x, y) are the variances of ḡ and η, respectively. The standard deviation (square root of the variance) at any point of the average image is

    σ_ḡ(x, y) = (1/√K) σ_η(x, y)

As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because E{ḡ(x, y)} = f(x, y), this means that ḡ(x, y) approaches f(x, y) as the number of noisy images used in the averaging process increases.

An important application of image averaging is in the field of astronomy, where imaging under very low light levels frequently causes sensor noise to render single images virtually useless for analysis. Figure 3 shows an 8-bit image in which corruption was simulated by adding Gaussian noise with zero mean and a standard deviation of 64 intensity levels, together with the results of averaging 5, 10, 20, 50, and 100 such noisy images.

Fig. 3 – Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (upper left), and the results of averaging 5, 10, 20, 50, and 100 noisy images (panels (b)–(f)).
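A simulation of this effect under the stated noise model (the constant synthetic image and the seed are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 100.0)                 # noiseless image f(x, y)

for K in (1, 5, 10, 20, 50, 100):
    noisy = [f + rng.normal(0, 64, f.shape) for _ in range(K)]   # g_i = f + eta
    g_bar = np.mean(noisy, axis=0)                               # average of K images
    # the residual standard deviation should fall like 64 / sqrt(K)
    print(K, round(np.std(g_bar - f), 1), round(64 / np.sqrt(K), 1))
```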

A frequent application of image subtraction is in the enhancement of differences between images.

Fig. 4 – (a) Infrared image of the Washington, DC area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images.

Figure 4(b) was obtained by setting to zero the least significant bit of every pixel in Figure 4(a); the two images seem almost the same. Figure 4(c) is the difference between images (a) and (b). Black (0) values in Figure 4(c) indicate locations where there is no difference between images (a) and (b).

Mask mode radiography:

    g(x, y) = f(x, y) − h(x, y)

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images, called live images (denoted f(x, y)), of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium.

In g(x, y) the differences between h and f appear as enhanced detail. With images captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries of the area being observed.

Fig. 5 – Angiography subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c), enhanced.

An important application of image multiplication (and division) is shading correction. Suppose that an imaging sensor produces images in the form

    g(x, y) = f(x, y) · h(x, y)

where f(x, y) is the "perfect image" and h(x, y) is the shading function. When the shading function is known,

    f(x, y) = g(x, y) / h(x, y)

If h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.
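A minimal sketch of the division step, with a synthetic shading function of our own choosing:

```python
import numpy as np

h, w = 128, 128
shade = 0.3 + 0.7 * np.tile(np.linspace(0.0, 1.0, w), (h, 1))   # h(x, y): brighter to the right
f = np.full((h, w), 200.0)                                      # the "perfect" image f(x, y)
g = f * shade                                                   # g = f * h, what the sensor delivers

f_est = g / shade                                               # divide out the known shading
print(np.allclose(f_est, f))                                    # True
```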

Fig. 6 – Shading correction: (a) shaded image of a tungsten filament, magnified approximately 130 times; (b) shading pattern; (c) corrected image.

Another use of image multiplication is in masking, also called region-of-interest (ROI) operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of an ROI can be arbitrary, although rectangular shapes are the most common.

Fig. 7 – (a) digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b).

In practice, most images are displayed using 8 bits, so image values are expected to lie in the range [0, 255] (for TIFF or JPEG images, conversion to this range is automatic, and depends on the system used). However, the difference of two images can produce values in the range [−255, 255], and the addition of two images values in the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure is to compute

    f_m = f − min(f)
    f_s = K · f_m / max(f_m)

which maps f into the range [0, K] (K = 255 for an 8-bit display).
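The same rescaling as a sketch (the helper name is our own):

```python
import numpy as np

def rescale(f, K=255):
    """Map an arbitrary-range image into [0, K] as in the formula above."""
    fm = f - f.min()
    return K * fm / fm.max()   # assumes f is not constant (fm.max() > 0)

diff = np.array([[-255.0, 0.0], [128.0, 255.0]])   # e.g. a difference image
print(rescale(diff))   # [[0, 127.5], [191.5, 255]]
```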

Spatial Operations

Spatial operations are performed directly on the pixels of a given image. There are three categories of spatial operations:
- single-pixel operations;
- neighborhood operations;
- geometric spatial transformations.

Single-pixel operations

These change the intensity values of individual pixels:

    s = T(z)

where z is the intensity of a pixel in the original image and s is the intensity of the corresponding pixel in the processed image.

Neighborhood operations

Let S_xy denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y) based on the intensities of the points in S_xy. For example, if S_xy is a rectangular neighborhood of size m × n centered at (x, y), we can assign the new intensity value by computing the average value of the pixels in S_xy:

    g(x, y) = (1 / (m·n)) Σ_{(r,c) ∈ S_xy} f(r, c)

The net effect is to perform local blurring of the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
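A sketch of this neighborhood averaging as a 3×3 mean filter (clipping the neighborhood at image edges is our own boundary-handling choice):

```python
import numpy as np

def local_mean(f, m=3, n=3):
    """Average each pixel over an m x n neighborhood S_xy (edges clipped to the image)."""
    h, w = f.shape
    g = np.empty_like(f, dtype=float)
    for x in range(h):
        for y in range(w):
            r0, r1 = max(0, x - m // 2), min(h, x + m // 2 + 1)
            c0, c1 = max(0, y - n // 2), min(w, y + n // 2 + 1)
            g[x, y] = f[r0:r1, c0:c1].mean()   # average over S_xy
    return g

f = np.zeros((7, 7)); f[3, 3] = 9.0
print(local_mean(f)[2:5, 2:5])   # the single bright detail is blurred into a blob
```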

Geometric spatial transformations and image registration

Geometric transformations modify the spatial relationship between pixels in an image. They are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules).

A geometric transformation consists of 2 basic operations:
1. a spatial transformation of coordinates;
2. an intensity interpolation that assigns intensity values to the spatially transformed pixels.

The coordinate system transformation is

    (x, y) = T[(v, w)]

where (v, w) are pixel coordinates in the original image and (x, y) are the corresponding pixel coordinates in the transformed image. For example, T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform:

    [x y 1] = [v w 1] · T = [v w 1] · [t11  t12  0]
                                      [t21  t22  0]
                                      [t31  t32  1]

that is,

    x = t11·v + t21·w + t31
    y = t12·v + t22·w + t32        (AT)

This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3×3 matrix equal to the matrix product of the scaling, rotation, and translation matrices from Table 1.
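For instance, composing standard scaling, rotation, and translation matrices into one T (a sketch; the matrix layouts follow the row-vector convention of equation (AT)):

```python
import numpy as np

def scale(cx, cy):
    return np.array([[cx, 0, 0], [0, cy, 0], [0, 0, 1.0]])

def rotate(theta):                      # rotation by theta about the origin
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1.0]])

def translate(tx, ty):
    return np.array([[1, 0, 0], [0, 1, 0], [tx, ty, 1.0]])

T = scale(2, 2) @ rotate(np.pi / 2) @ translate(10, 0)   # one combined 3x3 matrix
x, y, _ = np.array([1.0, 0.0, 1.0]) @ T                  # [x y 1] = [v w 1] T
print(x, y)   # (1,0) -> scaled to (2,0) -> rotated to (0,2) -> translated to (10,2)
```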

Table 1 – Affine transformations (identity, scaling, rotation, translation, shear).

The preceding transformations relocate pixels to new locations. To complete the process, we have to assign intensity values to those locations, which is done by intensity interpolation (nearest-neighbor, bilinear, or bicubic interpolation, for example).

In practice, equation (AT) can be used in two basic ways.

Forward mapping: scan the pixels of the input image and, for each (v, w), compute the spatial location (x, y) of the corresponding pixel in the output image using (AT) directly. Problems:
- intensity assignment when 2 or more pixels of the original image are transformed to the same location in the output image;
- some output locations may receive no correspondent from the original image (no intensity assignment).

Inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image as (v, w) = T⁻¹[(x, y)], then interpolate among the nearest input pixels to determine the intensity of the output pixel. Inverse mappings are more efficient to implement than forward mappings, and are used in numerous commercial implementations of spatial transformations (MATLAB, for example).
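A sketch of inverse mapping with nearest-neighbor interpolation (our own illustration; T is the 3×3 matrix of equation (AT)):

```python
import numpy as np

def warp_inverse(img, T):
    """Warp img by the affine matrix T using inverse mapping + nearest neighbor."""
    h, w = img.shape
    Tinv = np.linalg.inv(T)
    out = np.zeros_like(img)
    xs, ys = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    ones = np.ones_like(xs)
    # (v, w) = [x y 1] T^-1 for every output location (x, y)
    vw = np.stack([xs, ys, ones], axis=-1).reshape(-1, 3) @ Tinv
    v = np.rint(vw[:, 0]).astype(int).reshape(h, w)
    u = np.rint(vw[:, 1]).astype(int).reshape(h, w)
    inside = (v >= 0) & (v < h) & (u >= 0) & (u < w)   # output locations with a source pixel
    out[inside] = img[v[inside], u[inside]]
    return out

img = np.eye(8)
T = np.array([[1, 0, 0], [0, 1, 0], [2, 3, 1.0]])   # translation by (2, 3)
print(warp_inverse(img, T))
```

Locations whose inverse-mapped source falls outside the input simply stay 0, which is exactly the "no correspondent" problem handled gracefully by inverse mapping.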


Image registration – align two or more images of the same scene.

In image registration we have available the input and output images, but the specific transformation that produced the output image from the input is generally unknown. The problem is to estimate the transformation function and then use it to register the two images. For example:
- it may be of interest to align (register) two or more images taken at approximately the same time but with different imaging systems (say, an MRI scanner and a PET scanner);
- or to align images of a given location taken by the same instrument at different moments in time (satellite images).

The problem can be solved using tie points (also called control points), which are corresponding points whose locations are known precisely in the input and reference images.

How to select tie points:
- interactively (manually selecting them);
- with algorithms that try to detect these points automatically;
- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors; these objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points.

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points in both the input image and the reference image. A simple model based on a bilinear approximation is

    x = c1·v + c2·w + c3·v·w + c4
    y = c5·v + c6·w + c7·v·w + c8

where (v, w) and (x, y) are the coordinates of corresponding tie points (the 4 pairs give an 8×8 linear system for the coefficients c_i).
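Note that the 8×8 system decouples into two 4×4 systems, one for the x coefficients and one for the y coefficients. A sketch of the estimation (the tie-point values below are made up for illustration):

```python
import numpy as np

# 4 tie points in the input image (v, w) and in the reference image (x, y)
vw = np.array([[0, 0], [0, 10], [10, 0], [10, 10]], dtype=float)
xy = np.array([[1, 2], [2, 13], [12, 1], [14, 13]], dtype=float)

# each row [v, w, v*w, 1] multiplies (c1..c4) to give x, and (c5..c8) to give y
A = np.column_stack([vw[:, 0], vw[:, 1], vw[:, 0] * vw[:, 1], np.ones(4)])
cx = np.linalg.solve(A, xy[:, 0])   # c1..c4
cy = np.linalg.solve(A, xy[:, 1])   # c5..c8

print(A @ cx, A @ cy)               # reproduces the 4 tie points exactly
v, w = 5.0, 5.0                     # map any other point of the input image
print(np.array([v, w, v * w, 1]) @ cx, np.array([v, w, v * w, 1]) @ cy)
```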

When 4 tie points are insufficient to obtain a satisfactory registration, a frequently used approach is to select a larger number of tie points, subdivide the image into rectangular subregions each marked by a group of 4 tie points, and apply the transformation model described above on each subregion. The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometric distortion.

Registration example: (a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c).

Probabilistic Methods

Let z_i, i = 0, 1, …, L−1, denote the values of all possible intensities in an M × N digital image, and let p(z_k) be the probability that the intensity level z_k occurs in the given image:

    p(z_k) = n_k / (M·N)

where n_k is the number of times intensity z_k occurs in the image (M·N is the total number of pixels). Of course,

    Σ_{k=0}^{L−1} p(z_k) = 1

The mean (average) intensity of an image is given by

    m = Σ_{k=0}^{L−1} z_k · p(z_k)

The variance of the intensities is

    σ² = Σ_{k=0}^{L−1} (z_k − m)² · p(z_k)

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

    μ_n(z) = Σ_{k=0}^{L−1} (z_k − m)^n · p(z_k)

Note that μ_0(z) = 1, μ_1(z) = 0, and μ_2(z) = σ². The third moment indicates the symmetry of the intensity distribution:
- μ_3(z) > 0: the intensities are biased to values higher than the mean;
- μ_3(z) < 0: the intensities are biased to values lower than the mean;
- μ_3(z) = 0: the intensities are distributed approximately equally on both sides of the mean.
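These statistics follow directly from the normalized histogram; a sketch on a synthetic image (the helper names are our own):

```python
import numpy as np

img = np.random.default_rng(1).integers(0, 256, size=(64, 64))
L = 256

n_k = np.bincount(img.ravel(), minlength=L)   # occurrences of each intensity z_k
p = n_k / img.size                            # p(z_k) = n_k / (M N); p.sum() == 1
z = np.arange(L)

m = np.sum(z * p)                             # mean intensity
var = np.sum((z - m) ** 2 * p)                # variance (mu_2)
mu3 = np.sum((z - m) ** 3 * p)                # third moment: its sign shows the bias
print(m, np.sqrt(var), mu3)
```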

Fig. 1 – (a) low contrast; (b) medium contrast; (c) high contrast.

Figure 1(a) – standard deviation 14.3 (variance = 204.5);
Figure 1(b) – standard deviation 31.6 (variance = 998.6);
Figure 1(c) – standard deviation 49.2 (variance = 2420.6).

Intensity Transformations and Spatial Filtering

    g(x, y) = T[f(x, y)]

where f(x, y) is the input image, g(x, y) is the output image, and T is an operator on f defined over a neighborhood of (x, y). The neighborhood of the point (x, y), denoted S_xy, usually is rectangular, centered on (x, y), and much smaller in size than the image.

Spatial filtering: the operator T (the neighborhood together with the operation applied on it) is called a spatial filter (also: spatial mask, kernel, template, or window).

When S_xy reduces to the single point (x, y), T becomes an intensity (gray-level, or mapping) transformation function:

    s = T(r)

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2 (left): T produces an output image of higher contrast than the original, by darkening the intensity levels below k and brightening the levels above k; this technique is called contrast stretching.

Fig. 2 – Intensity transformation functions: (left) contrast stretching; (right) thresholding.

Figure 2 (right): T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image negatives

The negative of an image with intensity levels in [0, L−1] is obtained using the function

    s = T(r) = L − 1 − r

This is the equivalent of a photographic negative, and is a technique suited for enhancing white or gray detail embedded in dark regions of an image.

(Figure: original image (left) and its negative (right).)

Log transformations

    s = T(r) = c · log(1 + r),   with c a positive constant and r ≥ 0

(Figure: plots of some basic intensity transformation functions.)

This transformation maps a narrow range of low intensity values in the input into a wider range of output values. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values; the inverse log transformation does the opposite. The log function also compresses the dynamic range of images with large variations in pixel values.

Figure 4(a) – intensity values in the range 0 to 1.5 × 10^6.
Figure 4(b) – log transformation of Figure 4(a) with c = 1; the range becomes 0 to 6.2.

Fig. 4 – (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.

Power-Law (Gamma) Transformations

    s = T(r) = c · r^γ,   with c and γ positive constants (sometimes written s = c · (r + ε)^γ)

(Figure: plots of the gamma transformation for different values of γ, with c = 1.)

Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. Curves with γ > 1 have the opposite effect of those generated with γ < 1, and c = γ = 1 gives the identity transformation.

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
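The three point transformations above in a few lines of NumPy (a sketch; the scaling constants are our own choices, picked so the outputs stay in [0, 255]):

```python
import numpy as np

L = 256
r = np.linspace(0, L - 1, L)                    # input intensities

negative = (L - 1) - r                          # s = L - 1 - r
c_log = (L - 1) / np.log(1 + (L - 1))           # scale so that s(L-1) = L-1
log_t = c_log * np.log(1 + r)                   # s = c log(1 + r)

gamma = 0.4
c_gam = (L - 1) / (L - 1) ** gamma              # scale so that s(L-1) = L-1
gamma_t = c_gam * r ** gamma                    # s = c r^gamma (brightens dark pixels)

print(negative[[0, 128, 255]], log_t[[0, 128, 255]].round(1), gamma_t[[0, 128, 255]].round(1))
```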

(a) aerial image; (b)–(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively.

Piecewise-Linear Transformation Functions

Contrast stretching – a process that expands the range of intensity levels in an image, so that it spans the full intensity range of the recording or display device.

Fig. 5 – (a) transformation function; (b) low-contrast image; (c) result of contrast stretching; (d) result of thresholding.

The transformation is defined by two control points (r1, s1) and (r2, s2):

    T(r) = (s1 / r1) · r                                  for r ∈ [0, r1]
    T(r) = s1 + (s2 − s1) / (r2 − r1) · (r − r1)          for r ∈ [r1, r2]
    T(r) = s2 + (L − 1 − s2) / (L − 1 − r2) · (r − r2)    for r ∈ [r2, L − 1]

If r1 = s1 and r2 = s2, the transformation is the identity (no change). If r1 = r2 with s1 = 0 and s2 = L − 1, it is a thresholding function.

Figure 5(b) shows an 8-bit image with low contrast. Figure 5(c) shows the result of contrast stretching obtained by setting (r1, s1) = (r_min, 0) and (r2, s2) = (r_max, L − 1), where r_min and r_max denote the minimum and maximum gray levels in the image, respectively; the transformation stretched the levels linearly from their original range to the full range [0, L − 1]. Figure 5(d) shows the result of thresholding with (r1, s1) = (m, 0) and (r2, s2) = (m, L − 1), where m is the mean gray level of the image.

The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.
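A sketch of the piecewise-linear stretch: np.interp implements exactly this kind of piecewise-linear map through the control points (the control-point values below are illustrative):

```python
import numpy as np

def contrast_stretch(img, r1, s1, r2, s2, L=256):
    """Piecewise-linear map through (0, 0), (r1, s1), (r2, s2), (L-1, L-1)."""
    return np.interp(img, [0, r1, r2, L - 1], [0, s1, s2, L - 1])

img = np.random.default_rng(2).integers(90, 160, size=(4, 4))    # low-contrast image
rmin, rmax = img.min(), img.max()
print(contrast_stretch(img, rmin, 0, rmax, 255).round())          # full-range stretch

m = img.mean()
print(contrast_stretch(img, m, 0, m + 1e-9, 255).round())         # ~thresholding at m
```

The tiny 1e-9 offset keeps the breakpoints strictly increasing, which np.interp requires, while approximating the r1 = r2 thresholding case.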

Intensity-level slicing

Intensity-level slicing means highlighting a specific range of intensities in an image. There are two approaches (both are sketched below):
1. display in one value (white, for example) all the intensities in the range of interest, and in another (say, black) all other intensities – this produces a binary image;
2. brighten (or darken) the desired range of intensities, but leave all other intensities in the image unchanged.
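A sketch of both approaches (the range [A, B] and the highlight value are illustrative choices):

```python
import numpy as np

def slice_binary(img, A, B, hi=255, lo=0):
    """Approach 1: white inside [A, B], black elsewhere (binary output)."""
    return np.where((img >= A) & (img <= B), hi, lo)

def slice_preserve(img, A, B, hi=255):
    """Approach 2: brighten [A, B], leave all other intensities unchanged."""
    return np.where((img >= A) & (img <= B), hi, img)

img = np.arange(0, 256, 32).reshape(2, 4)
print(slice_binary(img, 64, 160))
print(slice_preserve(img, 64, 160))
```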

Figure 6 (left) shows an aortic angiogram near the kidney area. The purpose of intensity slicing here is to highlight the major blood vessels, which appear brighter as a result of an injected contrast medium. Figure 6 (middle) shows the result of applying the first technique to a band near the top of the intensity scale (highlight the range [A, B], reduce all other intensities to a lower level). This type of enhancement produces a binary image, which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example).

In Figure 6 (right) the second technique was used (highlight [A, B], preserve all other intensities): a band of intensities around the mean gray level of the image was set to black, while the other intensities remain unchanged.

Fig. 6 – Aortic angiogram and intensity-sliced versions.

Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2. Bit-plane slicing highlights the contribution made to the whole image appearance by each of these bits: an 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit, plane 8 – the highest-order bit).


The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding transformation that maps all intensities between 0 and 127 to 0, and all intensities between 128 and 255 to 1.

Bit-plane slicing is useful for analyzing the relative importance of each bit in the image, which helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
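A sketch of extracting the planes (the reconstruction from the two highest planes shows why the high-order bits matter most):

```python
import numpy as np

img = np.random.default_rng(3).integers(0, 256, size=(4, 4), dtype=np.uint8)

planes = [(img >> b) & 1 for b in range(8)]      # plane 1 (bit 0) ... plane 8 (bit 7)
print(planes[7])                                 # 8-th plane == thresholding at 128

top2 = (planes[7] << 7) | (planes[6] << 6)       # rebuild from the two highest planes
print(img, top2, sep="\n")                       # coarse but recognizable quantization
```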

  • DIP 1 2017
  • DIP 02 (2017)
Page 75: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image Processing

Week 1

Spatial and Intensity Resolution Spatial resolution ndash the smallest discernible detail in an image

Measures line pairs per unit distance dots (pixels) per unit distance

Image resolution = the largest number of discernible line pairs per unit distance

(eg 100 line pairs per mm)

Dots per unit distance are commonly used in printing and publishing

In US the measure is expressed in dots per inch (dpi)

(newspapers are printed with 75 dpi glossy brochures at 175 dpi)

Intensity resolution ndash the smallest discernible change in intensity level

The number of intensity levels (L) is determined by hardware considerations

L=2k ndash most common k = 8

Intensity resolution in practice is given by k (number of bits used to quantize intensity)

Digital Image Processing

Week 1

Fig1 Reducing spatial resolution 1250 dpi(upper left) 300 dpi (upper right)

150 dpi (lower left) 72 dpi (lower right)

Digital Image Processing

Week 1

Reducing the number of gray levels 256 128 64 32

Digital Image Processing

Week 1

Reducing the number of gray levels 16 8 4 2

Digital Image Processing

Week 1

Image Interpolation - used in zooming shrinking rotating and geometric corrections

Shrinking zooming ndash image resizing ndash image resampling methods

Interpolation is the process of using known data to estimate values at unknown locations

Suppose we have an image of size 500 times 500 pixels that has to be enlarged 15 times to

750times750 pixels One way to do this is to create an imaginary 750 times 750 grid with the

same spacing as the original and then shrink it so that it fits exactly over the original

image The pixel spacing in the 750 times 750 grid will be less than in the original image

Problem assignment of intensity-level in the new 750 times 750 grid

Nearest neighbor interpolation assign for every point in the new grid (750 times 750) the

intensity of the closest pixel (nearest neighbor) from the oldoriginal grid (500 times 500)

This technique has the tendency to produce undesirable effects like severe distortion of

straight edges

Digital Image Processing

Week 1

Bilinear interpolation ndash assign for the new (x y) location the following intensity ( )v x y a x b y c x y d

where the four coefficients are determined from the 4 equations in 4 unknowns that can

be written using the 4 nearest neighbors of point (x y)

Bilinear interpolation gives much better results than nearest neighbor interpolation with a

modest increase in computational effort

Bicubic interpolation ndash assign for the new (x y) location an intensity that involves the 16

nearest neighbors of the point 3 3

0 0

( ) i ji j

i jv x y c x y

The coefficients cij are obtained solving a 16x16 linear system

intensity levels of the 16 nearest neighbors of 3 3

0 0

( )i ji j

i jc x y x y

Digital Image Processing

Week 1

Generally bicubic interpolation does a better job of preserving fine detail than the

bilinear technique Bicubic interpolation is the standard used in commercial image editing

programs such as Adobe Photoshop and Corel Photopaint

Figure 2 (a) is the same as Fig 1 (d) which was obtained by reducing the resolution of

the 1250 dpi in Fig 1(a) to 72 dpi (the size shrank from 3692 times 2812 to 213 times 162) and

then zooming the reduced image back to its original size To generate Fig 1(d) nearest

neighbor interpolation was used (both for shrinking and zooming)

Figures 2(b) and (c) were generated using the same steps but using bilinear and bicubic

interpolation respectively Figures 2(d)+(e)+(f) were obtained by reducing the resolution

from 1250 dpi to 150 dpi (instead of 72 dpi)

Digital Image Processing

Week 1

Fig 2 ndash Interpolation examples for zooming and shrinking (nearest neighbor linear bicubic)

Digital Image Processing

Week 1

Neighbors of a Pixel

A pixel p at coordinates (x y) has 4 horizontal and vertical neighbors horizontal vertical( 1 ) ( 1 ) ( 1) ( 1)x y x y x y x y

This set of pixels called the 4-neighbors of p denoted by N4 (p)

The 4 diagonal neighbors of p have coordinates ( 1 1) ( 1 1) ( 1 1) ( 1 1)x y x y x y x y

and are denoted ND(p)

The horizontal vertical and diagonal neighbors are called the 8-neighbors of p denoted

N8 (p)

If (xy) is on the border of the image some of the neighbor locations in ND(p) and N8(p)

fall outside the image

Digital Image Processing

Week 1

Adjacency Connectivity Regions Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image V 01 (V=0 V=1)

- in a gray-scale image with 256 possible gray-levels V can be any subset of 0255

We consider 3 types of adjacency

(a) 4-adjacency two pixels p and q with values from V are 4-adjacent if 4( )q N p

(b) 8-adjacency two pixels p and q with values from V are 8-adjacent if 8( )q N p

(c) m-adjacency (mixed adjacency) two pixels p and q with values from V are

m-adjacent if

4( )q N p or

( )Dq N p and the set 4 4( ) ( )N p N q has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency It is introduced to eliminate the

ambiguities that often arise when 8-adjacency is used Consider the example

Digital Image Processing

Week 1

binary image

0 1 1 0 1 1 0 1 1

1 0 1 0 0 1 0 0 1 0

0 0 1 0 0 1 0 0 1

V

The three pixels at the top (first line) in the above example show multiple (ambiguous)

8-adjacency as indicated by the dashed lines This ambiguity is removed by using

m-adjacency

Digital Image Processing

Week 1

A (digital) path (or curve) from pixel p with coordinates (xy) to q with coordinates (st)

is a sequence of distinct pixels with coordinates

and are adjacent 0 0 1 1

1 1

( ) ( ) ( ) ( ) ( )( ) ( ) 12

n n

i i i i

x y x y x y x y s tx y x y i n

The length of the path is n If 0 0( ) ( )n nx y x y the path is closed

Depending on the type of adjacency considered the paths are 4- 8- or m-paths

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image R is a region of the image if R is a connected set

Two regions R1 and R2 are said to be adjacent if 1 2R R form a connected set Regions

that are not adjacent are said to be disjoint When referring to regions only 4- and

8-adjacency are considered

Digital Image Processing

Week 1

Suppose that an image contains K disjoint regions 1 kR k K none of which

touches the image border

the complement of 1

( )K

cu k u u

k

R R R R

We call all the points in Ru the foreground of the image and the points in ( )cuR the

background of the image

The boundary (border or contour) of a region R is the set of points that are adjacent to

points in the complement of R (R)c The border of an image is the set of pixels in the

region that have at least one background neighbor This definition is referred to as the

inner border to distinguish it from the notion of outer border which is the corresponding

border in the background

Digital Image Processing

Week 1

Distance measures

For pixels p q and z with coordinates (xy) (st) and (vw) respectively D is a distance

function or metric if

(a) D(p q) ge 0 D(p q) = 0 iff p=q

(b) D(p q) = D(q p)

(c) D(p z) le D(p q) + D(q z)

The Euclidean distance between p and q is defined as 1

2 2 2 22( ) ( ) ( ) ( ) ( )eD p q x s y t x s y t

The pixels q for which ( )eD p q r are the points contained in a disk of radius r

centered at (x y)

Digital Image Processing

Week 1

The D4 distance (also called city-block distance) between p and q is defined as

4( ) | | | |D p q x s y t

The pixels q for which 4( )D p q r form a diamond centered at (xy)

4

22 1 2

2 2 1 0 1 22 1 2

2

D

The pixels with D4 = 1 are the 4-neighbors of (x y)

The D8 distance (called the chessboard distance) between p and q is defined as

8( ) max| | | |D p q x s y t

The pixels q for which 8( )D p q r form a square centered at (x y)

Digital Image Processing

Week 1

8

2 2 2 2 22 1 1 1 2

2 2 1 0 1 22 1 1 1 22 2 2 2 2

D

The pixels with D8 = 1 are the 8-neighbors of (x y)

D4 and D8 distances are independent of any paths that might exist between p and q

because these distances involve only the coordinates of the point

Digital Image Processing

Week 1

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

11 12 11 12

21 22 21 22

a a b ba a b b

Array product

11 12 11 12 11 11 12 12

21 22 21 22 21 21 22 21

a a b b a b a ba a b b a b a b

Matrix product

11 12 11 12 11 11 12 21 11 12 12 21

21 22 21 22 21 11 22 21 21 12 22 22

a a b b a b a b a b a ba a b b a b a b a b a b

We assume array operations unless stated otherwise

Digital Image Processing

Week 1

Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether it is

linear or nonlinear

( ) ( )H f x y g x y

H is said to be a linear operator if

images1 2 1 2

1 2

( ) ( ) ( ) ( )

H a f x y b f x y a H f x y b H f x y

a b f f

Example of nonlinear operator

the maximum value of the pixels of image max ( )H f f x y f

1 2

0 2 6 5 1 1

2 3 4 7f f a b

Digital Image Processing

Week 1

1 2

0 2 6 5 6 3max max 1 ( 1) max 2

2 3 4 7 2 4a f b f

0 2 6 51 max ( 1) max 3 ( 1)7 4

2 3 4 7

Arithmetic Operations in Image Processing

Let g(xy) denote a corrupted image formed by the addition of noise ( ) ( ) ( )g x y f x y x y

f(xy) ndash the noiseless image η(xy) the noise uncorrelated and has 0 average value

For a random variable z with mean m E[(z-m)2] is the variance (E( ∙ ) is the expected

value) The covariance of two random variables z1 and z2 is defined as E[(z1-m1) (z2-m2)]

The two random variables are uncorrelated when their covariance is 0

Digital Image Processing

Week 1

Objective reduce noise by adding a set of noisy images ( )ig x y (technique frequently

used in image enhancement)

1

1( ) ( )K

ii

g x y g x yK

If the noise satisfies the properties stated above we have

2 2( ) ( )

1( ) ( ) g x y x yE g x y f x yK

( ( ))E g x y is the expected value of g and and 2 2( ) ( )g x y x y are the variances of

and g respectively The standard deviation (square root of the variance) at any point in

the average image is

( ) ( )1

g x y x yK

Digital Image Processing

Week 1

As K increases the variability (as measured by the variance or the standard deviation) of

the pixel values at each location (x y) decreases Because ( ) ( )E g x y f x y this

means that ( )g x y approaches f(x y) as the number of noisy images used in the

averaging process increases

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis Figure 226(a) shows an 8-bit image in which corruption

was simulated by adding to it Gaussian noise with zero mean and a standard deviation of

64 intensity levels Figures 226(b)-(f) show the result of averaging 5 10 20 50 and 100

images respectively

Digital Image Processing

Week 1

Fig 3 Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner) Results of averaging 5 10 20 50

100 noisy images

a b c d e f

Digital Image Processing

Week 1

A frequent application of image subtraction is in the enhancement of differences between

images

(a) (b) (c)

Fig 4 (a) Infrared image of Washington DC area (b) Image obtained from (a) by setting to zero the least

significant bit of each pixel (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a) The two images seem almost the same Figure 4(c) is the difference between

images (a) and (b) Black (0) values in Figure (c) indicate locations where there is no

difference between images (a) and (b)

Digital Image Processing

Week 1

Mask mode radiography ( ) ( ) ( )g x y f x y h x y

h(x y) the mask is an X-ray image of a region of a patientrsquos body captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source The procedure consists of injecting an X-ray contrast medium into the patientrsquos

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed

Digital Image Processing

Week 1

a b c d Fig 5 ndash Angiography ndash subtraction example (a) ndash mask image (b) ndash live image (c) ndash difference between (a) and (b) (d) - image (c) enhanced

Digital Image Processing

Week 1

An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form ( ) ( ) ( )g x y f x y h x y

f(x y) ndash the ldquoperfect imagerdquo h(x y) ndash the shading function

When the shading function is known

( )( )( )

g x yf x yh x y

h(x y) is unknown but we have access to the imaging system we can obtain an

approximation to the shading function by imaging a target of constant intensity When the

sensor is not available often the shading pattern can be estimated from the image

Digital Image Processing

Week 1

(a) (b) (c)

Fig 6 Shading correction (a) ndash Shaded image of a tungsten filament magnified 130 times (b) - shading pattern (c) corrected image

Digital Image Processing

Week 1

Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1rsquos (white) in the ROI and 0rsquos elsewhere There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

(a) (b) (c)

Fig 7 (a) ndash digital dental X-ray image (b) - ROI mask for teeth with fillings (c) product of (a) and (b)

Digital Image Processing

Week 1

In practice most images are displayed using 8 bits the image values are in the range [0255] TIFF JPEG images ndash conversion to this range is automatic The conversion depends on the system used Difference of two images can produce image with values in the range [-255255] Addition of two images ndash range [0510] Many software packages simply set the negative values to 0 and set to 255 all values greater than 255 A more appropriate procedure compute

min( )mf f f

0 ( 255)max( )

ms

m

ff K K K

f

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Digital Image Processing

Week 1

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

( )

1( ) ( )xyr c S

g x y f r cm n

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation ( ) [( )]x y T v w

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image

Digital Image Processing

Week 1

[( )] ( )2 2v wT v w shrinks the original image half its size in both spatial directions

Affine transform

11 1211 21 31

21 2212 22 33

31 32

0[ 1] [ 1] [ 1] 0

1

t tx t v t w t

x y v w T v w t ty t v t w t

t t

(AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels of an image to new locations. To complete the process, we have to assign intensity values to those locations. This task is done using intensity interpolation (nearest neighbor, bilinear, or bicubic interpolation).

In practice, we can use equation (AT) in two basic ways:

forward mapping: scan the pixels of the input image at (v, w) and compute the new spatial location (x, y) of the corresponding pixel in the new image using (AT) directly.

Problems:

- intensity assignment when 2 or more pixels in the original image are transformed to the same location in the output image;
- some output locations have no correspondent in the original image (no intensity assignment).


inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image as (v, w) = T^{-1}[(x, y)]. Then interpolate among the nearest input pixels to determine the intensity of the output pixel value.

Inverse mappings are more efficient to implement than forward mappings and are used in numerous commercial implementations of spatial transformations (MATLAB, for example). A nearest-neighbor version is sketched below.
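A minimal inverse-mapping warp with nearest-neighbor interpolation; this is a sketch under the row-vector convention above, not an excerpt from any library:

```python
import numpy as np

def warp_inverse(f, T, out_shape):
    """Apply [x y 1] = [v w 1] @ T by inverse mapping with
    nearest-neighbor interpolation."""
    Tinv = np.linalg.inv(T)
    g = np.zeros(out_shape, dtype=f.dtype)
    for x in range(out_shape[0]):
        for y in range(out_shape[1]):
            v, w, _ = np.array([x, y, 1.0]) @ Tinv   # (v, w) = T^{-1}[(x, y)]
            vi, wi = int(round(v)), int(round(w))
            if 0 <= vi < f.shape[0] and 0 <= wi < f.shape[1]:
                g[x, y] = f[vi, wi]                  # nearest input pixel
    return g
```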


Image registration – align two or more images of the same scene.

In image registration we have available the input and output images, but the specific transformation that produced the output image from the input is generally unknown. The problem is to estimate the transformation function and then use it to register the two images:

- it may be of interest to align (register) two or more images taken at approximately the same time but using different imaging systems (e.g., an MRI scanner and a PET scanner);

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

The problem is solved using tie points (also called control points): corresponding points whose locations are known precisely in the input and reference images.


How to select tie points:

- by selecting them interactively;
- by using algorithms that try to detect these points automatically;
- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors; these objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points.

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points, both on the input image and on the reference image. A simple model, based on a bilinear approximation, is given by

x = c1·v + c2·w + c3·v·w + c4
y = c5·v + c6·w + c7·v·w + c8

where (v, w) and (x, y) are the coordinates of corresponding tie points (stacking the 4 pairs gives an 8×8 linear system for the ci); a small sketch follows.
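A sketch of estimating the eight coefficients with NumPy; the tie-point coordinates below are invented for illustration. The 8×8 system decouples into two 4×4 systems, one for the coefficients of x and one for those of y:

```python
import numpy as np

# 4 tie points in the input image (v, w) and reference image (x, y);
# the coordinates are invented for illustration.
vw = np.array([[10, 10], [10, 100], [100, 10], [100, 100]], float)
xy = np.array([[12, 14], [11, 106], [104, 13], [102, 108]], float)

# One row per tie point: [v, w, v*w, 1]
A = np.column_stack([vw[:, 0], vw[:, 1], vw[:, 0] * vw[:, 1], np.ones(4)])

cx = np.linalg.solve(A, xy[:, 0])   # c1..c4:  x = c1 v + c2 w + c3 vw + c4
cy = np.linalg.solve(A, xy[:, 1])   # c5..c8:  y = c5 v + c6 w + c7 vw + c8
```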


When 4 tie points are insufficient to obtain a satisfactory registration, an approach used frequently is to select a larger number of tie points and, using this new set, subdivide the image into rectangular subregions marked by groups of 4 tie points. On each subregion marked by 4 tie points we apply the transformation model described above. The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometric distortion.

(a) – reference image; (b) – geometrically distorted image; (c) – registered image; (d) – difference between (a) and (c)


Probabilistic Methods

z_i = the values of all possible intensities in an M × N digital image, i = 0, 1, …, L-1

p(z_k) = the probability that the intensity level z_k occurs in the given image:

p(z_k) = n_k / (M·N)

n_k = the number of times that intensity z_k occurs in the image (MN is the total number of pixels in the image)

Σ_{k=0}^{L-1} p(z_k) = 1

The mean (average) intensity of an image is given by

m = Σ_{k=0}^{L-1} z_k · p(z_k)


The variance of the intensities is

σ² = Σ_{k=0}^{L-1} (z_k - m)² · p(z_k)

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

μ_n(z) = Σ_{k=0}^{L-1} (z_k - m)^n · p(z_k)

(μ_0(z) = 1, μ_1(z) = 0, μ_2(z) = σ²)

μ_3(z) > 0: the intensities are biased toward values higher than the mean
μ_3(z) < 0: the intensities are biased toward values lower than the mean
μ_3(z) = 0: the intensities are distributed approximately equally on both sides of the mean

Fig. 1 (a) low contrast; (b) medium contrast; (c) high contrast

Figure 1(a) – standard deviation 14.3 (variance = 204.5)
Figure 1(b) – standard deviation 31.6 (variance = 998.6)
Figure 1(c) – standard deviation 49.2 (variance = 2420.6)
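A sketch of computing these statistics from the normalized histogram with NumPy (the function name is illustrative):

```python
import numpy as np

def intensity_stats(img, L=256):
    """Mean, variance and third central moment of an integer image
    with values in [0, L-1], computed from p(z_k) = n_k / MN."""
    z = np.arange(L)
    p = np.bincount(img.ravel(), minlength=L) / img.size
    m = np.sum(z * p)                  # mean
    var = np.sum((z - m) ** 2 * p)     # variance (mu_2)
    mu3 = np.sum((z - m) ** 3 * p)     # sign indicates the intensity bias
    return m, var, mu3
```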


Intensity Transformations and Spatial Filtering

g(x, y) = T[f(x, y)]

f(x, y) – input image; g(x, y) – output image; T – an operator on f, defined over a neighborhood of (x, y)

- the neighborhood of the point (x, y), S_xy, usually is rectangular, centered on (x, y), and much smaller in size than the image


- spatial filtering: the operator T (the neighborhood together with the operation applied on it) is called a spatial filter (spatial mask, kernel, template, or window)

When S_xy reduces to the single point (x, y), T becomes an intensity (gray-level or mapping) transformation function

s = T(r)

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2, left: T produces an output image of higher contrast than the original by darkening the intensity levels below k and brightening the levels above k – this technique is called contrast stretching.

Fig. 2 Intensity transformation functions: left – contrast stretching; right – thresholding function


Figure 2, right: T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L-1] is obtained using the function

s = T(r) = L - 1 - r

- the equivalent of a photographic negative
- a technique suited for enhancing white or gray detail embedded in dark regions of an image

[Figure: an original image (left) and its negative (right)]

Log Transformations

s = T(r) = c · log(1 + r),   c a positive constant, r ≥ 0

[Figure: plots of some basic intensity transformation functions]


This transformation maps a narrow range of low intensity values in the input into a wider range. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values; the opposite is true for the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a) – intensity values in the range 0 to 1.5 × 10^6
Figure 4(b) – log transformation of Figure 4(a) with c = 1 – range 0 to 6.2

Fig. 4 (a) – Fourier spectrum; (b) – log transformation applied to (a), c = 1


Power-Law (Gamma) Transformations

s = T(r) = c · r^γ,   c and γ positive constants (sometimes written s = c · (r + ε)^γ to account for an offset)

[Figure: plots of the gamma transformation for different values of γ (c = 1)]


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. The curves with γ > 1 have the opposite effect of those generated with γ < 1.

c = γ = 1 – the identity transformation

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
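A sketch of the three point transforms just described, with function names and input conventions as my assumptions:

```python
import numpy as np

def negative(r, L=256):
    return (L - 1) - r             # s = L - 1 - r,  r in [0, L-1]

def log_transform(r, c=1.0):
    return c * np.log1p(r)         # s = c log(1 + r),  r >= 0

def gamma_transform(r, c=1.0, gamma=0.5):
    return c * np.power(r, gamma)  # s = c r^gamma,  r scaled to [0, 1]

# Gamma correction of a device with a power-law response of 2.5:
# corrected = gamma_transform(r, gamma=1 / 2.5)
```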


(a) – aerial image; (b)–(d) – results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively

Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so that it spans the full intensity range of the recording tool or display device

[Fig. 5 (a)–(d)]

T(r) =
   (s1 / r1) · r,                                  r ∈ [0, r1]
   s1 + ((s2 - s1) / (r2 - r1)) · (r - r1),        r ∈ [r1, r2]
   s2 + ((L - 1 - s2) / (L - 1 - r2)) · (r - r2),  r ∈ [r2, L - 1]


r1 = s1 and r2 = s2: identity transformation (no change)
r1 = r2, s1 = 0 and s2 = L - 1: thresholding function

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c) – contrast stretching, obtained by setting r1 = r_min, s1 = 0, r2 = r_max, s2 = L - 1, where r_min and r_max denote the minimum and maximum gray levels in the image, respectively. Thus, the transformation function stretched the levels linearly from their original range to the full range [0, L-1].

Figure 5(d) – the thresholding function was used, with r1 = r2 = m, s1 = 0, s2 = L - 1, where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times. A code sketch of the transformation follows.
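A NumPy sketch of the piecewise-linear stretching above (assumes 0 < r1 < r2 < L - 1; names are illustrative):

```python
import numpy as np

def contrast_stretch(r, r1, s1, r2, s2, L=256):
    """Piecewise-linear stretching through (r1, s1) and (r2, s2).
    Assumes 0 < r1 < r2 < L - 1; the degenerate thresholding case
    r1 == r2 must be handled separately."""
    r = r.astype(float)
    out = np.empty_like(r)
    lo, hi = r < r1, r > r2
    mid = ~lo & ~hi
    out[lo] = (s1 / r1) * r[lo]
    out[mid] = s1 + (s2 - s1) / (r2 - r1) * (r[mid] - r1)
    out[hi] = s2 + (L - 1 - s2) / (L - 1 - r2) * (r[hi] - r2)
    return np.clip(out, 0, L - 1).astype(np.uint8)

# Example with hand-picked parameters:
# g = contrast_stretch(f, r1=50, s1=10, r2=180, s2=245)
```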


Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches to intensity-level slicing (the two transformation functions are summarized before Fig. 6 below):

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities;
2. brighten (or darken) the desired range of intensities, but leave unchanged all other intensities in the image.


Figure 6 (left) – aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the scale of intensities. This type of enhancement produces a binary image, which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example).

In Figure 6 (right) the second technique was used: a band of intensities in the mid-gray range around the mean intensity was set to black; the other intensities remain unchanged.

[Transformation functions: the first highlights intensity range [A, B] and reduces all other intensities to a lower level; the second highlights range [A, B] and preserves all other intensities]

Fig. 6 – Aortic angiogram and intensity-sliced versions
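Both slicing approaches in a short NumPy sketch (names and the highlight value are illustrative):

```python
import numpy as np

def slice_binary(f, A, B, L=256):
    """Approach 1: white inside [A, B], black elsewhere (binary output)."""
    return np.where((f >= A) & (f <= B), L - 1, 0).astype(np.uint8)

def slice_preserve(f, A, B, value=255):
    """Approach 2: set [A, B] to a chosen value, keep everything else."""
    g = f.copy()
    g[(f >= A) & (f <= B)] = value
    return g
```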


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2.

This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit; plane 8 – the highest-order bit).


The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression
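A sketch of extracting bit planes with NumPy; the comment notes the equivalence with the thresholding function described above:

```python
import numpy as np

def bit_plane(f, k):
    """Bit plane k of an 8-bit image (k = 1: lowest, k = 8: highest)."""
    return (f >> (k - 1)) & 1

# Plane 8 equals thresholding: [0, 127] -> 0 and [128, 255] -> 1
f = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
assert np.array_equal(bit_plane(f, 8), (f >= 128).astype(f.dtype))
```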

Fig. 1 Reducing spatial resolution: 1250 dpi (upper left), 300 dpi (upper right), 150 dpi (lower left), 72 dpi (lower right)

Reducing the number of gray levels: 256, 128, 64, 32

Reducing the number of gray levels: 16, 8, 4, 2

Image Interpolation – used in zooming, shrinking, rotating, and geometric corrections

Shrinking and zooming – image resizing – are image resampling methods.

Interpolation is the process of using known data to estimate values at unknown locations.

Suppose we have an image of size 500 × 500 pixels that has to be enlarged 1.5 times, to 750 × 750 pixels. One way to do this is to create an imaginary 750 × 750 grid with the same spacing as the original and then shrink it so that it fits exactly over the original image. The pixel spacing in the 750 × 750 grid will be less than in the original image.

Problem: the assignment of intensity levels in the new 750 × 750 grid.

Nearest neighbor interpolation: assign to every point in the new grid (750 × 750) the intensity of the closest pixel (nearest neighbor) from the old/original grid (500 × 500).

This technique has the tendency to produce undesirable effects, like severe distortion of straight edges.


Bilinear interpolation – assign to the new (x, y) location the intensity

v(x, y) = a·x + b·y + c·x·y + d

where the four coefficients are determined from the 4 equations in 4 unknowns that can be written using the 4 nearest neighbors of point (x, y).

Bilinear interpolation gives much better results than nearest neighbor interpolation with a

modest increase in computational effort
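A closed-form sketch of this interpolation at one fractional location; the weights below are the solution of the 4-equation system on a unit grid cell (names are illustrative):

```python
import numpy as np

def bilinear(f, x, y):
    """v(x, y) = a x + b y + c x y + d fitted to the 4 nearest neighbors."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, f.shape[0] - 1)
    y1 = min(y0 + 1, f.shape[1] - 1)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * f[x0, y0] + dx * (1 - dy) * f[x1, y0]
            + (1 - dx) * dy * f[x0, y1] + dx * dy * f[x1, y1])
```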

Bicubic interpolation – assign to the new (x, y) location an intensity that involves the 16 nearest neighbors of the point:

v(x, y) = Σ_{i=0}^{3} Σ_{j=0}^{3} c_ij · x^i · y^j

The 16 coefficients c_ij are obtained by solving a 16×16 linear system written using the intensity levels of the 16 nearest neighbors of (x, y).


Generally, bicubic interpolation does a better job of preserving fine detail than the bilinear technique. Bicubic interpolation is the standard used in commercial image-editing programs such as Adobe Photoshop and Corel Photo-Paint.

Figure 2(a) is the same as Fig. 1(d), which was obtained by reducing the resolution of the 1250 dpi image in Fig. 1(a) to 72 dpi (the size shrank from 3692 × 2812 to 213 × 162 pixels) and then zooming the reduced image back to its original size. To generate Fig. 1(d), nearest neighbor interpolation was used (both for shrinking and zooming).

Figures 2(b) and (c) were generated using the same steps but with bilinear and bicubic interpolation, respectively. Figures 2(d)–(f) were obtained by reducing the resolution from 1250 dpi to 150 dpi (instead of 72 dpi).


Fig. 2 – Interpolation examples for zooming and shrinking (nearest neighbor, bilinear, bicubic)


Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors:

(x+1, y), (x-1, y), (x, y+1), (x, y-1)

This set of pixels, called the 4-neighbors of p, is denoted by N4(p).

The 4 diagonal neighbors of p have coordinates

(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)

and are denoted ND(p).

The horizontal, vertical, and diagonal neighbors together are called the 8-neighbors of p, denoted N8(p).

If (x, y) is on the border of the image, some of the neighbor locations in ND(p) and N8(p) fall outside the image.


Adjacency, Connectivity, Regions, Boundaries

Denote by V the set of intensity levels used to define adjacency:

- in a binary image, V ⊆ {0, 1} (V = {0} or V = {1});
- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, 1, …, 255}.

We consider 3 types of adjacency:

(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if q ∈ N4(p);
(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if q ∈ N8(p);
(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if

    q ∈ N4(p), or
    q ∈ ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.

Mixed adjacency is a modification of 8-adjacency. It is introduced to eliminate the ambiguities that often arise when 8-adjacency is used. Consider the example:

binary image (V = {1}):

0 1 1
0 1 0
0 0 1

(the same arrangement appears three times in the original figure: the pixel values, the 8-adjacency paths, and the m-adjacency path)

The three pixels at the top (first line) in the above example show multiple (ambiguous) 8-adjacency, as indicated by the dashed lines in the original figure. This ambiguity is removed by using m-adjacency.


A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

(x0, y0), (x1, y1), …, (xn, yn)

where (x0, y0) = (x, y), (xn, yn) = (s, t), and the pixels (xi, yi) and (xi-1, yi-1) are adjacent for i = 1, 2, …, n.

The length of the path is n. If (x0, y0) = (xn, yn), the path is closed.

Depending on the type of adjacency considered, the paths are 4-, 8-, or m-paths.

Let S denote a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting only of pixels from S.

S is a connected set if there is a path in S between any 2 pixels of S.

Let R be a subset of pixels in an image. R is a region of the image if R is a connected set.

Two regions R1 and R2 are said to be adjacent if R1 ∪ R2 forms a connected set. Regions that are not adjacent are said to be disjoint. When referring to regions, only 4- and 8-adjacency are considered.


Suppose that an image contains K disjoint regions R_k, k = 1, …, K, none of which touches the image border. Let R_u = ∪_{k=1}^{K} R_k, and let (R_u)^c denote its complement.

We call all the points in R_u the foreground of the image, and all the points in (R_u)^c the background of the image.

The boundary (border or contour) of a region R is the set of points that are adjacent to points in the complement of R, (R)^c; that is, the border of a region is the set of pixels in the region that have at least one background neighbor. This definition is referred to as the inner border, to distinguish it from the notion of outer border, which is the corresponding border in the background.


Distance measures

For pixels p, q, and z, with coordinates (x, y), (s, t), and (v, w), respectively, D is a distance function (or metric) if:

(a) D(p, q) ≥ 0, with D(p, q) = 0 iff p = q
(b) D(p, q) = D(q, p)
(c) D(p, z) ≤ D(p, q) + D(q, z)

The Euclidean distance between p and q is defined as

D_e(p, q) = [ (x - s)² + (y - t)² ]^{1/2}

The pixels q for which D_e(p, q) ≤ r are the points contained in a disk of radius r centered at (x, y).


The D4 distance (also called city-block distance) between p and q is defined as

D4(p, q) = |x - s| + |y - t|

The pixels q for which D4(p, q) ≤ r form a diamond centered at (x, y). For example, the pixels with D4 distance ≤ 2 from the center point:

        2
      2 1 2
    2 1 0 1 2
      2 1 2
        2

The pixels with D4 = 1 are the 4-neighbors of (x, y).

The D8 distance (called the chessboard distance) between p and q is defined as

D8(p, q) = max(|x - s|, |y - t|)

The pixels q for which D8(p, q) ≤ r form a square centered at (x, y). For example, the pixels with D8 distance ≤ 2 from the center point:

2 2 2 2 2
2 1 1 1 2
2 1 0 1 2
2 1 1 1 2
2 2 2 2 2

The pixels with D8 = 1 are the 8-neighbors of (x, y).

The D4 and D8 distances are independent of any paths that might exist between p and q, because these distances involve only the coordinates of the points.
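The three metrics in a few lines of Python (a sketch; points are (row, col) tuples):

```python
def d_e(p, q):                      # Euclidean
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def d4(p, q):                       # city-block
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):                       # chessboard
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

print(d_e((0, 0), (2, 2)), d4((0, 0), (2, 2)), d8((0, 0), (2, 2)))
# approximately 2.83, then 4, then 2
```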


Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis. Consider two 2×2 images

A = [a11 a12; a21 a22],   B = [b11 b12; b21 b22]

Array product:

A ∘ B = [a11·b11  a12·b12; a21·b21  a22·b22]

Matrix product:

A·B = [a11·b11 + a12·b21   a11·b12 + a12·b22; a21·b11 + a22·b21   a21·b12 + a22·b22]

We assume array operations unless stated otherwise.
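The distinction in NumPy, where `*` is the array (elementwise) product and `@` the matrix product:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A * B)   # array product:  [[ 5 12], [21 32]]
print(A @ B)   # matrix product: [[19 22], [43 50]]
```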


Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether the method is linear or nonlinear. Consider a general operator H with

H[f(x, y)] = g(x, y)

H is said to be a linear operator if

H[a·f1(x, y) + b·f2(x, y)] = a·H[f1(x, y)] + b·H[f2(x, y)]

for all scalars a, b and all images f1, f2.

Example of a nonlinear operator: the maximum value of the pixels of an image, H[f] = max f(x, y).

Take f1 = [0 2; 2 3], f2 = [6 5; 4 7], a = 1, b = -1. Then

max{ 1·[0 2; 2 3] + (-1)·[6 5; 4 7] } = max [-6 -3; -2 -4] = -2

but

1·max[0 2; 2 3] + (-1)·max[6 5; 4 7] = 3 + (-1)·7 = -4

The two results differ, so the max operator is nonlinear.

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise:

g(x, y) = f(x, y) + η(x, y)

where f(x, y) is the noiseless image and η(x, y) is the noise, assumed uncorrelated and with 0 average value.

For a random variable z with mean m, E[(z - m)²] is the variance (E(∙) is the expected value). The covariance of two random variables z1 and z2 is defined as E[(z1 - m1)(z2 - m2)]. The two random variables are uncorrelated when their covariance is 0.


Objective: reduce noise by averaging a set of noisy images g_i(x, y) (a technique frequently used in image enhancement):

ḡ(x, y) = (1/K) Σ_{i=1}^{K} g_i(x, y)

If the noise satisfies the properties stated above, we have

E{ḡ(x, y)} = f(x, y)   and   σ²_ḡ(x, y) = (1/K) · σ²_η(x, y)

where E{ḡ(x, y)} is the expected value of ḡ, and σ²_ḡ(x, y) and σ²_η(x, y) are the variances of ḡ and η, respectively. The standard deviation (square root of the variance) at any point in the average image is

σ_ḡ(x, y) = (1/√K) · σ_η(x, y)

As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because E{ḡ(x, y)} = f(x, y), this means that ḡ(x, y) approaches f(x, y) as the number of noisy images used in the averaging process increases.

An important application of image averaging is in the field of astronomy, where imaging under very low light levels frequently causes sensor noise to render single images virtually useless for analysis. Figure 3(a) shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels. Figures 3(b)–(f) show the results of averaging 5, 10, 20, 50, and 100 noisy images, respectively.
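A quick simulation of this 1/√K behavior (the constant image, noise level, and seed are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 100.0)        # noiseless image

for K in (1, 5, 10, 20, 50, 100):
    noisy = f + rng.normal(0, 64, size=(K, 64, 64))  # K corrupted copies
    g_bar = noisy.mean(axis=0)                       # average image
    print(K, round(g_bar.std(), 1))  # close to 64 / sqrt(K)
```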

Fig. 3 (a) Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise; (b)–(f) results of averaging 5, 10, 20, 50, and 100 noisy images


A frequent application of image subtraction is the enhancement of differences between images.

Fig. 4 (a) Infrared image of the Washington, D.C. area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images.

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between images (a) and (b). Black (0) values in Figure 4(c) indicate locations where there is no difference between images (a) and (b).


Mask mode radiography: g(x, y) = f(x, y) - h(x, y)

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images, called live images (denoted f(x, y)), of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium.

In g(x, y) the differences between h and f appear as enhanced detail.

Because the images are captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries in the area being observed.

Fig. 5 – Angiography subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c) enhanced


An important application of image multiplication (and division) is shading correction. Suppose that an imaging sensor produces images in the form

g(x, y) = f(x, y) · h(x, y)

where f(x, y) is the "perfect image" and h(x, y) is the shading function.

When the shading function is known,

f(x, y) = g(x, y) / h(x, y)

If h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.
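A sketch of the correction by division; the epsilon guard and the blur-based estimate of h are my assumptions, not from the text:

```python
import numpy as np

def correct_shading(g, h, eps=1e-6):
    """f = g / h for a known (or estimated) shading pattern h;
    eps guards against division by zero."""
    return g.astype(float) / (h.astype(float) + eps)

# If h is unknown and the sensor is unavailable, a common estimate is a
# heavily blurred version of g, since shading varies slowly across the image.
```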

Fig. 6 Shading correction: (a) shaded image of a tungsten filament, magnified approximately 130 times; (b) shading pattern; (c) corrected image


Another use of image multiplication is in masking, also called region-of-interest (ROI) operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of an ROI can be arbitrary, although usually it is rectangular. A minimal sketch follows the figure caption below.

Fig. 7 (a) – digital dental X-ray image; (b) – ROI mask for teeth with fillings; (c) – product of (a) and (b)
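A minimal NumPy sketch of ROI masking (the image and ROI coordinates are invented):

```python
import numpy as np

f = np.random.randint(0, 256, (256, 256), dtype=np.uint8)  # stand-in image

mask = np.zeros_like(f)       # 0 outside the ROI
mask[40:100, 60:140] = 1      # one rectangular ROI filled with 1's

roi = f * mask                # the product keeps the ROI, zeroes the rest
```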


The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 78: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Reducing the number of gray levels: 16, 8, 4, 2 (illustration).

Image Interpolation – used in zooming, shrinking, rotating, and geometric corrections.

Shrinking and zooming (image resizing) are image resampling methods.

Interpolation is the process of using known data to estimate values at unknown locations.

Suppose we have an image of size 500 × 500 pixels that has to be enlarged 1.5 times, to 750 × 750 pixels. One way to do this is to create an imaginary 750 × 750 grid with the same spacing as the original and then shrink it so that it fits exactly over the original image. The pixel spacing in the 750 × 750 grid will be less than in the original image.

Problem: assignment of intensity levels to the points of the new 750 × 750 grid.

Nearest-neighbor interpolation: assign to every point in the new grid (750 × 750) the intensity of the closest pixel (nearest neighbor) from the old/original grid (500 × 500). This technique has the tendency to produce undesirable effects, like severe distortion of straight edges.

Bilinear interpolation – assign to the new location (x, y) the intensity

    v(x, y) = a·x + b·y + c·x·y + d

where the four coefficients are determined from the 4 equations in 4 unknowns that can be written using the 4 nearest neighbors of the point (x, y).

Bilinear interpolation gives much better results than nearest-neighbor interpolation, with a modest increase in computational effort.

Bicubic interpolation – assign to the new location (x, y) an intensity that involves the 16 nearest neighbors of the point:

    v(x, y) = Σi=0..3 Σj=0..3 cij·x^i·y^j

The 16 coefficients cij are obtained by solving a 16 × 16 linear system written in terms of the intensity levels of the 16 nearest neighbors of (x, y).
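As an illustration, here is a minimal NumPy sketch of resizing by inverse mapping with bilinear interpolation; the function name and structure are ours, not part of the course material:

    import numpy as np

    def bilinear_resize(img, new_h, new_w):
        # Resize a grayscale image by mapping each output pixel back
        # onto the original grid and averaging its 4 nearest neighbors.
        h, w = img.shape
        out = np.zeros((new_h, new_w), dtype=np.float64)
        for y in range(new_h):
            for x in range(new_w):
                v = y * (h - 1) / max(new_h - 1, 1)
                u = x * (w - 1) / max(new_w - 1, 1)
                v0, u0 = int(v), int(u)
                v1, u1 = min(v0 + 1, h - 1), min(u0 + 1, w - 1)
                dv, du = v - v0, u - u0
                # v(x, y) = a*x + b*y + c*x*y + d, expressed as a weighted sum
                out[y, x] = ((1 - dv) * (1 - du) * img[v0, u0]
                             + (1 - dv) * du * img[v0, u1]
                             + dv * (1 - du) * img[v1, u0]
                             + dv * du * img[v1, u1])
        return out.astype(img.dtype)

Nearest-neighbor interpolation would instead round (v, u) to the closest integer pixel – cheaper, but prone to the edge distortion noted above.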

Generally, bicubic interpolation does a better job of preserving fine detail than the bilinear technique. Bicubic interpolation is the standard used in commercial image editing programs, such as Adobe Photoshop and Corel PhotoPaint.

Figure 2(a) is the same as Fig. 1(d), which was obtained by reducing the resolution of the 1250 dpi image in Fig. 1(a) to 72 dpi (the size shrank from 3692 × 2812 to 213 × 162 pixels) and then zooming the reduced image back to its original size, using nearest-neighbor interpolation both for shrinking and for zooming.

Figures 2(b) and (c) were generated using the same steps but with bilinear and bicubic interpolation, respectively. Figures 2(d), (e), and (f) were obtained by reducing the resolution from 1250 dpi to 150 dpi (instead of 72 dpi).

Fig. 2 – Interpolation examples for zooming and shrinking (nearest neighbor, bilinear, bicubic).

Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors:

    (x+1, y), (x-1, y), (x, y+1), (x, y-1)

This set of pixels, called the 4-neighbors of p, is denoted by N4(p).

The 4 diagonal neighbors of p have coordinates

    (x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)

and are denoted ND(p).

The horizontal, vertical, and diagonal neighbors together are called the 8-neighbors of p, denoted N8(p).

If (x, y) is on the border of the image, some of the neighbor locations in ND(p) and N8(p) fall outside the image.
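A small Python sketch of these neighborhoods, clipped to the image borders (the function names are ours):

    def neighbors_4(x, y, h, w):
        # 4-neighbors of (x, y), keeping only locations inside an h x w image
        candidates = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        return [(i, j) for (i, j) in candidates if 0 <= i < h and 0 <= j < w]

    def neighbors_8(x, y, h, w):
        # 8-neighbors: all pixels at chessboard distance 1 from (x, y)
        candidates = [(x + dx, y + dy)
                      for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                      if (dx, dy) != (0, 0)]
        return [(i, j) for (i, j) in candidates if 0 <= i < h and 0 <= j < w]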

Adjacency, Connectivity, Regions, Boundaries

Denote by V the set of intensity levels used to define adjacency:

- in a binary image, V is a subset of {0, 1} (e.g., V = {0} or V = {1});
- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, 1, …, 255}.

We consider 3 types of adjacency:

(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if q ∈ N4(p);
(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if q ∈ N8(p);
(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if
    q ∈ N4(p), or
    q ∈ ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.

Mixed adjacency is a modification of 8-adjacency. It is introduced to eliminate the ambiguities that often arise when 8-adjacency is used. Consider the example:

Binary image example, with V = {1}:

    0 1 1
    0 1 0
    0 0 1

(The same 3 × 3 arrangement is shown three times on the slide: the pixel values, the 8-adjacency paths, and the m-adjacency paths.)

The three pixels at the top (first line) of this example show multiple (ambiguous) 8-adjacency, as indicated by the dashed lines. This ambiguity is removed by using m-adjacency.

A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

    (x0, y0), (x1, y1), …, (xn, yn)

where (x0, y0) = (x, y), (xn, yn) = (s, t), and the pixels (xi, yi) and (xi-1, yi-1) are adjacent for i = 1, 2, …, n.

The length of the path is n. If (x0, y0) = (xn, yn), the path is closed.

Depending on the type of adjacency considered, the paths are 4-, 8-, or m-paths.

Let S denote a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting only of pixels from S.

S is a connected set if there is a path in S between any 2 pixels of S.

Let R be a subset of pixels in an image. R is a region of the image if R is a connected set.

Two regions R1 and R2 are said to be adjacent if R1 ∪ R2 forms a connected set. Regions that are not adjacent are said to be disjoint. When referring to regions, only 4- and 8-adjacency are considered.

Suppose that an image contains K disjoint regions Rk, k = 1, 2, …, K, none of which touches the image border. Let Ru denote the union of all K regions, and let (Ru)^c denote its complement.

We call all the points of Ru the foreground of the image, and the points of (Ru)^c the background of the image.

The boundary (border or contour) of a region R is the set of points that are adjacent to points in the complement of R, (R)^c; that is, the border of a region is the set of pixels in the region that have at least one background neighbor. This definition is referred to as the inner border, to distinguish it from the notion of outer border, which is the corresponding border in the background.

Distance measures

For pixels p, q, and z with coordinates (x, y), (s, t), and (v, w) respectively, D is a distance function (or metric) if:

(a) D(p, q) ≥ 0, with D(p, q) = 0 iff p = q;
(b) D(p, q) = D(q, p);
(c) D(p, z) ≤ D(p, q) + D(q, z).

The Euclidean distance between p and q is defined as

    De(p, q) = √((x - s)² + (y - t)²)

The pixels q for which De(p, q) ≤ r are the points contained in a disk of radius r centered at (x, y).

The D4 distance (also called city-block distance) between p and q is defined as

    D4(p, q) = |x - s| + |y - t|

The pixels q for which D4(p, q) ≤ r form a diamond centered at (x, y). For example, the pixels with D4 distance ≤ 2 from the center point form the following contours of constant distance:

            2
          2 1 2
        2 1 0 1 2
          2 1 2
            2

The pixels with D4 = 1 are the 4-neighbors of (x, y).

The D8 distance (also called chessboard distance) between p and q is defined as

    D8(p, q) = max(|x - s|, |y - t|)

The pixels q for which D8(p, q) ≤ r form a square centered at (x, y). For example, the pixels with D8 distance ≤ 2 from the center point form the following contours of constant distance:

        2 2 2 2 2
        2 1 1 1 2
        2 1 0 1 2
        2 1 1 1 2
        2 2 2 2 2

The pixels with D8 = 1 are the 8-neighbors of (x, y).

The D4 and D8 distances are independent of any paths that might exist between p and q, because these distances involve only the coordinates of the points.
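Both metrics are one-liners in Python; this small sketch (our naming) also checks the neighbor properties just stated:

    def d4(p, q):
        # city-block distance: |x - s| + |y - t|
        return abs(p[0] - q[0]) + abs(p[1] - q[1])

    def d8(p, q):
        # chessboard distance: max(|x - s|, |y - t|)
        return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

    # pixels at D4 == 1 are the 4-neighbors; pixels at D8 == 1 are the 8-neighbors
    assert d4((0, 0), (0, 1)) == 1
    assert d8((0, 0), (1, 1)) == 1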

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis. Consider two 2 × 2 images

    A = | a11 a12 |        B = | b11 b12 |
        | a21 a22 |            | b21 b22 |

The array product is the element-by-element product:

    | a11·b11  a12·b12 |
    | a21·b21  a22·b22 |

The matrix product is:

    | a11·b11 + a12·b21   a11·b12 + a12·b22 |
    | a21·b11 + a22·b21   a21·b12 + a22·b22 |

We assume array operations throughout, unless stated otherwise.
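In NumPy the distinction is simply the * operator versus the @ operator; a quick check:

    import numpy as np

    A = np.array([[1, 2], [3, 4]])
    B = np.array([[5, 6], [7, 8]])

    print(A * B)   # array (element-by-element) product: [[ 5 12] [21 32]]
    print(A @ B)   # matrix product:                     [[19 22] [43 50]]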

Linear versus Nonlinear Operations

One of the most important classifications of an image-processing method is whether it is linear or nonlinear. Consider an operator H that produces an output image g from an input image f:

    H[f(x, y)] = g(x, y)

H is said to be a linear operator if

    H[a·f1(x, y) + b·f2(x, y)] = a·H[f1(x, y)] + b·H[f2(x, y)]

for any scalars a, b and any images f1, f2.

Example of a nonlinear operator: the maximum value of the pixels of an image,

    H[f] = max{ f(x, y) }

Take

    f1 = | 0 2 |      f2 = | 6 5 |      a = 1,  b = -1
         | 2 3 |           | 4 7 |

Then

    max{ 1·f1 + (-1)·f2 } = max{ | 0-6  2-5 | } = max{ | -6 -3 | } = -2
                                 | 2-4  3-7 |          | -2 -4 |

while

    1·max{f1} + (-1)·max{f2} = 3 + (-1)·7 = -4 ≠ -2

so the max operator is nonlinear.

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise:

    g(x, y) = f(x, y) + η(x, y)

where f(x, y) is the noiseless image and η(x, y) is the noise, assumed uncorrelated and with zero average value.

For a random variable z with mean m, E[(z - m)²] is the variance (E(·) is the expected value). The covariance of two random variables z1 and z2 is defined as E[(z1 - m1)(z2 - m2)]. The two random variables are uncorrelated when their covariance is 0.

Objective: reduce noise by averaging a set of noisy images gi(x, y) (a technique frequently used in image enhancement):

    ḡ(x, y) = (1/K) · Σi=1..K gi(x, y)

If the noise satisfies the properties stated above, we have

    E{ḡ(x, y)} = f(x, y)    and    σ²_ḡ(x, y) = (1/K) · σ²_η(x, y)

where E{ḡ(x, y)} is the expected value of ḡ, and σ²_ḡ(x, y) and σ²_η(x, y) are the variances of ḡ and η, respectively. The standard deviation (square root of the variance) at any point of the average image is

    σ_ḡ(x, y) = (1/√K) · σ_η(x, y)

As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because E{ḡ(x, y)} = f(x, y), this means that ḡ(x, y) approaches f(x, y) as the number of noisy images used in the averaging process increases.

An important application of image averaging is in the field of astronomy, where imaging under very low light levels frequently causes sensor noise to render single images virtually useless for analysis. Figure 3 shows an 8-bit image in which corruption was simulated by adding Gaussian noise with zero mean and a standard deviation of 64 intensity levels, together with the results of averaging 5, 10, 20, 50, and 100 such noisy images.

Fig. 3 – Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (a, top left corner), and the results of averaging 5, 10, 20, 50, and 100 noisy images (b)–(f).
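A minimal simulation of this effect (the flat test image and all names here are ours, for illustration):

    import numpy as np

    rng = np.random.default_rng(0)

    def average_noisy_images(f, k, sigma=64.0):
        # average k copies of f corrupted by zero-mean Gaussian noise;
        # the residual noise standard deviation is about sigma / sqrt(k)
        acc = np.zeros(f.shape, dtype=np.float64)
        for _ in range(k):
            acc += f + rng.normal(0.0, sigma, size=f.shape)
        return acc / k

    f = np.full((256, 256), 128.0)             # flat synthetic test image
    print(average_noisy_images(f, 100).std())  # close to 64 / √100 = 6.4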

A frequent application of image subtraction is in the enhancement of differences between images.

Fig. 4 – (a) Infrared image of the Washington, D.C. area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images.

Figure 4(b) was obtained by setting to zero the least significant bit of every pixel in Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between images (a) and (b); black (0) values in it indicate locations where there is no difference between images (a) and (b).

Mask mode radiography:

    g(x, y) = f(x, y) - h(x, y)

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images, called live images (denoted f(x, y)), of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium.

In g(x, y) the differences between h and f appear as enhanced detail.

Because the images are captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries of the area being observed.

Fig. 5 – Angiography subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c), enhanced.

An important application of image multiplication (and division) is shading correction. Suppose that an imaging sensor produces images of the form

    g(x, y) = f(x, y) · h(x, y)

where f(x, y) is the "perfect image" and h(x, y) is the shading function.

When the shading function is known,

    f(x, y) = g(x, y) / h(x, y)

If h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.

Fig. 6 – Shading correction: (a) shaded image of a tungsten filament, magnified approximately 130 times; (b) shading pattern; (c) corrected image.
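A sketch of the division step in NumPy (the normalization choice, epsilon guard, and names are ours):

    import numpy as np

    def shading_correction(g, h, eps=1e-6):
        # recover f from g = f * h by element-wise (array) division;
        # h is the shading function (here normalized to peak 1), e.g.
        # estimated by imaging a target of constant intensity
        h_norm = h.astype(np.float64) / max(h.max(), eps)
        f = g.astype(np.float64) / np.maximum(h_norm, eps)
        return np.clip(f, 0, 255).astype(np.uint8)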

Another use of image multiplication is in masking, also called region-of-interest (ROI) operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of an ROI can be arbitrary, although it usually is rectangular.

Fig. 7 – (a) digital dental X-ray image; (b) ROI mask for the teeth with fillings; (c) product of (a) and (b).

In practice, most images are displayed using 8 bits, so image values are expected to lie in the range [0, 255]; for TIFF or JPEG images, conversion to this range is automatic, although the conversion depends on the system used. The difference of two images can produce an image with values in the range [-255, 255], and the addition of two images can produce values in the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure is to compute

    fm = f - min(f)

which creates an image whose minimum value is 0, and then

    fs = K · fm / max(fm)

whose values span the full range [0, K] (K = 255 for 8-bit images).
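A direct NumPy transcription of this rescaling procedure (our naming):

    import numpy as np

    def rescale_to_range(f, k=255):
        # f_m = f - min(f); f_s = k * f_m / max(f_m) lies in [0, k]
        fm = f.astype(np.float64) - f.min()
        if fm.max() > 0:
            fm = k * fm / fm.max()
        return fm.astype(np.uint8)

    diff = np.array([[-255, 0], [128, 255]])  # e.g. a difference image
    print(rescale_to_range(diff))             # values mapped into [0, 255]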

Spatial Operations

Spatial operations are performed directly on the pixels of a given image. There are three categories of spatial operations:

- single-pixel operations;
- neighborhood operations;
- geometric spatial transformations.

Single-pixel operations change the value of intensity of each individual pixel:

    s = T(z)

where z is the intensity of a pixel in the original image and s is the intensity of the corresponding pixel in the processed image.

Neighborhood operations

Let Sxy denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y) based on the values of the intensities of the points in Sxy. For example, if Sxy is a rectangular neighborhood of size m × n centered at (x, y), we can assign the new value of intensity by computing the average value of the pixels in Sxy:

    g(x, y) = (1/(m·n)) · Σ(r,c)∈Sxy f(r, c)

The net effect is to perform local blurring of the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
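A straightforward (unoptimized) NumPy sketch of this neighborhood average (names are ours):

    import numpy as np

    def mean_filter(f, m, n):
        # replace each pixel by the average of its m x n neighborhood Sxy;
        # edge padding keeps the output the same size as the input
        padded = np.pad(f.astype(np.float64),
                        ((m // 2, m // 2), (n // 2, n // 2)), mode="edge")
        out = np.empty(f.shape, dtype=np.float64)
        for x in range(f.shape[0]):
            for y in range(f.shape[1]):
                out[x, y] = padded[x:x + m, y:y + n].mean()
        return out.astype(f.dtype)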

Geometric spatial transformations and image registration

Geometric transformations modify the spatial relationship between pixels in an image. These transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules).

A geometric transformation consists of 2 basic operations:

1. a spatial transformation of coordinates;
2. an intensity interpolation that assigns intensity values to the spatially transformed pixels.

The coordinate system transformation:

    (x, y) = T[(v, w)]

where (v, w) are the pixel coordinates in the original image and (x, y) are the pixel coordinates in the transformed image.

For example, T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform:

    [x  y  1] = [v  w  1] · T = [v  w  1] · | t11  t12  0 |
                                            | t21  t22  0 |        (AT)
                                            | t31  t32  1 |

that is, x = t11·v + t21·w + t31 and y = t12·v + t22·w + t32.

This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3 × 3 matrix equal to the matrix product of the scaling, rotation, and translation matrices from Table 1.

Table 1 – Affine transformations (the scaling, rotation, translation, and shearing matrices).

The preceding transformations relocate pixels of an image to new locations. To complete the process, we have to assign intensity values to those locations, by using intensity interpolation (nearest neighbor, bilinear, or bicubic interpolation).

In practice, we can use equation (AT) in two basic ways.

Forward mapping: scan the pixels of the input image and, for each input location (v, w), compute directly with (AT) the spatial location (x, y) of the corresponding pixel in the new image. Problems:

- intensity assignment when 2 or more pixels of the original image are transformed to the same location in the output image;
- some output locations may have no correspondent in the original image (no intensity assignment).

Inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image,

    (v, w) = T⁻¹[(x, y)]

then interpolate among the nearest input pixels to determine the intensity of the output pixel value.

Inverse mappings are more efficient to implement than forward mappings, and they are used in numerous commercial implementations of spatial transformations (MATLAB, for example).
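A compact sketch of inverse mapping with nearest-neighbor sampling (our naming; bilinear sampling would replace the rounding step):

    import numpy as np

    def affine_warp(img, T):
        # for each output pixel (x, y), find (v, w) = [x y 1] @ T^-1 in the
        # input image and copy the nearest input pixel, if it exists
        T_inv = np.linalg.inv(T)
        h, w = img.shape
        out = np.zeros_like(img)
        for x in range(h):
            for y in range(w):
                v, u, _ = np.array([x, y, 1.0]) @ T_inv
                vi, ui = int(round(v)), int(round(u))
                if 0 <= vi < h and 0 <= ui < w:
                    out[x, y] = img[vi, ui]
        return out

    T_shrink = np.diag([0.5, 0.5, 1.0])  # the half-size example given above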


Image registration – align two or more images of the same scene.

In image registration we have available the input and output images, but the specific transformation that produced the output image from the input is generally unknown. The problem is to estimate the transformation function and then use it to register the two images. For example:

- it may be of interest to align (register) two or more images taken at approximately the same time but with different imaging systems (say, an MRI scanner and a PET scanner);
- or to align images of a given location taken by the same instrument at different moments in time (satellite images).

One approach solves the problem using tie points (also called control points), which are corresponding points whose locations are known precisely in the input and reference images.

How to select tie points:

- by selecting them interactively;
- by using algorithms that try to detect these points automatically;
- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors; these objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points.

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points both on the input image and on the reference image. A simple model based on a bilinear approximation is given by

    x = c1·v + c2·w + c3·v·w + c4
    y = c5·v + c6·w + c7·v·w + c8

where (v, w) and (x, y) are the coordinates of corresponding tie points (the 4 pairs of tie points yield an 8 × 8 linear system for the coefficients c1, …, c8).
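The 8 × 8 system decouples into two 4 × 4 systems, one for c1–c4 and one for c5–c8; a NumPy sketch (our naming):

    import numpy as np

    def fit_bilinear_model(src, dst):
        # src: 4 x 2 array of (v, w) tie points in the input image
        # dst: 4 x 2 array of (x, y) tie points in the reference image
        v, w = src[:, 0], src[:, 1]
        A = np.column_stack([v, w, v * w, np.ones(4)])
        cx = np.linalg.solve(A, dst[:, 0])   # c1, c2, c3, c4
        cy = np.linalg.solve(A, dst[:, 1])   # c5, c6, c7, c8
        return cx, cy

    def apply_bilinear_model(cx, cy, v, w):
        # map an input-image point (v, w) into the reference frame
        basis = np.array([v, w, v * w, 1.0])
        return float(basis @ cx), float(basis @ cy)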

When 4 tie points are insufficient to obtain a satisfactory registration, an approach used frequently is to select a larger number of tie points and, using this larger set, subdivide the image into rectangular subregions, each marked by a group of 4 tie points; the transformation model described above is then applied on each subregion separately.

The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometric distortion.

Registration example: (a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c).

Probabilistic Methods

Let zi, i = 0, 1, …, L-1, denote the values of all possible intensities in an M × N digital image, and let p(zk) be the probability that intensity level zk occurs in the given image:

    p(zk) = nk / (M·N)

where nk is the number of times that intensity zk occurs in the image (M·N is the total number of pixels in the image). Note that

    Σk=0..L-1 p(zk) = 1

The mean (average) intensity of an image is given by

    m = Σk=0..L-1 zk·p(zk)

The variance of the intensities is

    σ² = Σk=0..L-1 (zk - m)²·p(zk)

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

    μn(z) = Σk=0..L-1 (zk - m)^n·p(zk)

(note that μ0(z) = 1, μ1(z) = 0, and μ2(z) = σ²). The third moment indicates the symmetry of the intensity distribution about the mean:

    μ3(z) > 0 – the intensities are biased to values higher than the mean;
    μ3(z) < 0 – the intensities are biased to values lower than the mean;
    μ3(z) = 0 – the intensities are distributed approximately equally on both sides of the mean.

Fig. 1 – (a) Low contrast; (b) medium contrast; (c) high contrast.

Figure 1(a) – standard deviation 14.3 (variance = 204.5)
Figure 1(b) – standard deviation 31.6 (variance = 998.6)
Figure 1(c) – standard deviation 49.2 (variance = 2420.6)
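These statistics follow directly from the histogram; a short NumPy sketch (our naming):

    import numpy as np

    def histogram_statistics(img, levels=256):
        # p(z_k) = n_k / (M*N); returns the mean, variance and third moment
        counts = np.bincount(img.ravel(), minlength=levels)
        p = counts / img.size
        z = np.arange(levels, dtype=np.float64)
        mean = np.sum(z * p)
        var = np.sum((z - mean) ** 2 * p)   # mu_2 = sigma^2, a contrast measure
        mu3 = np.sum((z - mean) ** 3 * p)   # sign shows the intensity bias
        return mean, var, mu3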

Intensity Transformations and Spatial Filtering

    g(x, y) = T[f(x, y)]

where f(x, y) is the input image, g(x, y) is the output image, and T is an operator on f defined over a neighborhood of (x, y).

The neighborhood Sxy of the point (x, y) usually is rectangular, centered on (x, y), and much smaller in size than the image.

Spatial filtering: the operator T (the neighborhood together with the operation applied on it) is called a spatial filter (also: spatial mask, kernel, template, or window).

When Sxy reduces to the single point (x, y), T becomes an intensity (gray-level, or mapping) transformation function:

    s = T(r)

where s and r denote, respectively, the intensities of g and f at (x, y).

Figure 2, left: T produces an output image of higher contrast than the original by darkening the intensity levels below k and brightening the levels above k; this technique is called contrast stretching.

Fig. 2 – Intensity transformation functions: left, contrast stretching; right, a thresholding function.

Figure 2, right: T produces a binary output image; a mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L-1] is obtained using the function

    s = T(r) = L - 1 - r

- the equivalent of a photographic negative;
- a technique suited for enhancing white or gray detail embedded in dark regions of an image.

(Figure: an original image and its negative.)

Log Transformations

    s = T(r) = c·log(1 + r),    c a constant, r ≥ 0

(Figure: plots of some basic intensity transformation functions.)

This transformation maps a narrow range of low intensity values in the input into a wider range of output values. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values; the opposite is true of the inverse log transformation. The log functions also compress the dynamic range of images with large variations in pixel values.

Figure 4(a) – intensity values in the range 0 to 1.5 × 10⁶.
Figure 4(b) – log transformation of Figure 4(a) with c = 1; range 0 to 6.2.

Fig. 4 – (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.

Power-Law (Gamma) Transformations

    s = T(r) = c·r^γ,    c and γ positive constants

(sometimes written s = c·(r + ε)^γ, with an offset ε accounting for a nonzero output when the input is zero).

(Figure: plots of the gamma transformation for different values of γ, with c = 1.)

Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. The curves with γ > 1 have the opposite effect of those generated with γ < 1; c = γ = 1 gives the identity transformation.

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.

(a) aerial image; (b)–(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively.
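Sketches of these three point transformations in NumPy (our naming; the gamma version normalizes intensities to [0, 1] first):

    import numpy as np

    def negative(r, L=256):
        # s = (L - 1) - r
        return (L - 1 - r.astype(np.int32)).astype(np.uint8)

    def log_transform(r, c=1.0):
        # s = c * log(1 + r): compresses large dynamic ranges (Fourier spectra)
        return c * np.log1p(r.astype(np.float64))

    def gamma_transform(r, gamma, c=1.0, L=256):
        # s = c * r^gamma; gamma < 1 brightens dark regions, gamma > 1 darkens
        s = c * (r.astype(np.float64) / (L - 1)) ** gamma
        return np.clip(s * (L - 1), 0, L - 1).astype(np.uint8)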

Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so that it spans the full intensity range of the recording tool or display device.

Fig. 5 (a)–(d) – contrast-stretching example (described below).

The transformation function is determined by two points, (r1, s1) and (r2, s2):

    T(r) = (s1/r1)·r                                   for r in [0, r1)
    T(r) = ((s2 - s1)/(r2 - r1))·(r - r1) + s1         for r in [r1, r2]
    T(r) = ((L - 1 - s2)/(L - 1 - r2))·(r - r2) + s2   for r in (r2, L - 1]

If r1 = s1 and r2 = s2, the transformation is the identity (no change). If r1 = r2 with s1 = 0 and s2 = L - 1, the transformation is a thresholding function.

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c) shows the contrast stretching obtained by setting the parameters (r1, s1) = (rmin, 0) and (r2, s2) = (rmax, L - 1), where rmin and rmax denote the minimum and maximum gray levels in the image, respectively. The transformation function thus stretched the levels linearly from their original range to the full range [0, L - 1].

Figure 5(d) shows the result of the thresholding function, used with (r1, s1) = (m, 0) and (r2, s2) = (m, L - 1), where m is the mean gray level of the image.

The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.
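Since the mapping is piecewise linear through (0, 0), (r1, s1), (r2, s2), and (L-1, L-1), np.interp implements it directly (a sketch, with our naming):

    import numpy as np

    def contrast_stretch(r, r1, s1, r2, s2, L=256):
        # piecewise-linear map through (0,0), (r1,s1), (r2,s2), (L-1,L-1)
        s = np.interp(r.astype(np.float64),
                      [0, r1, r2, L - 1], [0, s1, s2, L - 1])
        return s.astype(np.uint8)

    # full-range stretching, as in Fig. 5(c):
    #   contrast_stretch(img, img.min(), 0, img.max(), 255)
    # thresholding about the mean m, as in Fig. 5(d):
    #   contrast_stretch(img, m, 0, m, 255)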

Intensity-level slicing

- highlighting a specific range of intensities in an image.

There are two basic approaches to intensity-level slicing:

1. display in one value (white, for example) all the intensities in the range of interest, and in another value (say, black) all other intensities;
2. brighten (or darken) the desired range of intensities, but leave all other intensities in the image unchanged.

Figure 6 (left) – aortic angiogram near the kidney. The purpose of intensity slicing here is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique to a band near the top of the intensity scale; this type of enhancement produces a binary image.

The two corresponding transformation functions are:
- one that highlights an intensity range [A, B] and reduces all other intensities to a lower level;
- one that highlights the range [A, B] but preserves all other intensities.

The binary image is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example).

In Figure 6 (right), the second technique was used: a band of intensities around the mean intensity of the image was set to black, while the other intensities remain unchanged.

Fig. 6 – Aortic angiogram and intensity-sliced versions.

Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2. Bit-plane slicing highlights the contribution made to the whole image appearance by each of these bits: an 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 containing the lowest-order bit, plane 8 the highest-order bit).


The binary image for the 8th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all intensities between 0 and 127 to 0 and all intensities between 128 and 255 to 1. The bit-slicing technique is useful for analyzing the relative importance of each bit in the image, which helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
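Extracting a bit plane is a shift and a mask; a short sketch (our naming) that also checks the thresholding equivalence just described:

    import numpy as np

    def bit_plane(img, k):
        # bit plane k of an 8-bit image (k = 1 lowest-order, k = 8 highest)
        return (img >> (k - 1)) & 1

    img = np.arange(256, dtype=np.uint8).reshape(16, 16)
    # the 8th bit plane coincides with thresholding at 128:
    assert np.array_equal(bit_plane(img, 8), (img >= 128).astype(img.dtype))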

  • DIP 1 2017
  • DIP 02 (2017)
Page 79: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image Processing

Week 1

Image Interpolation - used in zooming shrinking rotating and geometric corrections

Shrinking zooming ndash image resizing ndash image resampling methods

Interpolation is the process of using known data to estimate values at unknown locations

Suppose we have an image of size 500 times 500 pixels that has to be enlarged 15 times to

750times750 pixels One way to do this is to create an imaginary 750 times 750 grid with the

same spacing as the original and then shrink it so that it fits exactly over the original

image The pixel spacing in the 750 times 750 grid will be less than in the original image

Problem assignment of intensity-level in the new 750 times 750 grid

Nearest neighbor interpolation assign for every point in the new grid (750 times 750) the

intensity of the closest pixel (nearest neighbor) from the oldoriginal grid (500 times 500)

This technique has the tendency to produce undesirable effects like severe distortion of

straight edges

Digital Image Processing

Week 1

Bilinear interpolation ndash assign for the new (x y) location the following intensity ( )v x y a x b y c x y d

where the four coefficients are determined from the 4 equations in 4 unknowns that can

be written using the 4 nearest neighbors of point (x y)

Bilinear interpolation gives much better results than nearest neighbor interpolation with a

modest increase in computational effort

Bicubic interpolation ndash assign for the new (x y) location an intensity that involves the 16

nearest neighbors of the point 3 3

0 0

( ) i ji j

i jv x y c x y

The coefficients cij are obtained solving a 16x16 linear system

intensity levels of the 16 nearest neighbors of 3 3

0 0

( )i ji j

i jc x y x y

Digital Image Processing

Week 1

Generally bicubic interpolation does a better job of preserving fine detail than the

bilinear technique Bicubic interpolation is the standard used in commercial image editing

programs such as Adobe Photoshop and Corel Photopaint

Figure 2 (a) is the same as Fig 1 (d) which was obtained by reducing the resolution of

the 1250 dpi in Fig 1(a) to 72 dpi (the size shrank from 3692 times 2812 to 213 times 162) and

then zooming the reduced image back to its original size To generate Fig 1(d) nearest

neighbor interpolation was used (both for shrinking and zooming)

Figures 2(b) and (c) were generated using the same steps but using bilinear and bicubic

interpolation respectively Figures 2(d)+(e)+(f) were obtained by reducing the resolution

from 1250 dpi to 150 dpi (instead of 72 dpi)

Digital Image Processing

Week 1

Fig 2 ndash Interpolation examples for zooming and shrinking (nearest neighbor linear bicubic)

Digital Image Processing

Week 1

Neighbors of a Pixel

A pixel p at coordinates (x y) has 4 horizontal and vertical neighbors horizontal vertical( 1 ) ( 1 ) ( 1) ( 1)x y x y x y x y

This set of pixels called the 4-neighbors of p denoted by N4 (p)

The 4 diagonal neighbors of p have coordinates ( 1 1) ( 1 1) ( 1 1) ( 1 1)x y x y x y x y

and are denoted ND(p)

The horizontal vertical and diagonal neighbors are called the 8-neighbors of p denoted

N8 (p)

If (xy) is on the border of the image some of the neighbor locations in ND(p) and N8(p)

fall outside the image

Digital Image Processing

Week 1

Adjacency Connectivity Regions Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image V 01 (V=0 V=1)

- in a gray-scale image with 256 possible gray-levels V can be any subset of 0255

We consider 3 types of adjacency

(a) 4-adjacency two pixels p and q with values from V are 4-adjacent if 4( )q N p

(b) 8-adjacency two pixels p and q with values from V are 8-adjacent if 8( )q N p

(c) m-adjacency (mixed adjacency) two pixels p and q with values from V are

m-adjacent if

4( )q N p or

( )Dq N p and the set 4 4( ) ( )N p N q has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency It is introduced to eliminate the

ambiguities that often arise when 8-adjacency is used Consider the example

Digital Image Processing

Week 1

binary image

0 1 1 0 1 1 0 1 1

1 0 1 0 0 1 0 0 1 0

0 0 1 0 0 1 0 0 1

V

The three pixels at the top (first line) in the above example show multiple (ambiguous)

8-adjacency as indicated by the dashed lines This ambiguity is removed by using

m-adjacency

Digital Image Processing

Week 1

A (digital) path (or curve) from pixel p with coordinates (xy) to q with coordinates (st)

is a sequence of distinct pixels with coordinates

and are adjacent 0 0 1 1

1 1

( ) ( ) ( ) ( ) ( )( ) ( ) 12

n n

i i i i

x y x y x y x y s tx y x y i n

The length of the path is n If 0 0( ) ( )n nx y x y the path is closed

Depending on the type of adjacency considered the paths are 4- 8- or m-paths

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image R is a region of the image if R is a connected set

Two regions R1 and R2 are said to be adjacent if 1 2R R form a connected set Regions

that are not adjacent are said to be disjoint When referring to regions only 4- and

8-adjacency are considered

Digital Image Processing

Week 1

Suppose that an image contains K disjoint regions 1 kR k K none of which

touches the image border

the complement of 1

( )K

cu k u u

k

R R R R

We call all the points in Ru the foreground of the image and the points in ( )cuR the

background of the image

The boundary (border or contour) of a region R is the set of points that are adjacent to

points in the complement of R (R)c The border of an image is the set of pixels in the

region that have at least one background neighbor This definition is referred to as the

inner border to distinguish it from the notion of outer border which is the corresponding

border in the background

Digital Image Processing

Week 1

Distance measures

For pixels p q and z with coordinates (xy) (st) and (vw) respectively D is a distance

function or metric if

(a) D(p q) ge 0 D(p q) = 0 iff p=q

(b) D(p q) = D(q p)

(c) D(p z) le D(p q) + D(q z)

The Euclidean distance between p and q is defined as 1

2 2 2 22( ) ( ) ( ) ( ) ( )eD p q x s y t x s y t

The pixels q for which ( )eD p q r are the points contained in a disk of radius r

centered at (x y)

Digital Image Processing

Week 1

The D4 distance (also called city-block distance) between p and q is defined as

4( ) | | | |D p q x s y t

The pixels q for which 4( )D p q r form a diamond centered at (xy)

4

22 1 2

2 2 1 0 1 22 1 2

2

D

The pixels with D4 = 1 are the 4-neighbors of (x y)

The D8 distance (called the chessboard distance) between p and q is defined as

8( ) max| | | |D p q x s y t

The pixels q for which 8( )D p q r form a square centered at (x y)

Digital Image Processing

Week 1

8

2 2 2 2 22 1 1 1 2

2 2 1 0 1 22 1 1 1 22 2 2 2 2

D

The pixels with D8 = 1 are the 8-neighbors of (x y)

D4 and D8 distances are independent of any paths that might exist between p and q

because these distances involve only the coordinates of the point

Digital Image Processing

Week 1

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

11 12 11 12

21 22 21 22

a a b ba a b b

Array product

11 12 11 12 11 11 12 12

21 22 21 22 21 21 22 21

a a b b a b a ba a b b a b a b

Matrix product

11 12 11 12 11 11 12 21 11 12 12 21

21 22 21 22 21 11 22 21 21 12 22 22

a a b b a b a b a b a ba a b b a b a b a b a b

We assume array operations unless stated otherwise

Digital Image Processing

Week 1

Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether it is

linear or nonlinear

( ) ( )H f x y g x y

H is said to be a linear operator if

images1 2 1 2

1 2

( ) ( ) ( ) ( )

H a f x y b f x y a H f x y b H f x y

a b f f

Example of nonlinear operator

the maximum value of the pixels of image max ( )H f f x y f

1 2

0 2 6 5 1 1

2 3 4 7f f a b

Digital Image Processing

Week 1

1 2

0 2 6 5 6 3max max 1 ( 1) max 2

2 3 4 7 2 4a f b f

0 2 6 51 max ( 1) max 3 ( 1)7 4

2 3 4 7

Arithmetic Operations in Image Processing

Let g(xy) denote a corrupted image formed by the addition of noise ( ) ( ) ( )g x y f x y x y

f(xy) ndash the noiseless image η(xy) the noise uncorrelated and has 0 average value

For a random variable z with mean m E[(z-m)2] is the variance (E( ∙ ) is the expected

value) The covariance of two random variables z1 and z2 is defined as E[(z1-m1) (z2-m2)]

The two random variables are uncorrelated when their covariance is 0

Digital Image Processing

Week 1

Objective reduce noise by adding a set of noisy images ( )ig x y (technique frequently

used in image enhancement)

1

1( ) ( )K

ii

g x y g x yK

If the noise satisfies the properties stated above we have

2 2( ) ( )

1( ) ( ) g x y x yE g x y f x yK

( ( ))E g x y is the expected value of g and and 2 2( ) ( )g x y x y are the variances of

and g respectively The standard deviation (square root of the variance) at any point in

the average image is

( ) ( )1

g x y x yK

Digital Image Processing

Week 1

As K increases the variability (as measured by the variance or the standard deviation) of

the pixel values at each location (x y) decreases Because ( ) ( )E g x y f x y this

means that ( )g x y approaches f(x y) as the number of noisy images used in the

averaging process increases

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis Figure 226(a) shows an 8-bit image in which corruption

was simulated by adding to it Gaussian noise with zero mean and a standard deviation of

64 intensity levels Figures 226(b)-(f) show the result of averaging 5 10 20 50 and 100

images respectively

Digital Image Processing

Week 1

Fig 3 Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner) Results of averaging 5 10 20 50

100 noisy images

a b c d e f

Digital Image Processing

Week 1

A frequent application of image subtraction is in the enhancement of differences between

images

(a) (b) (c)

Fig 4 (a) Infrared image of Washington DC area (b) Image obtained from (a) by setting to zero the least

significant bit of each pixel (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a) The two images seem almost the same Figure 4(c) is the difference between

images (a) and (b) Black (0) values in Figure (c) indicate locations where there is no

difference between images (a) and (b)

Digital Image Processing

Week 1

Mask mode radiography ( ) ( ) ( )g x y f x y h x y

h(x y) the mask is an X-ray image of a region of a patientrsquos body captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source The procedure consists of injecting an X-ray contrast medium into the patientrsquos

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed

Digital Image Processing

Week 1

a b c d Fig 5 ndash Angiography ndash subtraction example (a) ndash mask image (b) ndash live image (c) ndash difference between (a) and (b) (d) - image (c) enhanced

Digital Image Processing

Week 1

An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form ( ) ( ) ( )g x y f x y h x y

f(x y) ndash the ldquoperfect imagerdquo h(x y) ndash the shading function

When the shading function is known

( )( )( )

g x yf x yh x y

h(x y) is unknown but we have access to the imaging system we can obtain an

approximation to the shading function by imaging a target of constant intensity When the

sensor is not available often the shading pattern can be estimated from the image

Digital Image Processing

Week 1

(a) (b) (c)

Fig 6 Shading correction (a) ndash Shaded image of a tungsten filament magnified 130 times (b) - shading pattern (c) corrected image

Digital Image Processing

Week 1

Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1rsquos (white) in the ROI and 0rsquos elsewhere There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

(a) (b) (c)

Fig 7 (a) ndash digital dental X-ray image (b) - ROI mask for teeth with fillings (c) product of (a) and (b)

Digital Image Processing

Week 1

In practice most images are displayed using 8 bits the image values are in the range [0255] TIFF JPEG images ndash conversion to this range is automatic The conversion depends on the system used Difference of two images can produce image with values in the range [-255255] Addition of two images ndash range [0510] Many software packages simply set the negative values to 0 and set to 255 all values greater than 255 A more appropriate procedure compute

min( )mf f f

0 ( 255)max( )

ms

m

ff K K K

f

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Digital Image Processing

Week 1


Bilinear interpolation – assign to the new location (x, y) the intensity

$v(x, y) = a x + b y + c x y + d$

where the four coefficients are determined from the 4 equations in 4 unknowns that can be written using the 4 nearest neighbors of point (x, y). Bilinear interpolation gives much better results than nearest neighbor interpolation, with a modest increase in computational effort.
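As an illustration, a minimal Python/NumPy sketch (the function name and the convention that x indexes rows are ours); it uses the closed-form weighting of the 4 nearest neighbors, which is equivalent to solving the 4×4 system for a, b, c, d:

import numpy as np

def bilinear(img, x, y):
    # corners of the unit cell containing (x, y); clamp at the image border
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, img.shape[0] - 1), min(y0 + 1, img.shape[1] - 1)
    dx, dy = x - x0, y - y0
    # weighted average of the 4 nearest neighbors
    return ((1 - dx) * (1 - dy) * img[x0, y0] + dx * (1 - dy) * img[x1, y0]
            + (1 - dx) * dy * img[x0, y1] + dx * dy * img[x1, y1])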

Bicubic interpolation – assign to the new location (x, y) an intensity that involves the 16 nearest neighbors of the point:

$v(x, y) = \sum_{i=0}^{3} \sum_{j=0}^{3} c_{ij}\, x^i y^j$

The sixteen coefficients $c_{ij}$ are obtained by solving the 16×16 linear system produced by matching this expression to the intensity levels of the 16 nearest neighbors of (x, y).

Generally bicubic interpolation does a better job of preserving fine detail than the

bilinear technique Bicubic interpolation is the standard used in commercial image editing

programs such as Adobe Photoshop and Corel Photopaint

Figure 2(a) is the same as Fig. 1(d), which was obtained by reducing the resolution of the 1250 dpi image in Fig. 1(a) to 72 dpi (the size shrank from 3692 × 2812 to 213 × 162 pixels) and then zooming the reduced image back to its original size; nearest neighbor interpolation was used both for shrinking and for zooming. Figures 2(b) and (c) were generated using the same steps but with bilinear and bicubic interpolation, respectively. Figures 2(d)–(f) were obtained by reducing the resolution from 1250 dpi to 150 dpi (instead of 72 dpi).

Fig 2 – Interpolation examples for zooming and shrinking (nearest neighbor, bilinear, bicubic).

Neighbors of a Pixel

A pixel p at coordinates (x, y) has 4 horizontal and vertical neighbors:
$(x+1, y), (x-1, y)$ (horizontal) and $(x, y+1), (x, y-1)$ (vertical).
This set of pixels, called the 4-neighbors of p, is denoted by $N_4(p)$.
The 4 diagonal neighbors of p have coordinates
$(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)$
and are denoted $N_D(p)$.

The horizontal, vertical, and diagonal neighbors together are called the 8-neighbors of p, denoted $N_8(p)$.

If (x, y) is on the border of the image, some of the neighbor locations in $N_D(p)$ and $N_8(p)$ fall outside the image.
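These definitions translate directly into code; a small illustrative sketch (no bounds checking, per the remark above):

def neighbors(x, y):
    n4 = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]                  # N4(p)
    nd = [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]  # ND(p)
    return n4, nd, n4 + nd                                                  # N8(p)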


Adjacency, Connectivity, Regions, Boundaries

Denote by V the set of intensity levels used to define adjacency:

- in a binary image, $V \subseteq \{0, 1\}$ (typically $V = \{1\}$, i.e. adjacency of pixels with value 1)
- in a gray-scale image with 256 possible gray levels, V can be any subset of $\{0, 1, \dots, 255\}$

We consider 3 types of adjacency:

(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if $q \in N_4(p)$
(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if $q \in N_8(p)$
(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if
$q \in N_4(p)$, or
$q \in N_D(p)$ and the set $N_4(p) \cap N_4(q)$ has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency, introduced to eliminate the ambiguities that often arise when 8-adjacency is used. Consider the following binary image, with $V = \{1\}$:

0 1 1
0 1 0
0 0 1

The pixels at the top of this example admit multiple (ambiguous) 8-adjacency paths, indicated by dashed lines in the original figure. This ambiguity is removed by using m-adjacency, under which only a single path connects them.
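A sketch of an m-adjacency test, assuming p and q lie inside the image; it uses the fact that for diagonal neighbors $N_4(p) \cap N_4(q)$ consists of exactly the two "corner" pixels (the function name is illustrative):

def m_adjacent(img, p, q, V=frozenset({1})):
    # p, q: (row, col) tuples assumed to lie inside img (a NumPy array)
    if img[p] not in V or img[q] not in V:
        return False
    (px, py), (qx, qy) = p, q
    dx, dy = abs(px - qx), abs(py - qy)
    if dx + dy == 1:                   # q in N4(p): always m-adjacent
        return True
    if dx == 1 and dy == 1:            # q in ND(p): check the two shared pixels
        shared = [(px, qy), (qx, py)]  # members of N4(p) ∩ N4(q)
        return all(img[s] not in V for s in shared)
    return False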


A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

$(x_0, y_0), (x_1, y_1), \dots, (x_n, y_n)$

where $(x_0, y_0) = (x, y)$, $(x_n, y_n) = (s, t)$, and pixels $(x_i, y_i)$ and $(x_{i-1}, y_{i-1})$ are adjacent for $i = 1, 2, \dots, n$.

The length of the path is n. If $(x_0, y_0) = (x_n, y_n)$, the path is closed.

Depending on the type of adjacency considered, the paths are 4-, 8-, or m-paths.

Let S denote a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting only of pixels from S. S is a connected set if there is a path in S between any 2 pixels in S.

Let R be a subset of pixels in an image. R is a region of the image if R is a connected set. Two regions $R_1$ and $R_2$ are said to be adjacent if $R_1 \cup R_2$ forms a connected set; regions that are not adjacent are said to be disjoint. When referring to regions, only 4- and 8-adjacency are considered.
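Connectivity can be checked with a breadth-first search; a minimal sketch for 4-connectivity (illustrative; V defaults to {1}, as in a binary image):

from collections import deque

def connected_in(img, p, q, V=frozenset({1})):
    """True if there is a 4-path between p and q through pixels with values in V."""
    if img[p] not in V or img[q] not in V:
        return False
    seen, frontier = {p}, deque([p])
    while frontier:
        x, y = frontier.popleft()
        if (x, y) == q:
            return True
        for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= n[0] < img.shape[0] and 0 <= n[1] < img.shape[1]
                    and n not in seen and img[n] in V):
                seen.add(n)
                frontier.append(n)
    return False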


Suppose that an image contains K disjoint regions $R_k$, $k = 1, \dots, K$, none of which touches the image border. Let $R_u = \bigcup_{k=1}^{K} R_k$ denote the union of all the regions, and let $(R_u)^c$ denote its complement.

We call all the points in $R_u$ the foreground of the image, and the points in $(R_u)^c$ the background of the image.

The boundary (border or contour) of a region R is the set of points that are adjacent to points in the complement of R, $(R)^c$. Equivalently, the border of a region is the set of pixels in the region that have at least one background neighbor. This definition is referred to as the inner border, to distinguish it from the notion of outer border, which is the corresponding border in the background.


Distance measures

For pixels p, q, and z with coordinates (x, y), (s, t), and (v, w) respectively, D is a distance function (or metric) if

(a) $D(p, q) \ge 0$, with $D(p, q) = 0$ iff $p = q$
(b) $D(p, q) = D(q, p)$
(c) $D(p, z) \le D(p, q) + D(q, z)$

The Euclidean distance between p and q is defined as

$D_e(p, q) = \left[(x - s)^2 + (y - t)^2\right]^{1/2}$

The pixels q for which $D_e(p, q) \le r$ are the points contained in a disk of radius r centered at (x, y).


The $D_4$ distance (also called city-block distance) between p and q is defined as

$D_4(p, q) = |x - s| + |y - t|$

The pixels q for which $D_4(p, q) \le r$ form a diamond centered at (x, y). For example, the pixels with $D_4 \le 2$ form the following contours of constant distance:

        2
    2   1   2
2   1   0   1   2
    2   1   2
        2

The pixels with $D_4 = 1$ are the 4-neighbors of (x, y).

The $D_8$ distance (called the chessboard distance) between p and q is defined as

$D_8(p, q) = \max(|x - s|, |y - t|)$

The pixels q for which $D_8(p, q) \le r$ form a square centered at (x, y). For example, the pixels with $D_8 \le 2$ form the following contours of constant distance:

2 2 2 2 2
2 1 1 1 2
2 1 0 1 2
2 1 1 1 2
2 2 2 2 2

The pixels with $D_8 = 1$ are the 8-neighbors of (x, y).

The $D_4$ and $D_8$ distances are independent of any paths that might exist between p and q, because they involve only the coordinates of the two points.
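The three metrics in code, as a direct transcription of the definitions:

import math

def d_e(p, q):                                    # Euclidean
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d_4(p, q):                                    # city-block
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d_8(p, q):                                    # chessboard
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))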


Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis. For example, for the 2×2 images

$\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}$ and $\begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix}$

the array product is

$\begin{pmatrix} a_{11} b_{11} & a_{12} b_{12} \\ a_{21} b_{21} & a_{22} b_{22} \end{pmatrix}$

while the matrix product is

$\begin{pmatrix} a_{11} b_{11} + a_{12} b_{21} & a_{11} b_{12} + a_{12} b_{22} \\ a_{21} b_{11} + a_{22} b_{21} & a_{21} b_{12} + a_{22} b_{22} \end{pmatrix}$

We assume array operations unless stated otherwise
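The distinction in NumPy, where * is the array (element-wise) product and @ is the matrix product:

import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])
print(a * b)   # array product:  [[ 5 12] [21 32]]
print(a @ b)   # matrix product: [[19 22] [43 50]]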


Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether a method is linear or nonlinear. Consider an operator H that produces an output image g from an input image f:

$H[f(x, y)] = g(x, y)$

H is said to be a linear operator if

$H[a f_1(x, y) + b f_2(x, y)] = a\, H[f_1(x, y)] + b\, H[f_2(x, y)]$

for any scalars a, b and any images $f_1$, $f_2$.

Example of a nonlinear operator: $H[f] = \max f(x, y)$, the maximum value of the pixels of image f. Take

$f_1 = \begin{pmatrix} 0 & 2 \\ 2 & 3 \end{pmatrix}, \quad f_2 = \begin{pmatrix} 6 & 5 \\ 4 & 7 \end{pmatrix}, \quad a = 1, \; b = -1$

Then

$\max\{1 \cdot f_1 + (-1) \cdot f_2\} = \max \begin{pmatrix} -6 & -3 \\ -2 & -4 \end{pmatrix} = -2$

while

$1 \cdot \max\{f_1\} + (-1) \cdot \max\{f_2\} = 3 - 7 = -4 \neq -2$

so the max operator is not linear.
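The same counterexample can be checked numerically:

import numpy as np

f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 5], [4, 7]])
a, b = 1, -1
print(np.max(a * f1 + b * f2))           # -2
print(a * np.max(f1) + b * np.max(f2))   # 3 - 7 = -4, so max is not linear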

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise:

$g(x, y) = f(x, y) + \eta(x, y)$

where f(x, y) is the noiseless image and η(x, y) is the noise, assumed uncorrelated and with zero average value.

For a random variable z with mean m, $E[(z - m)^2]$ is the variance ($E(\cdot)$ denotes the expected value). The covariance of two random variables $z_1$ and $z_2$ is defined as $E[(z_1 - m_1)(z_2 - m_2)]$; the two random variables are uncorrelated when their covariance is 0.


Objective: reduce noise by averaging a set of noisy images $\{g_i(x, y)\}$ (a technique frequently used in image enhancement):

$\bar{g}(x, y) = \frac{1}{K} \sum_{i=1}^{K} g_i(x, y)$

If the noise satisfies the properties stated above, we have

$E\{\bar{g}(x, y)\} = f(x, y), \qquad \sigma^2_{\bar{g}(x, y)} = \frac{1}{K}\, \sigma^2_{\eta(x, y)}$

where $E\{\bar{g}(x, y)\}$ is the expected value of $\bar{g}$, and $\sigma^2_{\bar{g}(x, y)}$ and $\sigma^2_{\eta(x, y)}$ are the variances of $\bar{g}$ and η, respectively. The standard deviation (square root of the variance) at any point of the average image is

$\sigma_{\bar{g}(x, y)} = \frac{1}{\sqrt{K}}\, \sigma_{\eta(x, y)}$

As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because $E\{\bar{g}(x, y)\} = f(x, y)$, this means that $\bar{g}(x, y)$ approaches f(x, y) as the number of noisy images used in the averaging process increases.
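A small simulation of this effect, with a synthetic constant image standing in for f(x, y) and Gaussian noise of standard deviation 64, as in the example below; the residual noise shrinks roughly as 64/√K:

import numpy as np

rng = np.random.default_rng(0)
f = np.full((256, 256), 100.0)                         # stand-in "noiseless" image
noisy = [f + rng.normal(0.0, 64.0, f.shape) for _ in range(100)]
for K in (1, 5, 10, 20, 50, 100):
    g_bar = np.mean(noisy[:K], axis=0)
    print(K, round((g_bar - f).std(), 1))              # residual noise ~ 64 / sqrt(K)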

An important application of image averaging is in the field of astronomy, where imaging under very low light levels frequently causes sensor noise to render single images virtually useless for analysis. Figure 3(a) shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels. Figures 3(b)–(f) show the results of averaging 5, 10, 20, 50, and 100 such noisy images, respectively.

Fig 3 – (a) Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise; (b)–(f) results of averaging 5, 10, 20, 50, and 100 noisy images.


A frequent application of image subtraction is in the enhancement of differences between images.

Fig 4 – (a) Infrared image of the Washington DC area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) difference between the two images.

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between images (a) and (b); black (0) values in Fig. 4(c) indicate locations where there is no difference between images (a) and (b).


Mask mode radiography: $g(x, y) = f(x, y) - h(x, y)$

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images, called live images (denoted f(x, y)), of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium.

In g(x, y) we find the differences between h and f, shown as enhanced detail. Because the images are captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries in the area being observed.

Fig 5 – Angiography subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c), enhanced.


An important application of image multiplication (and division) is shading correction. Suppose that an imaging sensor produces images of the form

$g(x, y) = f(x, y)\, h(x, y)$

where f(x, y) is the "perfect image" and h(x, y) the shading function. When the shading function is known,

$f(x, y) = \frac{g(x, y)}{h(x, y)}$

When h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.
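A minimal sketch of shading correction by division, assuming an estimated shading pattern h of the same shape as g (the epsilon guard against division by zero is our addition):

import numpy as np

def shading_correct(g, h, eps=1e-6):
    # h: estimated shading pattern, same shape as g
    return g / np.maximum(h, eps)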

Fig 6 – Shading correction: (a) shaded image of a tungsten filament, magnified approximately 130 times; (b) shading pattern; (c) corrected image.


Another use of image multiplication is in masking, also called region of interest (ROI) operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of the ROI can be arbitrary, although usually it is rectangular.

Fig 7 – (a) Digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b).


In practice most images are displayed using 8 bits, so image values are expected to lie in the range [0, 255] (for TIFF and JPEG images the conversion to this range is automatic, and depends on the system used). The difference of two images can produce values in the range [−255, 255], and the addition of two images values in the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure is to compute

$f_m = f - \min(f)$

which is nonnegative, and then

$f_s = K \frac{f_m}{\max(f_m)}$

whose values occupy the full range [0, K] (K = 255 for 8-bit display).
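The same procedure in code (f is a NumPy array, assumed not constant so that max(f_m) > 0):

def rescale(f, K=255):
    fm = f - f.min()            # f_m = f - min(f), nonnegative
    return K * fm / fm.max()    # f_s spans the full range [0, K]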


Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the intensity values of individual pixels: $s = T(z)$, where z is the intensity of a pixel in the original image and s is the intensity of the corresponding pixel in the processed image


Neighborhood operations

Let $S_{xy}$ denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y) based on the values of the intensities of the points in $S_{xy}$. For example, if $S_{xy}$ is a rectangular neighborhood of size m × n centered at (x, y), we can assign the new value of intensity by computing the average value of the pixels in $S_{xy}$:

$g(x, y) = \frac{1}{mn} \sum_{(r, c) \in S_{xy}} f(r, c)$

The net effect is to perform local blurring of the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
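A direct (unoptimized) sketch of this neighborhood average, using zero padding at the borders so every pixel has a full m × n neighborhood:

import numpy as np

def local_mean(f, m=3, n=3):
    pad = np.pad(f.astype(float), ((m // 2, m // 2), (n // 2, n // 2)))
    out = np.empty(f.shape)
    for i in range(f.shape[0]):
        for j in range(f.shape[1]):
            out[i, j] = pad[i:i + m, j:j + n].mean()
    return out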


Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image
- these transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules)

A geometric transformation consists of 2 basic operations:

1. a spatial transformation of coordinates
2. intensity interpolation, which assigns intensity values to the spatially transformed pixels

The coordinate system transformation: $(x, y) = T[(v, w)]$

(v, w) – pixel coordinates in the original image
(x, y) – pixel coordinates in the transformed image

For example, $T[(v, w)] = (v/2, w/2)$ shrinks the original image to half its size in both spatial directions.

Affine transform:

$[x \;\; y \;\; 1] = [v \;\; w \;\; 1]\, T = [v \;\; w \;\; 1] \begin{bmatrix} t_{11} & t_{12} & 0 \\ t_{21} & t_{22} & 0 \\ t_{31} & t_{32} & 1 \end{bmatrix}$   (AT)

that is, $x = t_{11} v + t_{21} w + t_{31}$ and $y = t_{12} v + t_{22} w + t_{32}$.

This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3×3 matrix equal to the matrix product of the scaling, rotation, and translation matrices from Table 1.

Table 1 – Affine transformations: the scaling, rotation, translation, and shearing matrices.


The preceding transformations relocate pixels on an image to new locations. To complete the process, we have to assign intensity values to those locations; this task is done using intensity interpolation (nearest neighbor, bilinear, or bicubic interpolation).

In practice, we can use equation (AT) in two basic ways:

forward mapping: scan the pixels of the input image and, for each location (v, w), compute the spatial location (x, y) of the corresponding pixel in the new image using (AT) directly.

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)


inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image: $(v, w) = T^{-1}[(x, y)]$. Then interpolate among the nearest input pixels to determine the intensity of the output pixel value.

Inverse mappings are more efficient to implement than forward mappings and are used in numerous commercial implementations of spatial transformations (MATLAB, for example).
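A minimal inverse-mapping sketch with nearest neighbor interpolation (illustrative only; real implementations vectorize this loop and offer bilinear/bicubic interpolation):

import numpy as np

def affine_warp(img, T_inv, out_shape):
    # T_inv maps output coordinates [x y 1] back to input coordinates [v w 1]
    out = np.zeros(out_shape, dtype=img.dtype)
    for x in range(out_shape[0]):
        for y in range(out_shape[1]):
            v, w, _ = np.array([x, y, 1.0]) @ T_inv
            vi, wi = int(round(v)), int(round(w))      # nearest neighbor
            if 0 <= vi < img.shape[0] and 0 <= wi < img.shape[1]:
                out[x, y] = img[vi, wi]
    return out

# e.g. zooming by a factor of 2 uses T = diag(2, 2, 1), so T_inv = np.diag([0.5, 0.5, 1.0])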


Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images, but the specific transformation that produced the output image from the input is generally unknown. The problem is to estimate the transformation function and then use it to register the two images. For example:

- it may be of interest to align (register) two or more images taken at approximately the same time but using different imaging systems (e.g., an MRI scanner and a PET scanner)
- or to align images of a given location taken by the same instrument at different moments in time (e.g., satellite images)

One approach solves the problem using tie points (also called control points): corresponding points whose locations are known precisely in the input and reference images.


How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points, both in the input image and in the reference image. A simple model based on a bilinear approximation is given by

$x = c_1 v + c_2 w + c_3 v w + c_4$
$y = c_5 v + c_6 w + c_7 v w + c_8$

where (v, w) and (x, y) are the coordinates of corresponding tie points (the 4 pairs of tie points give an 8×8 linear system for the $c_i$).
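Because x and y each depend only on their own 4 coefficients, the 8×8 system decouples into two 4×4 systems; a sketch (function name and array layout are ours):

import numpy as np

def fit_bilinear_model(vw, xy):
    # vw, xy: (4, 2) arrays of corresponding tie-point coordinates
    v, w = vw[:, 0], vw[:, 1]
    A = np.column_stack([v, w, v * w, np.ones(4)])
    c14 = np.linalg.solve(A, xy[:, 0])   # c1..c4
    c58 = np.linalg.solve(A, xy[:, 1])   # c5..c8
    return c14, c58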


When 4 tie points are insufficient to obtain satisfactory registration, an approach used frequently is to select a larger number of tie points and, using this new set of tie points, subdivide the image into rectangular regions marked by groups of 4 tie points. On each subregion marked by 4 tie points we apply the transformation model described above. The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometrical distortion.

Fig – Registration example: (a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c).


Probabilistic Methods

$z_i$, $i = 0, 1, \dots, L-1$ – the values of all possible intensities in an M×N digital image

$p(z_k)$ – the probability that intensity level $z_k$ occurs in the given image:

$p(z_k) = \frac{n_k}{MN}$

where $n_k$ is the number of times that intensity $z_k$ occurs in the image (MN is the total number of pixels in the image). Note that

$\sum_{k=0}^{L-1} p(z_k) = 1$

The mean (average) intensity of an image is given by

$m = \sum_{k=0}^{L-1} z_k\, p(z_k)$

The variance of the intensities is

$\sigma^2 = \sum_{k=0}^{L-1} (z_k - m)^2\, p(z_k)$

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation σ is used.

The n-th moment of a random variable z about the mean is defined as

$\mu_n(z) = \sum_{k=0}^{L-1} (z_k - m)^n\, p(z_k)$

(Note that $\mu_0(z) = 1$, $\mu_1(z) = 0$, and $\mu_2(z) = \sigma^2$.)

$\mu_3(z) > 0$: the intensities are biased toward values higher than the mean
$\mu_3(z) < 0$: the intensities are biased toward values lower than the mean
$\mu_3(z) = 0$: the intensities are distributed approximately equally on both sides of the mean
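These histogram-based statistics in code (assumes an integer-valued image with intensities in 0..L−1):

import numpy as np

def intensity_stats(img, L=256):
    p = np.bincount(img.ravel(), minlength=L) / img.size   # p(z_k) = n_k / MN
    z = np.arange(L)
    m = (z * p).sum()                   # mean
    mu2 = ((z - m) ** 2 * p).sum()      # variance (mu_2)
    mu3 = ((z - m) ** 3 * p).sum()      # third central moment (skew direction)
    return m, mu2, mu3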

Fig 1 – (a) Low contrast; (b) medium contrast; (c) high contrast.

Figure 1(a) – standard deviation 14.3 (variance = 204.5)
Figure 1(b) – standard deviation 31.6 (variance = 998.6)
Figure 1(c) – standard deviation 49.2 (variance = 2420.6)


Intensity Transformations and Spatial Filtering

$g(x, y) = T[f(x, y)]$

f(x, y) – input image; g(x, y) – output image; T – an operator on f, defined over a neighborhood of (x, y)

- the neighborhood $S_{xy}$ of the point (x, y) usually is rectangular, centered on (x, y), and much smaller in size than the image

- spatial filtering: the operator T (the neighborhood together with the operation applied on it) is called a spatial filter (also: spatial mask, kernel, template, or window)

- when the neighborhood $S_{xy}$ reduces to the single point (x, y) (a 1×1 neighborhood), T becomes an intensity (gray-level or mapping) transformation function:

$s = T(r)$

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2 (left) – T produces an output image of higher contrast than the original, by darkening the intensity levels below k and brightening the levels above k; this technique is called contrast stretching.

Fig 2 – Intensity transformation functions: left – contrast stretching; right – thresholding function.


Figure 2 (right) – T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L−1] is obtained using the function

$s = T(r) = L - 1 - r$

- the equivalent of a photographic negative
- a technique suited for enhancing white or gray detail embedded in dark regions of an image
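In code (for a NumPy array, or a scalar, with intensities in [0, L−1]):

def negative(img, L=256):
    # s = (L - 1) - r, element-wise
    return (L - 1) - img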

Fig – Original image and its negative.

Log Transformations:

$s = T(r) = c \log(1 + r)$, with c a constant and $r \ge 0$

Fig – Plots of some basic intensity transformation functions.


This transformation maps a narrow range of low intensity values in the input into a wider range of output values. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values; the opposite is true of the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a) – intensity values in the range 0 to $1.5 \times 10^6$
Figure 4(b) – log transformation of Figure 4(a) with c = 1; range 0 to 6.2

Fig 4 – (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.
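A sketch of the log transformation (scaling the result back to [0, 255] for display can reuse the rescale() sketch above):

import numpy as np

def log_transform(img, c=1.0):
    # s = c * log(1 + r); log1p is numerically accurate for small r
    return c * np.log1p(img.astype(float))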


Power-Law (Gamma) Transformations

$s = T(r) = c\, r^{\gamma}$, with c and γ positive constants (sometimes written $s = c (r + \varepsilon)^{\gamma}$, to account for a measurable output when the input is zero)

Fig – Plots of the gamma transformation for different values of γ (c = 1).


Power-law curves with $\gamma < 1$ map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. The curves with $\gamma > 1$ have the opposite effect of those generated with $\gamma < 1$. For $c = \gamma = 1$, the transformation reduces to the identity.

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
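A sketch of the gamma transformation, normalizing intensities to [0, 1] before applying the power law (img is a NumPy array):

def gamma_transform(img, gamma, c=1.0, L=256):
    r = img.astype(float) / (L - 1)      # normalize intensities to [0, 1]
    return (L - 1) * c * r ** gamma      # s = c * r^gamma, rescaled for display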

Fig – (a) Aerial image; (b)–(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively.

Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so that it spans the full intensity range of the recording tool or display device

Fig 5 – (a) form of the piecewise-linear transformation function; (b) low-contrast 8-bit image; (c) result of contrast stretching; (d) result of thresholding.

The transformation is defined by the two points $(r_1, s_1)$ and $(r_2, s_2)$:

$T(r) = \begin{cases} \dfrac{s_1}{r_1}\, r, & r \in [0, r_1] \\ s_1 + \dfrac{s_2 - s_1}{r_2 - r_1} (r - r_1), & r \in (r_1, r_2] \\ s_2 + \dfrac{L - 1 - s_2}{L - 1 - r_2} (r - r_2), & r \in (r_2, L - 1] \end{cases}$


If $r_1 = s_1$ and $r_2 = s_2$, T is the identity transformation (no change).
If $r_1 = r_2$, $s_1 = 0$ and $s_2 = L - 1$, T is a thresholding function.
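A sketch of the general three-segment function (assumes $0 < r_1 < r_2 < L - 1$ so that no denominator vanishes):

import numpy as np

def contrast_stretch(r, r1, s1, r2, s2, L=256):
    r = np.asarray(r, dtype=float)
    low = s1 / r1 * r
    mid = s1 + (s2 - s1) / (r2 - r1) * (r - r1)
    high = s2 + (L - 1 - s2) / (L - 1 - r2) * (r - r2)
    return np.where(r < r1, low, np.where(r <= r2, mid, high))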

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) – contrast stretching obtained by setting the parameters $(r_1, s_1) = (r_{\min}, 0)$ and $(r_2, s_2) = (r_{\max}, L - 1)$, where $r_{\min}$ and $r_{\max}$ denote the minimum and maximum gray levels in the image, respectively. Thus the transformation function stretched the levels linearly from their original range to the full range [0, L−1].

Figure 5(d) – the thresholding function was used, with $(r_1, s_1) = (m, 0)$ and $(r_2, s_2) = (m, L - 1)$, where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times


Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities: the transformation highlights range [A, B] and reduces all other intensities to a lower level, producing a binary image
2. brighten (or darken) the desired range of intensities but leave unchanged all other intensities in the image: the transformation highlights range [A, B] and preserves all other intensities
Both approaches are sketched in code below.
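Minimal sketches of the two approaches (function names are illustrative; the highlighted value in the second variant is a free parameter):

import numpy as np

def slice_binary(img, A, B, L=256):
    # approach 1: white inside [A, B], black elsewhere
    return np.where((img >= A) & (img <= B), L - 1, 0)

def slice_preserve(img, A, B, value=255):
    # approach 2: set [A, B] to a chosen value, keep the rest unchanged
    return np.where((img >= A) & (img <= B), value, img)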


Figure 6 (left) – aortic angiogram near the kidney. The purpose of intensity slicing here is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the scale of intensities. This type of enhancement produces a binary image,


which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example).

In Figure 6 (right) the second technique was used: a band of intensities around the mean intensity of the image was set to black, while the other intensities remain unchanged.

Fig 6 – Aortic angiogram and intensity-sliced versions.


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255] with an 8-bit representation in base 2. This technique highlights the contribution made to the whole image appearance by each of the bits: an 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit, plane 8 – the highest-order bit).


The binary image for the 8th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all intensities between 0 and 127 to 0, and all intensities between 128 and 255 to 1. The bit-slicing technique is useful for analyzing the relative importance of each bit in the image, which helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
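A sketch of bit-plane extraction for an 8-bit unsigned integer image:

def bit_plane(img, k):
    # k = 1 -> lowest-order bit, ..., k = 8 -> highest-order bit
    return (img >> (k - 1)) & 1

# e.g. the 8th plane equals thresholding at 128: (img >= 128)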

  • DIP 1 2017
  • DIP 02 (2017)
Page 81: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image Processing

Week 1

Generally bicubic interpolation does a better job of preserving fine detail than the

bilinear technique Bicubic interpolation is the standard used in commercial image editing

programs such as Adobe Photoshop and Corel Photopaint

Figure 2 (a) is the same as Fig 1 (d) which was obtained by reducing the resolution of

the 1250 dpi in Fig 1(a) to 72 dpi (the size shrank from 3692 times 2812 to 213 times 162) and

then zooming the reduced image back to its original size To generate Fig 1(d) nearest

neighbor interpolation was used (both for shrinking and zooming)

Figures 2(b) and (c) were generated using the same steps but using bilinear and bicubic

interpolation respectively Figures 2(d)+(e)+(f) were obtained by reducing the resolution

from 1250 dpi to 150 dpi (instead of 72 dpi)

Digital Image Processing

Week 1

Fig 2 ndash Interpolation examples for zooming and shrinking (nearest neighbor linear bicubic)

Digital Image Processing

Week 1

Neighbors of a Pixel

A pixel p at coordinates (x y) has 4 horizontal and vertical neighbors horizontal vertical( 1 ) ( 1 ) ( 1) ( 1)x y x y x y x y

This set of pixels called the 4-neighbors of p denoted by N4 (p)

The 4 diagonal neighbors of p have coordinates ( 1 1) ( 1 1) ( 1 1) ( 1 1)x y x y x y x y

and are denoted ND(p)

The horizontal vertical and diagonal neighbors are called the 8-neighbors of p denoted

N8 (p)

If (xy) is on the border of the image some of the neighbor locations in ND(p) and N8(p)

fall outside the image

Digital Image Processing

Week 1

Adjacency Connectivity Regions Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image V 01 (V=0 V=1)

- in a gray-scale image with 256 possible gray-levels V can be any subset of 0255

We consider 3 types of adjacency

(a) 4-adjacency two pixels p and q with values from V are 4-adjacent if 4( )q N p

(b) 8-adjacency two pixels p and q with values from V are 8-adjacent if 8( )q N p

(c) m-adjacency (mixed adjacency) two pixels p and q with values from V are

m-adjacent if

4( )q N p or

( )Dq N p and the set 4 4( ) ( )N p N q has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency It is introduced to eliminate the

ambiguities that often arise when 8-adjacency is used Consider the example

Digital Image Processing

Week 1

binary image

0 1 1 0 1 1 0 1 1

1 0 1 0 0 1 0 0 1 0

0 0 1 0 0 1 0 0 1

V

The three pixels at the top (first line) in the above example show multiple (ambiguous)

8-adjacency as indicated by the dashed lines This ambiguity is removed by using

m-adjacency

Digital Image Processing

Week 1

A (digital) path (or curve) from pixel p with coordinates (xy) to q with coordinates (st)

is a sequence of distinct pixels with coordinates

and are adjacent 0 0 1 1

1 1

( ) ( ) ( ) ( ) ( )( ) ( ) 12

n n

i i i i

x y x y x y x y s tx y x y i n

The length of the path is n If 0 0( ) ( )n nx y x y the path is closed

Depending on the type of adjacency considered the paths are 4- 8- or m-paths

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image R is a region of the image if R is a connected set

Two regions R1 and R2 are said to be adjacent if 1 2R R form a connected set Regions

that are not adjacent are said to be disjoint When referring to regions only 4- and

8-adjacency are considered

Digital Image Processing

Week 1

Suppose that an image contains K disjoint regions 1 kR k K none of which

touches the image border

the complement of 1

( )K

cu k u u

k

R R R R

We call all the points in Ru the foreground of the image and the points in ( )cuR the

background of the image

The boundary (border or contour) of a region R is the set of points that are adjacent to

points in the complement of R (R)c The border of an image is the set of pixels in the

region that have at least one background neighbor This definition is referred to as the

inner border to distinguish it from the notion of outer border which is the corresponding

border in the background

Digital Image Processing

Week 1

Distance measures

For pixels p q and z with coordinates (xy) (st) and (vw) respectively D is a distance

function or metric if

(a) D(p q) ge 0 D(p q) = 0 iff p=q

(b) D(p q) = D(q p)

(c) D(p z) le D(p q) + D(q z)

The Euclidean distance between p and q is defined as 1

2 2 2 22( ) ( ) ( ) ( ) ( )eD p q x s y t x s y t

The pixels q for which ( )eD p q r are the points contained in a disk of radius r

centered at (x y)

Digital Image Processing

Week 1

The D4 distance (also called city-block distance) between p and q is defined as

4( ) | | | |D p q x s y t

The pixels q for which 4( )D p q r form a diamond centered at (xy)

4

22 1 2

2 2 1 0 1 22 1 2

2

D

The pixels with D4 = 1 are the 4-neighbors of (x y)

The D8 distance (called the chessboard distance) between p and q is defined as

8( ) max| | | |D p q x s y t

The pixels q for which 8( )D p q r form a square centered at (x y)

Digital Image Processing

Week 1

8

2 2 2 2 22 1 1 1 2

2 2 1 0 1 22 1 1 1 22 2 2 2 2

D

The pixels with D8 = 1 are the 8-neighbors of (x y)

D4 and D8 distances are independent of any paths that might exist between p and q

because these distances involve only the coordinates of the point

Digital Image Processing

Week 1

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

11 12 11 12

21 22 21 22

a a b ba a b b

Array product

11 12 11 12 11 11 12 12

21 22 21 22 21 21 22 21

a a b b a b a ba a b b a b a b

Matrix product

11 12 11 12 11 11 12 21 11 12 12 21

21 22 21 22 21 11 22 21 21 12 22 22

a a b b a b a b a b a ba a b b a b a b a b a b

We assume array operations unless stated otherwise

Digital Image Processing

Week 1

Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether it is

linear or nonlinear

( ) ( )H f x y g x y

H is said to be a linear operator if

images1 2 1 2

1 2

( ) ( ) ( ) ( )

H a f x y b f x y a H f x y b H f x y

a b f f

Example of nonlinear operator

the maximum value of the pixels of image max ( )H f f x y f

1 2

0 2 6 5 1 1

2 3 4 7f f a b

Digital Image Processing

Week 1

1 2

0 2 6 5 6 3max max 1 ( 1) max 2

2 3 4 7 2 4a f b f

0 2 6 51 max ( 1) max 3 ( 1)7 4

2 3 4 7

Arithmetic Operations in Image Processing

Let g(xy) denote a corrupted image formed by the addition of noise ( ) ( ) ( )g x y f x y x y

f(xy) ndash the noiseless image η(xy) the noise uncorrelated and has 0 average value

For a random variable z with mean m E[(z-m)2] is the variance (E( ∙ ) is the expected

value) The covariance of two random variables z1 and z2 is defined as E[(z1-m1) (z2-m2)]

The two random variables are uncorrelated when their covariance is 0

Digital Image Processing

Week 1

Objective reduce noise by adding a set of noisy images ( )ig x y (technique frequently

used in image enhancement)

1

1( ) ( )K

ii

g x y g x yK

If the noise satisfies the properties stated above we have

2 2( ) ( )

1( ) ( ) g x y x yE g x y f x yK

( ( ))E g x y is the expected value of g and and 2 2( ) ( )g x y x y are the variances of

and g respectively The standard deviation (square root of the variance) at any point in

the average image is

( ) ( )1

g x y x yK

Digital Image Processing

Week 1

As K increases the variability (as measured by the variance or the standard deviation) of

the pixel values at each location (x y) decreases Because ( ) ( )E g x y f x y this

means that ( )g x y approaches f(x y) as the number of noisy images used in the

averaging process increases

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis Figure 226(a) shows an 8-bit image in which corruption

was simulated by adding to it Gaussian noise with zero mean and a standard deviation of

64 intensity levels Figures 226(b)-(f) show the result of averaging 5 10 20 50 and 100

images respectively

Digital Image Processing

Week 1

Fig 3 Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner) Results of averaging 5 10 20 50

100 noisy images

a b c d e f

Digital Image Processing

Week 1

A frequent application of image subtraction is in the enhancement of differences between

images

(a) (b) (c)

Fig 4 (a) Infrared image of Washington DC area (b) Image obtained from (a) by setting to zero the least

significant bit of each pixel (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a) The two images seem almost the same Figure 4(c) is the difference between

images (a) and (b) Black (0) values in Figure (c) indicate locations where there is no

difference between images (a) and (b)

Digital Image Processing

Week 1

Mask mode radiography ( ) ( ) ( )g x y f x y h x y

h(x y) the mask is an X-ray image of a region of a patientrsquos body captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source The procedure consists of injecting an X-ray contrast medium into the patientrsquos

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed

Digital Image Processing

Week 1

a b c d Fig 5 ndash Angiography ndash subtraction example (a) ndash mask image (b) ndash live image (c) ndash difference between (a) and (b) (d) - image (c) enhanced

Digital Image Processing

Week 1

An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form ( ) ( ) ( )g x y f x y h x y

f(x y) ndash the ldquoperfect imagerdquo h(x y) ndash the shading function

When the shading function is known

( )( )( )

g x yf x yh x y

h(x y) is unknown but we have access to the imaging system we can obtain an

approximation to the shading function by imaging a target of constant intensity When the

sensor is not available often the shading pattern can be estimated from the image

Digital Image Processing

Week 1

(a) (b) (c)

Fig 6 Shading correction (a) ndash Shaded image of a tungsten filament magnified 130 times (b) - shading pattern (c) corrected image

Digital Image Processing

Week 1

Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1rsquos (white) in the ROI and 0rsquos elsewhere There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

(a) (b) (c)

Fig 7 (a) ndash digital dental X-ray image (b) - ROI mask for teeth with fillings (c) product of (a) and (b)

Digital Image Processing

Week 1

In practice most images are displayed using 8 bits the image values are in the range [0255] TIFF JPEG images ndash conversion to this range is automatic The conversion depends on the system used Difference of two images can produce image with values in the range [-255255] Addition of two images ndash range [0510] Many software packages simply set the negative values to 0 and set to 255 all values greater than 255 A more appropriate procedure compute

min( )mf f f

0 ( 255)max( )

ms

m

ff K K K

f

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Digital Image Processing

Week 1

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

( )

1( ) ( )xyr c S

g x y f r cm n

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation ( ) [( )]x y T v w

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image

Digital Image Processing

Week 1

[( )] ( )2 2v wT v w shrinks the original image half its size in both spatial directions

Affine transform

11 1211 21 31

21 2212 22 33

31 32

0[ 1] [ 1] [ 1] 0

1

t tx t v t w t

x y v w T v w t ty t v t w t

t t

(AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

Digital Image Processing

Week 1

inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image (v w) 1( ) ( )v w T x y

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

g(x, y) = T[f(x, y)]

f(x, y) – input image; g(x, y) – output image; T – an operator on f defined over a neighborhood of (x, y)

- the neighborhood of the point (x, y), S_xy, usually is rectangular, centered on (x, y), and much smaller in size than the image


- spatial filtering: the operator T (the neighborhood and the operation applied on it) is called a spatial filter (spatial mask, kernel, template, or window)

When S_xy is reduced to the single point (x, y), T becomes an intensity (gray-level or mapping) transformation function

s = T(r)

where s and r denote, respectively, the intensities of g and f at (x, y).

Figure 2, left: T produces an output image of higher contrast than the original by darkening the intensity levels below k and brightening the levels above k – this technique is called contrast stretching.

Fig. 2 Intensity transformation functions: left – contrast stretching; right – thresholding function


Figure 2, right: T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L−1] is obtained using the function

s = T(r) = L − 1 − r

- the equivalent of a photographic negative

- a technique suited for enhancing white or gray detail embedded in dark regions of an image
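A minimal sketch, assuming an 8-bit (uint8) NumPy image so that L = 256:

import numpy as np

def negative(img, L=256):
    # s = T(r) = (L - 1) - r, applied pixel-wise; for uint8 input with
    # L = 256 the subtraction stays in range.
    return (L - 1) - img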


Original image (left) – negative image (right)


Log Transformations: s = T(r) = c · log(1 + r), where c is a constant and r ≥ 0

Some basic intensity transformation functions


This transformation maps a narrow range of low intensity values in the input into a wider range. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values. The opposite is true for the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.
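A sketch of the transformation and the rescaling used for display (the spectrum here is stand-in random data, not the image of Fig. 4):

import numpy as np

def log_transform(r, c=1.0):
    # s = c * log(1 + r); assumes r >= 0 (e.g., the magnitude of a spectrum).
    return c * np.log1p(r)

# Rescale the result to 8 bits for display, as done for Fig. 4(b):
spectrum = np.abs(np.fft.fft2(np.random.rand(64, 64)))   # stand-in data
s = log_transform(spectrum)
display = np.uint8(255 * s / s.max())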

Figure 4(a) – intensity values in the range 0 to 1.5 × 10^6
Figure 4(b) – log transformation of Figure 4(a) with c = 1 – range 0 to 6.2

Fig. 4 (a) – Fourier spectrum; (b) – log transformation applied to (a), c = 1


Power-Law (Gamma) Transformations

s = T(r) = c · r^γ (sometimes written s = c · (r + ε)^γ); c and γ – positive constants

Plots of gamma transformation for different values of γ (c=1)


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. Curves with γ > 1 have the opposite effect of those generated with γ < 1.

c = γ = 1: identity transformation

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
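A sketch of the transformation, assuming a uint8 image normalized to [0, 1] before applying the power law:

import numpy as np

def gamma_transform(img, gamma, c=1.0, L=256):
    # s = c * r**gamma on intensities normalized to [0, 1], then rescaled.
    r = img / (L - 1)
    s = np.clip(c * r ** gamma, 0.0, 1.0)
    return np.uint8(s * (L - 1))

gamma < 1 brightens dark regions; gamma > 1 (e.g., the values 3.0–5.0 used in the aerial example below) darkens the image and increases contrast in bright regions.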


(a) – aerial image; (b)–(d) – results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0 and 5.0, respectively


Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so that it spans the full intensity range of the recording tool or display device

Fig. 5 (a) – form of the transformation function; (b) – low-contrast image; (c) – result of contrast stretching; (d) – result of thresholding


T(r) = (s_1 / r_1) · r                                      for r in [0, r_1)
     = s_1 + ((s_2 − s_1) / (r_2 − r_1)) · (r − r_1)        for r in [r_1, r_2]
     = s_2 + ((L − 1 − s_2) / (L − 1 − r_2)) · (r − r_2)    for r in (r_2, L − 1]


r_1 = s_1, r_2 = s_2: identity transformation (no change)
r_1 = r_2, s_1 = 0, s_2 = L − 1: thresholding function

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c) – contrast stretching, obtained by setting the parameters r_1 = r_min, s_1 = 0, r_2 = r_max, s_2 = L − 1, where r_min and r_max denote the minimum and maximum gray levels in the image, respectively. Thus, the transformation function stretched the levels linearly from their original range to the full range [0, L − 1].

Figure 5(d) – the thresholding function was used, with r_1 = r_2 = m, s_1 = 0, s_2 = L − 1, where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.
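A sketch of the piecewise-linear stretch above, built on np.interp with knots at (0, 0), (r1, s1), (r2, s2), (L−1, L−1); parameter names mirror the formula, and strictly increasing 0 < r1 < r2 < L−1 is assumed:

import numpy as np

def contrast_stretch(img, r1, s1, r2, s2, L=256):
    # Piecewise-linear T(r) through (0, 0), (r1, s1), (r2, s2), (L-1, L-1).
    s = np.interp(img.astype(float), [0, r1, r2, L - 1], [0, s1, s2, L - 1])
    return np.uint8(s)

# Full-range stretch, as in Fig. 5(c):
# stretched = contrast_stretch(img, img.min(), 0, img.max(), 255)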


Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches to intensity-level slicing (a sketch of both follows the list):

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities (Figure 3.11(a));

2. brighten (or darken) the desired range of intensities but leave unchanged all other intensities in the image (Figure 3.11(b)).
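A minimal sketch of the two approaches, with hypothetical band limits A and B:

import numpy as np

def slice_binary(img, A, B, L=256):
    # Approach 1: one value (white) inside [A, B], another (black) elsewhere.
    return np.where((img >= A) & (img <= B), L - 1, 0).astype(np.uint8)

def slice_highlight(img, A, B, value=255):
    # Approach 2: set [A, B] to a chosen value, leave all other intensities unchanged.
    out = img.copy()
    out[(img >= A) & (img <= B)] = value
    return out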


Figure 6 (left) – aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the scale of intensities. This type of enhancement produces a binary image, which is useful for studying the shape of the flow of the contrast substance (to detect blockages, etc.).

(Of the two transformations of Figure 3.11: one highlights the intensity range [A, B] and reduces all other intensities to a lower level; the other highlights the range [A, B] and preserves all other intensities.)

In Figure 6 (right), the second technique was used: a band of intensities in the mid-gray range around the mean intensity was set to black; the other intensities remain unchanged.

Fig. 6 – Aortic angiogram and intensity-sliced versions


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2.

This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit; plane 8 – the highest-order bit).


The binary image for the 8th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all intensities between 0 and 127 to 0 and all levels between 128 and 255 to 1. The bit-slicing technique is useful for analyzing the relative importance of each bit in the image; this helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
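A sketch of bit-plane extraction (note that the text numbers planes 1–8, while the code indexes bits 0–7):

import numpy as np

def bit_plane(img, k):
    # Plane k of an 8-bit image: k = 0 is the lowest-order bit, k = 7 the highest.
    return (img >> k) & 1

# The 8th bit plane described above (intensities 128-255 -> 1) is bit_plane(img, 7).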

  • DIP 1 2017
  • DIP 02 (2017)
Page 82: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image Processing

Week 1

Fig 2 ndash Interpolation examples for zooming and shrinking (nearest neighbor linear bicubic)

Digital Image Processing

Week 1

Neighbors of a Pixel

A pixel p at coordinates (x y) has 4 horizontal and vertical neighbors horizontal vertical( 1 ) ( 1 ) ( 1) ( 1)x y x y x y x y

This set of pixels called the 4-neighbors of p denoted by N4 (p)

The 4 diagonal neighbors of p have coordinates ( 1 1) ( 1 1) ( 1 1) ( 1 1)x y x y x y x y

and are denoted ND(p)

The horizontal vertical and diagonal neighbors are called the 8-neighbors of p denoted

N8 (p)

If (xy) is on the border of the image some of the neighbor locations in ND(p) and N8(p)

fall outside the image

Digital Image Processing

Week 1

Adjacency Connectivity Regions Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image V 01 (V=0 V=1)

- in a gray-scale image with 256 possible gray-levels V can be any subset of 0255

We consider 3 types of adjacency

(a) 4-adjacency two pixels p and q with values from V are 4-adjacent if 4( )q N p

(b) 8-adjacency two pixels p and q with values from V are 8-adjacent if 8( )q N p

(c) m-adjacency (mixed adjacency) two pixels p and q with values from V are

m-adjacent if

4( )q N p or

( )Dq N p and the set 4 4( ) ( )N p N q has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency It is introduced to eliminate the

ambiguities that often arise when 8-adjacency is used Consider the example

Digital Image Processing

Week 1

binary image

0 1 1 0 1 1 0 1 1

1 0 1 0 0 1 0 0 1 0

0 0 1 0 0 1 0 0 1

V

The three pixels at the top (first line) in the above example show multiple (ambiguous)

8-adjacency as indicated by the dashed lines This ambiguity is removed by using

m-adjacency

Digital Image Processing

Week 1

A (digital) path (or curve) from pixel p with coordinates (xy) to q with coordinates (st)

is a sequence of distinct pixels with coordinates

and are adjacent 0 0 1 1

1 1

( ) ( ) ( ) ( ) ( )( ) ( ) 12

n n

i i i i

x y x y x y x y s tx y x y i n

The length of the path is n If 0 0( ) ( )n nx y x y the path is closed

Depending on the type of adjacency considered the paths are 4- 8- or m-paths

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image R is a region of the image if R is a connected set

Two regions R1 and R2 are said to be adjacent if 1 2R R form a connected set Regions

that are not adjacent are said to be disjoint When referring to regions only 4- and

8-adjacency are considered

Digital Image Processing

Week 1

Suppose that an image contains K disjoint regions 1 kR k K none of which

touches the image border

the complement of 1

( )K

cu k u u

k

R R R R

We call all the points in Ru the foreground of the image and the points in ( )cuR the

background of the image

The boundary (border or contour) of a region R is the set of points that are adjacent to

points in the complement of R (R)c The border of an image is the set of pixels in the

region that have at least one background neighbor This definition is referred to as the

inner border to distinguish it from the notion of outer border which is the corresponding

border in the background

Digital Image Processing

Week 1

Distance measures

For pixels p q and z with coordinates (xy) (st) and (vw) respectively D is a distance

function or metric if

(a) D(p q) ge 0 D(p q) = 0 iff p=q

(b) D(p q) = D(q p)

(c) D(p z) le D(p q) + D(q z)

The Euclidean distance between p and q is defined as 1

2 2 2 22( ) ( ) ( ) ( ) ( )eD p q x s y t x s y t

The pixels q for which ( )eD p q r are the points contained in a disk of radius r

centered at (x y)

Digital Image Processing

Week 1

The D4 distance (also called city-block distance) between p and q is defined as

4( ) | | | |D p q x s y t

The pixels q for which 4( )D p q r form a diamond centered at (xy)

4

22 1 2

2 2 1 0 1 22 1 2

2

D

The pixels with D4 = 1 are the 4-neighbors of (x y)

The D8 distance (called the chessboard distance) between p and q is defined as

8( ) max| | | |D p q x s y t

The pixels q for which 8( )D p q r form a square centered at (x y)

Digital Image Processing

Week 1

8

2 2 2 2 22 1 1 1 2

2 2 1 0 1 22 1 1 1 22 2 2 2 2

D

The pixels with D8 = 1 are the 8-neighbors of (x y)

D4 and D8 distances are independent of any paths that might exist between p and q

because these distances involve only the coordinates of the point

Digital Image Processing

Week 1

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

11 12 11 12

21 22 21 22

a a b ba a b b

Array product

11 12 11 12 11 11 12 12

21 22 21 22 21 21 22 21

a a b b a b a ba a b b a b a b

Matrix product

11 12 11 12 11 11 12 21 11 12 12 21

21 22 21 22 21 11 22 21 21 12 22 22

a a b b a b a b a b a ba a b b a b a b a b a b

We assume array operations unless stated otherwise

Digital Image Processing

Week 1

Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether it is

linear or nonlinear

( ) ( )H f x y g x y

H is said to be a linear operator if

images1 2 1 2

1 2

( ) ( ) ( ) ( )

H a f x y b f x y a H f x y b H f x y

a b f f

Example of nonlinear operator

the maximum value of the pixels of image max ( )H f f x y f

1 2

0 2 6 5 1 1

2 3 4 7f f a b

Digital Image Processing

Week 1

1 2

0 2 6 5 6 3max max 1 ( 1) max 2

2 3 4 7 2 4a f b f

0 2 6 51 max ( 1) max 3 ( 1)7 4

2 3 4 7

Arithmetic Operations in Image Processing

Let g(xy) denote a corrupted image formed by the addition of noise ( ) ( ) ( )g x y f x y x y

f(xy) ndash the noiseless image η(xy) the noise uncorrelated and has 0 average value

For a random variable z with mean m E[(z-m)2] is the variance (E( ∙ ) is the expected

value) The covariance of two random variables z1 and z2 is defined as E[(z1-m1) (z2-m2)]

The two random variables are uncorrelated when their covariance is 0

Digital Image Processing

Week 1

Objective reduce noise by adding a set of noisy images ( )ig x y (technique frequently

used in image enhancement)

1

1( ) ( )K

ii

g x y g x yK

If the noise satisfies the properties stated above we have

2 2( ) ( )

1( ) ( ) g x y x yE g x y f x yK

( ( ))E g x y is the expected value of g and and 2 2( ) ( )g x y x y are the variances of

and g respectively The standard deviation (square root of the variance) at any point in

the average image is

( ) ( )1

g x y x yK

Digital Image Processing

Week 1

As K increases the variability (as measured by the variance or the standard deviation) of

the pixel values at each location (x y) decreases Because ( ) ( )E g x y f x y this

means that ( )g x y approaches f(x y) as the number of noisy images used in the

averaging process increases

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis Figure 226(a) shows an 8-bit image in which corruption

was simulated by adding to it Gaussian noise with zero mean and a standard deviation of

64 intensity levels Figures 226(b)-(f) show the result of averaging 5 10 20 50 and 100

images respectively

Digital Image Processing

Week 1

Fig 3 Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner) Results of averaging 5 10 20 50

100 noisy images

a b c d e f

Digital Image Processing

Week 1

A frequent application of image subtraction is in the enhancement of differences between

images

(a) (b) (c)

Fig 4 (a) Infrared image of Washington DC area (b) Image obtained from (a) by setting to zero the least

significant bit of each pixel (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a) The two images seem almost the same Figure 4(c) is the difference between

images (a) and (b) Black (0) values in Figure (c) indicate locations where there is no

difference between images (a) and (b)

Digital Image Processing

Week 1

Mask mode radiography ( ) ( ) ( )g x y f x y h x y

h(x y) the mask is an X-ray image of a region of a patientrsquos body captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source The procedure consists of injecting an X-ray contrast medium into the patientrsquos

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed

Digital Image Processing

Week 1

a b c d Fig 5 ndash Angiography ndash subtraction example (a) ndash mask image (b) ndash live image (c) ndash difference between (a) and (b) (d) - image (c) enhanced

Digital Image Processing

Week 1

An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form ( ) ( ) ( )g x y f x y h x y

f(x y) ndash the ldquoperfect imagerdquo h(x y) ndash the shading function

When the shading function is known

( )( )( )

g x yf x yh x y

h(x y) is unknown but we have access to the imaging system we can obtain an

approximation to the shading function by imaging a target of constant intensity When the

sensor is not available often the shading pattern can be estimated from the image

Digital Image Processing

Week 1

(a) (b) (c)

Fig 6 Shading correction (a) ndash Shaded image of a tungsten filament magnified 130 times (b) - shading pattern (c) corrected image

Digital Image Processing

Week 1

Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1rsquos (white) in the ROI and 0rsquos elsewhere There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

(a) (b) (c)

Fig 7 (a) ndash digital dental X-ray image (b) - ROI mask for teeth with fillings (c) product of (a) and (b)

Digital Image Processing

Week 1

In practice most images are displayed using 8 bits the image values are in the range [0255] TIFF JPEG images ndash conversion to this range is automatic The conversion depends on the system used Difference of two images can produce image with values in the range [-255255] Addition of two images ndash range [0510] Many software packages simply set the negative values to 0 and set to 255 all values greater than 255 A more appropriate procedure compute

min( )mf f f

0 ( 255)max( )

ms

m

ff K K K

f

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Digital Image Processing

Week 1

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

( )

1( ) ( )xyr c S

g x y f r cm n

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation ( ) [( )]x y T v w

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image

Digital Image Processing

Week 1

[( )] ( )2 2v wT v w shrinks the original image half its size in both spatial directions

Affine transform

11 1211 21 31

21 2212 22 33

31 32

0[ 1] [ 1] [ 1] 0

1

t tx t v t w t

x y v w T v w t ty t v t w t

t t

(AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

Digital Image Processing

Week 1

inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image (v w) 1( ) ( )v w T x y

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 83: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image Processing

Week 1

Neighbors of a Pixel

A pixel p at coordinates (x y) has 4 horizontal and vertical neighbors horizontal vertical( 1 ) ( 1 ) ( 1) ( 1)x y x y x y x y

This set of pixels called the 4-neighbors of p denoted by N4 (p)

The 4 diagonal neighbors of p have coordinates ( 1 1) ( 1 1) ( 1 1) ( 1 1)x y x y x y x y

and are denoted ND(p)

The horizontal vertical and diagonal neighbors are called the 8-neighbors of p denoted

N8 (p)

If (xy) is on the border of the image some of the neighbor locations in ND(p) and N8(p)

fall outside the image

Digital Image Processing

Week 1

Adjacency Connectivity Regions Boundaries

Denote by V the set of intensity levels used to define adjacency

- in a binary image V 01 (V=0 V=1)

- in a gray-scale image with 256 possible gray-levels V can be any subset of 0255

We consider 3 types of adjacency

(a) 4-adjacency two pixels p and q with values from V are 4-adjacent if 4( )q N p

(b) 8-adjacency two pixels p and q with values from V are 8-adjacent if 8( )q N p

(c) m-adjacency (mixed adjacency) two pixels p and q with values from V are

m-adjacent if

4( )q N p or

( )Dq N p and the set 4 4( ) ( )N p N q has no pixels whose values are from V

Mixed adjacency is a modification of 8-adjacency It is introduced to eliminate the

ambiguities that often arise when 8-adjacency is used Consider the example

Digital Image Processing

Week 1

binary image

0 1 1 0 1 1 0 1 1

1 0 1 0 0 1 0 0 1 0

0 0 1 0 0 1 0 0 1

V

The three pixels at the top (first line) in the above example show multiple (ambiguous)

8-adjacency as indicated by the dashed lines This ambiguity is removed by using

m-adjacency

Digital Image Processing

Week 1

A (digital) path (or curve) from pixel p with coordinates (xy) to q with coordinates (st)

is a sequence of distinct pixels with coordinates

and are adjacent 0 0 1 1

1 1

( ) ( ) ( ) ( ) ( )( ) ( ) 12

n n

i i i i

x y x y x y x y s tx y x y i n

The length of the path is n If 0 0( ) ( )n nx y x y the path is closed

Depending on the type of adjacency considered the paths are 4- 8- or m-paths

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image R is a region of the image if R is a connected set

Two regions R1 and R2 are said to be adjacent if 1 2R R form a connected set Regions

that are not adjacent are said to be disjoint When referring to regions only 4- and

8-adjacency are considered

Digital Image Processing

Week 1

Suppose that an image contains K disjoint regions 1 kR k K none of which

touches the image border

the complement of 1

( )K

cu k u u

k

R R R R

We call all the points in Ru the foreground of the image and the points in ( )cuR the

background of the image

The boundary (border or contour) of a region R is the set of points that are adjacent to

points in the complement of R (R)c The border of an image is the set of pixels in the

region that have at least one background neighbor This definition is referred to as the

inner border to distinguish it from the notion of outer border which is the corresponding

border in the background

Digital Image Processing

Week 1

Distance measures

For pixels p q and z with coordinates (xy) (st) and (vw) respectively D is a distance

function or metric if

(a) D(p q) ge 0 D(p q) = 0 iff p=q

(b) D(p q) = D(q p)

(c) D(p z) le D(p q) + D(q z)

The Euclidean distance between p and q is defined as 1

2 2 2 22( ) ( ) ( ) ( ) ( )eD p q x s y t x s y t

The pixels q for which ( )eD p q r are the points contained in a disk of radius r

centered at (x y)

Digital Image Processing

Week 1

The D4 distance (also called city-block distance) between p and q is defined as

4( ) | | | |D p q x s y t

The pixels q for which 4( )D p q r form a diamond centered at (xy)

4

22 1 2

2 2 1 0 1 22 1 2

2

D

The pixels with D4 = 1 are the 4-neighbors of (x y)

The D8 distance (called the chessboard distance) between p and q is defined as

8( ) max| | | |D p q x s y t

The pixels q for which 8( )D p q r form a square centered at (x y)

Digital Image Processing

Week 1

8

2 2 2 2 22 1 1 1 2

2 2 1 0 1 22 1 1 1 22 2 2 2 2

D

The pixels with D8 = 1 are the 8-neighbors of (x y)

D4 and D8 distances are independent of any paths that might exist between p and q

because these distances involve only the coordinates of the point

Digital Image Processing

Week 1

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

11 12 11 12

21 22 21 22

a a b ba a b b

Array product

11 12 11 12 11 11 12 12

21 22 21 22 21 21 22 21

a a b b a b a ba a b b a b a b

Matrix product

11 12 11 12 11 11 12 21 11 12 12 21

21 22 21 22 21 11 22 21 21 12 22 22

a a b b a b a b a b a ba a b b a b a b a b a b

We assume array operations unless stated otherwise

Digital Image Processing

Week 1

Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether it is

linear or nonlinear

( ) ( )H f x y g x y

H is said to be a linear operator if

images1 2 1 2

1 2

( ) ( ) ( ) ( )

H a f x y b f x y a H f x y b H f x y

a b f f

Example of nonlinear operator

the maximum value of the pixels of image max ( )H f f x y f

1 2

0 2 6 5 1 1

2 3 4 7f f a b

Digital Image Processing

Week 1

1 2

0 2 6 5 6 3max max 1 ( 1) max 2

2 3 4 7 2 4a f b f

0 2 6 51 max ( 1) max 3 ( 1)7 4

2 3 4 7

Arithmetic Operations in Image Processing

Let g(xy) denote a corrupted image formed by the addition of noise ( ) ( ) ( )g x y f x y x y

f(xy) ndash the noiseless image η(xy) the noise uncorrelated and has 0 average value

For a random variable z with mean m E[(z-m)2] is the variance (E( ∙ ) is the expected

value) The covariance of two random variables z1 and z2 is defined as E[(z1-m1) (z2-m2)]

The two random variables are uncorrelated when their covariance is 0

Digital Image Processing

Week 1

Objective reduce noise by adding a set of noisy images ( )ig x y (technique frequently

used in image enhancement)

1

1( ) ( )K

ii

g x y g x yK

If the noise satisfies the properties stated above we have

2 2( ) ( )

1( ) ( ) g x y x yE g x y f x yK

( ( ))E g x y is the expected value of g and and 2 2( ) ( )g x y x y are the variances of

and g respectively The standard deviation (square root of the variance) at any point in

the average image is

( ) ( )1

g x y x yK

Digital Image Processing

Week 1

As K increases the variability (as measured by the variance or the standard deviation) of

the pixel values at each location (x y) decreases Because ( ) ( )E g x y f x y this

means that ( )g x y approaches f(x y) as the number of noisy images used in the

averaging process increases

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis Figure 226(a) shows an 8-bit image in which corruption

was simulated by adding to it Gaussian noise with zero mean and a standard deviation of

64 intensity levels Figures 226(b)-(f) show the result of averaging 5 10 20 50 and 100

images respectively

Digital Image Processing

Week 1

Fig 3 Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner) Results of averaging 5 10 20 50

100 noisy images

a b c d e f

Digital Image Processing

Week 1

A frequent application of image subtraction is in the enhancement of differences between

images

(a) (b) (c)

Fig 4 (a) Infrared image of Washington DC area (b) Image obtained from (a) by setting to zero the least

significant bit of each pixel (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a) The two images seem almost the same Figure 4(c) is the difference between

images (a) and (b) Black (0) values in Figure (c) indicate locations where there is no

difference between images (a) and (b)

Digital Image Processing

Week 1

Mask mode radiography ( ) ( ) ( )g x y f x y h x y

h(x y) the mask is an X-ray image of a region of a patientrsquos body captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source The procedure consists of injecting an X-ray contrast medium into the patientrsquos

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed

Digital Image Processing

Week 1

a b c d Fig 5 ndash Angiography ndash subtraction example (a) ndash mask image (b) ndash live image (c) ndash difference between (a) and (b) (d) - image (c) enhanced

Digital Image Processing

Week 1

An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form ( ) ( ) ( )g x y f x y h x y

f(x y) ndash the ldquoperfect imagerdquo h(x y) ndash the shading function

When the shading function is known

( )( )( )

g x yf x yh x y

h(x y) is unknown but we have access to the imaging system we can obtain an

approximation to the shading function by imaging a target of constant intensity When the

sensor is not available often the shading pattern can be estimated from the image

Digital Image Processing

Week 1

(a) (b) (c)

Fig 6 Shading correction (a) ndash Shaded image of a tungsten filament magnified 130 times (b) - shading pattern (c) corrected image

Digital Image Processing

Week 1

Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1rsquos (white) in the ROI and 0rsquos elsewhere There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

(a) (b) (c)

Fig 7 (a) ndash digital dental X-ray image (b) - ROI mask for teeth with fillings (c) product of (a) and (b)

Digital Image Processing

Week 1

In practice most images are displayed using 8 bits the image values are in the range [0255] TIFF JPEG images ndash conversion to this range is automatic The conversion depends on the system used Difference of two images can produce image with values in the range [-255255] Addition of two images ndash range [0510] Many software packages simply set the negative values to 0 and set to 255 all values greater than 255 A more appropriate procedure compute

min( )mf f f

0 ( 255)max( )

ms

m

ff K K K

f

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Digital Image Processing

Week 1

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

( )

1( ) ( )xyr c S

g x y f r cm n

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation ( ) [( )]x y T v w

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image

Digital Image Processing

Week 1

[( )] ( )2 2v wT v w shrinks the original image half its size in both spatial directions

Affine transform

11 1211 21 31

21 2212 22 33

31 32

0[ 1] [ 1] [ 1] 0

1

t tx t v t w t

x y v w T v w t ty t v t w t

t t

(AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

Digital Image Processing

Week 1

inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image (v w) 1( ) ( )v w T x y

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1


Adjacency, Connectivity, Regions, Boundaries

Denote by V the set of intensity levels used to define adjacency:

- in a binary image, V is a subset of {0, 1} (V = {0} or V = {1});
- in a gray-scale image with 256 possible gray levels, V can be any subset of {0, 1, ..., 255}.

We consider 3 types of adjacency:

(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if q ∈ N4(p);
(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if q ∈ N8(p);
(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if
    q ∈ N4(p), or
    q ∈ ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.

Mixed adjacency is a modification of 8-adjacency. It is introduced to eliminate the ambiguities that often arise when 8-adjacency is used. Consider the following example:

Binary image, with V = {1}:

    0 1 1
    0 1 0
    0 0 1

The three pixels at the top (first line) of the above example show multiple (ambiguous) 8-adjacency, as indicated by the dashed lines in the original figure. This ambiguity is removed by using m-adjacency.

A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

(x0, y0), (x1, y1), ..., (xn, yn)

where (x0, y0) = (x, y), (xn, yn) = (s, t), and the pixels (xi, yi) and (xi-1, yi-1) are adjacent for i = 1, 2, ..., n.

The length of the path is n. If (x0, y0) = (xn, yn), the path is closed.

Depending on the type of adjacency considered, the paths are 4-, 8-, or m-paths.

Let S denote a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting only of pixels from S.

S is a connected set if there is a path in S between any 2 pixels of S.

Let R be a subset of pixels in an image. R is a region of the image if R is a connected set.

Two regions R1 and R2 are said to be adjacent if R1 ∪ R2 forms a connected set. Regions that are not adjacent are said to be disjoint. When referring to regions, only 4- and 8-adjacency are considered.

Suppose that an image contains K disjoint regions Rk, k = 1, ..., K, none of which touches the image border. Let Ru denote the union of all K regions, Ru = R1 ∪ R2 ∪ ... ∪ RK, and let (Ru)^c denote its complement. We call all the points in Ru the foreground of the image, and the points in (Ru)^c the background of the image.

The boundary (border, or contour) of a region R is the set of points of R that are adjacent to points in the complement of R, (R)^c. In other words, the border of a region is the set of pixels in the region that have at least one background neighbor. This definition is referred to as the inner border, to distinguish it from the notion of outer border, which is the corresponding border in the background.

Distance measures

For pixels p, q and z, with coordinates (x, y), (s, t) and (v, w) respectively, D is a distance function (or metric) if

(a) D(p, q) ≥ 0, and D(p, q) = 0 iff p = q;
(b) D(p, q) = D(q, p);
(c) D(p, z) ≤ D(p, q) + D(q, z).

The Euclidean distance between p and q is defined as

De(p, q) = [(x - s)^2 + (y - t)^2]^(1/2)

The pixels q for which De(p, q) ≤ r are the points contained in a disk of radius r centered at (x, y).

The D4 distance (also called city-block distance) between p and q is defined as

D4(p, q) = |x - s| + |y - t|

The pixels q for which D4(p, q) ≤ r form a diamond centered at (x, y). For example, the pixels with D4 ≤ 2:

        2
      2 1 2
    2 1 0 1 2
      2 1 2
        2

The pixels with D4 = 1 are the 4-neighbors of (x, y).

The D8 distance (also called chessboard distance) between p and q is defined as

D8(p, q) = max(|x - s|, |y - t|)

The pixels q for which D8(p, q) ≤ r form a square centered at (x, y). For example, the pixels with D8 ≤ 2:

    2 2 2 2 2
    2 1 1 1 2
    2 1 0 1 2
    2 1 1 1 2
    2 2 2 2 2

The pixels with D8 = 1 are the 8-neighbors of (x, y).

The D4 and D8 distances are independent of any paths that might exist between p and q, because these distances involve only the coordinates of the points.
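These three distances can be written directly in code. A minimal Python sketch (the function names are illustrative):

import math

def d_euclid(p, q):
    # De(p, q) = ((x - s)^2 + (y - t)^2)^(1/2)
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d4(p, q):
    # city-block distance: |x - s| + |y - t|
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):
    # chessboard distance: max(|x - s|, |y - t|)
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (2, 1)
print(d_euclid(p, q), d4(p, q), d8(p, q))   # 2.236..., 3, 2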

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis. Consider two 2 x 2 images:

    [a11 a12]        [b11 b12]
    [a21 a22]        [b21 b22]

Array product:

    [a11 a12]   [b11 b12]   [a11*b11  a12*b12]
    [a21 a22] * [b21 b22] = [a21*b21  a22*b22]

Matrix product:

    [a11 a12]   [b11 b12]   [a11*b11 + a12*b21   a11*b12 + a12*b22]
    [a21 a22] x [b21 b22] = [a21*b11 + a22*b21   a21*b12 + a22*b22]

We assume array operations throughout, unless stated otherwise.
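In NumPy, for example, the two products are distinct operators: * is the array (element-wise) product and @ is the matrix product. A quick sketch:

import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A * B)   # array product:  [[ 5 12], [21 32]]
print(A @ B)   # matrix product: [[19 22], [43 50]]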

Linear versus Nonlinear Operations

One of the most important classifications of an image-processing method is whether it is linear or nonlinear. Consider an operator H that produces an output image g(x, y) from an input image f(x, y):

H[f(x, y)] = g(x, y)

H is said to be a linear operator if, for any two images f1, f2 and any two scalars a, b:

H[a*f1(x, y) + b*f2(x, y)] = a*H[f1(x, y)] + b*H[f2(x, y)]

Example of a nonlinear operator: the maximum value of the pixels of an image, H[f] = max{ f(x, y) }.

Take

    f1 = [0 2]    f2 = [6 5]    a = 1,  b = -1
         [2 3]         [4 7]

Then

max{ 1*f1 + (-1)*f2 } = max{ [-6 -3] } = -2
                             [-2 -4]

while

1*max{ f1 } + (-1)*max{ f2 } = 3 - 7 = -4 ≠ -2

so the max operator is nonlinear.
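The same computation, checked numerically (Python/NumPy sketch):

import numpy as np

f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 5], [4, 7]])
a, b = 1, -1

lhs = np.max(a * f1 + b * f2)          # H[a f1 + b f2] = -2
rhs = a * np.max(f1) + b * np.max(f2)  # a H[f1] + b H[f2] = -4
print(lhs, rhs)                        # -2 -4  -> max is not linear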

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise:

g(x, y) = f(x, y) + η(x, y)

where f(x, y) is the noiseless image and η(x, y) is the noise, assumed uncorrelated and with zero average value.

For a random variable z with mean m, E[(z - m)^2] is the variance (E(·) denotes the expected value). The covariance of two random variables z1 and z2 is defined as E[(z1 - m1)(z2 - m2)]. The two random variables are uncorrelated when their covariance is 0.

Objective: reduce noise by averaging a set of K noisy images gi(x, y) (a technique frequently used in image enhancement):

ḡ(x, y) = (1/K) Σ_{i=1}^{K} gi(x, y)

If the noise satisfies the properties stated above, then

E{ḡ(x, y)} = f(x, y)   and   σ²_ḡ(x,y) = (1/K) σ²_η(x,y)

where E{ḡ(x, y)} is the expected value of ḡ, and σ²_ḡ(x,y) and σ²_η(x,y) are the variances of ḡ and η, respectively. The standard deviation (square root of the variance) at any point in the average image is

σ_ḡ(x,y) = (1/√K) σ_η(x,y)

As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because E{ḡ(x, y)} = f(x, y), this means that ḡ(x, y) approaches f(x, y) as the number of noisy images used in the averaging process increases.

An important application of image averaging is in the field of astronomy, where imaging under very low light levels frequently causes sensor noise to render single images virtually useless for analysis. Figure 3(a) shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels. Figures 3(b)-(f) show the results of averaging 5, 10, 20, 50 and 100 such images, respectively.
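A small simulation sketch of this behavior (Python/NumPy; f stands in for any grayscale image, and the noise parameters mirror the example above):

import numpy as np

rng = np.random.default_rng(0)
f = rng.uniform(0, 255, size=(256, 256))   # stand-in "clean" image

def averaged(f, K, sigma=64.0):
    # average K independently corrupted copies of f
    noisy = [f + rng.normal(0, sigma, f.shape) for _ in range(K)]
    return np.mean(noisy, axis=0)

for K in (1, 5, 10, 20, 50, 100):
    g_bar = averaged(f, K)
    print(K, np.std(g_bar - f))   # residual noise ~ 64 / sqrt(K)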

Fig. 3 – (a) Image of the galaxy pair NGC 3314 corrupted by additive Gaussian noise; (b)-(f) results of averaging 5, 10, 20, 50, and 100 noisy images.

A frequent application of image subtraction is in the enhancement of differences between images.

Fig. 4 – (a) Infrared image of the Washington, D.C. area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) difference between the two images.

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between images (a) and (b); black (0) values in Figure 4(c) indicate locations where there is no difference between images (a) and (b).

Mask mode radiography:

g(x, y) = f(x, y) - h(x, y)

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images, called live images (denoted f(x, y)), of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium.

In g(x, y) the differences between h and f appear as enhanced detail.

Since the images are captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries in the area being observed.

Fig. 5 – Angiography subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c), enhanced.

An important application of image multiplication (and division) is shading correction. Suppose that an imaging sensor produces images in the form

g(x, y) = f(x, y) h(x, y)

where f(x, y) is the "perfect image" and h(x, y) is the shading function. When the shading function is known,

f(x, y) = g(x, y) / h(x, y)

If h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.
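A sketch of the last case, estimating the shading pattern by heavy low-pass filtering of the image itself (the Gaussian-blur estimate and the sigma value are assumptions, not the only choice; uses SciPy):

import numpy as np
from scipy.ndimage import gaussian_filter

def shading_correct(g, sigma=50):
    # estimate the slowly varying shading function h(x, y) by blurring g
    h = gaussian_filter(g.astype(float), sigma)
    h = np.maximum(h, 1e-6)    # avoid division by zero
    f = g / h                  # f(x, y) = g(x, y) / h(x, y)
    return f / f.max()         # rescale to [0, 1]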

Fig. 6 – Shading correction: (a) shaded image of a tungsten filament, magnified approximately 130 times; (b) shading pattern; (c) corrected image.

Another use of image multiplication is in masking, also called region of interest (ROI) operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of an ROI can be arbitrary, although a rectangular shape is the most common.

Fig. 7 – (a) Digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b).
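A sketch of ROI masking with a single rectangular region (coordinates are illustrative):

import numpy as np

def apply_roi(image, top, left, height, width):
    # mask image: 1 inside the ROI, 0 elsewhere
    mask = np.zeros_like(image)
    mask[top:top + height, left:left + width] = 1
    return image * mask    # array product keeps only the ROI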

In practice, most images are displayed using 8 bits, so image values are expected to lie in the range [0, 255]. For TIFF and JPEG images, conversion to this range is automatic; the conversion depends on the system used. However, the difference of two images can produce values in the range [-255, 255], and the addition of two images values in the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure is to compute

fm = f - min(f)

fs = K [fm / max(fm)]

which yields values in the range [0, K] (K = 255 for 8-bit displays).
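The same procedure as a Python sketch (K = 255 for 8-bit display):

import numpy as np

def rescale(f, K=255):
    fm = f - f.min()                     # fm = f - min(f), values start at 0
    fs = K * fm / max(fm.max(), 1e-12)   # fs = K * fm / max(fm), values in [0, K]
    return fs.astype(np.uint8)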

Spatial Operations

- are performed directly on the pixels of a given image.

There are three categories of spatial operations:

- single-pixel operations
- neighborhood operations
- geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels:

s = T(z)

where z is the intensity of a pixel in the original image and s is the intensity of the corresponding pixel in the processed image.

Neighborhood operations

Let Sxy denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y) based on the values of the intensities of the points in Sxy. For example, if Sxy is a rectangular neighborhood of size m x n centered at (x, y), we can assign the new value of intensity by computing the average value of the pixels in Sxy:

g(x, y) = (1 / mn) Σ_{(r,c) ∈ Sxy} f(r, c)

The net effect is to perform local blurring of the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of the image.
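A direct (unoptimized) sketch of this neighborhood average for an m x n window, clipping the window at the image borders:

import numpy as np

def local_mean(f, m=3, n=3):
    M, N = f.shape
    g = np.zeros_like(f, dtype=float)
    for x in range(M):
        for y in range(N):
            r0, r1 = max(0, x - m // 2), min(M, x + m // 2 + 1)
            c0, c1 = max(0, y - n // 2), min(N, y + n // 2 + 1)
            g[x, y] = f[r0:r1, c0:c1].mean()   # average over Sxy
    return g

In practice one would use a library convolution (e.g., scipy.ndimage.uniform_filter) rather than explicit loops.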

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image;
- these transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules).

A geometric transformation consists of 2 basic operations:

1. a spatial transformation of coordinates;
2. intensity interpolation, which assigns intensity values to the spatially transformed pixels.

The coordinate system transformation:

(x, y) = T[(v, w)]

(v, w) – pixel coordinates in the original image;
(x, y) – pixel coordinates in the transformed image.

T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform:

[x y 1] = [v w 1] T,   where   T = [ t11  t12  0 ]
                                   [ t21  t22  0 ]
                                   [ t31  t32  1 ]

that is,

x = t11 v + t21 w + t31
y = t12 v + t22 w + t32          (AT)

This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3 x 3 matrix equal to the matrix product of the scaling, rotation, and translation matrices from Table 1.
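A sketch of forming such a combined matrix in the row-vector convention [x y 1] = [v w 1] T used above (parameter values are illustrative):

import numpy as np

def scale(cx, cy):
    return np.array([[cx, 0, 0], [0, cy, 0], [0, 0, 1.0]])

def rotate(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1.0]])

def translate(tx, ty):
    return np.array([[1, 0, 0], [0, 1, 0], [tx, ty, 1.0]])

# resize, then rotate, then move: one combined matrix
T = scale(0.5, 0.5) @ rotate(np.pi / 4) @ translate(100, 50)

v, w = 10.0, 20.0                    # an input pixel location
x, y, _ = np.array([v, w, 1.0]) @ T  # [x y 1] = [v w 1] T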

Table 1 – Affine transformation matrices (identity, scaling, rotation, translation, shearing).

The preceding transformations relocate pixels of an image to new locations. To complete the process, we have to assign intensity values to those locations. This task is done using intensity interpolation (nearest-neighbor, bilinear, or bicubic interpolation).

In practice, we can use equation (AT) in two basic ways:

forward mapping: scan the pixels of the input image and, at each location (v, w), compute the spatial location (x, y) of the corresponding pixel in the output image using (AT) directly.

Problems:

- intensity assignment when 2 or more pixels of the original image are transformed to the same location in the output image;
- some output locations have no correspondent in the original image (no intensity assignment).

inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image,

(v, w) = T^{-1}[(x, y)]

then interpolate among the nearest input pixels to determine the intensity of the output pixel value.

Inverse mappings are more efficient to implement than forward mappings, and are used in numerous commercial implementations of spatial transformations (MATLAB, for example).

Image registration – align two or more images of the same scene.

In image registration we have available the input and output images, but the specific transformation that produced the output image from the input is generally unknown. The problem is to estimate the transformation function and then use it to register the two images:

- it may be of interest to align (register) two or more images taken at approximately the same time, but using different imaging systems (an MRI scanner and a PET scanner, for example);
- or to align images of a given location taken by the same instrument at different moments in time (satellite images).

The problem is solved using tie points (also called control points), which are corresponding points whose locations are known precisely in the input and reference images.

How to select tie points:

- by selecting them interactively;
- by using algorithms that try to detect these points automatically;
- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors; these objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points.

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points, both in the input image and in the reference image. A simple model based on a bilinear approximation is given by

x = c1 v + c2 w + c3 v w + c4
y = c5 v + c6 w + c7 v w + c8

where (v, w) and (x, y) are the coordinates of corresponding tie points (the 4 pairs of tie points yield an 8 x 8 linear system for the coefficients ci).
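A sketch of estimating the eight coefficients; since the equations for x and y decouple, the 8 x 8 system splits into two 4 x 4 solves (the tie-point values are illustrative):

import numpy as np

# 4 tie points in the input image (v, w) and in the reference image (x, y)
vw = np.array([[10, 10], [10, 200], [200, 10], [200, 200]], float)
xy = np.array([[12, 15], [14, 210], [205, 12], [212, 208]], float)

# rows: [v, w, v*w, 1] for x = c1 v + c2 w + c3 v w + c4 (same basis for y)
A = np.column_stack([vw[:, 0], vw[:, 1], vw[:, 0] * vw[:, 1], np.ones(4)])
cx = np.linalg.solve(A, xy[:, 0])   # c1..c4
cy = np.linalg.solve(A, xy[:, 1])   # c5..c8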

When 4 tie points are insufficient to obtain a satisfactory registration, an approach used frequently is to select a larger number of tie points and, using this new set, to subdivide the image into rectangular subregions marked by groups of 4 tie points; on each subregion we apply the transformation model described above. The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometric distortion.

(a) – reference image; (b) – geometrically distorted image; (c) – registered image; (d) – difference between (a) and (c).

Probabilistic Methods

zi, i = 0, 1, ..., L-1 – the values of all possible intensities in an M x N digital image;
p(zk) – the probability that the intensity level zk occurs in the given image:

p(zk) = nk / (M N)

where nk is the number of times the intensity zk occurs in the image (MN is the total number of pixels in the image). Note that

Σ_{k=0}^{L-1} p(zk) = 1

The mean (average) intensity of an image is given by

m = Σ_{k=0}^{L-1} zk p(zk)

The variance of the intensities is

σ² = Σ_{k=0}^{L-1} (zk - m)² p(zk)

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

μn(z) = Σ_{k=0}^{L-1} (zk - m)^n p(zk)

(note that μ0(z) = 1, μ1(z) = 0, and μ2(z) = σ²)

μ3(z) > 0 – the intensities are biased to values higher than the mean;
μ3(z) < 0 – the intensities are biased to values lower than the mean;
μ3(z) = 0 – the intensities are distributed approximately equally on both sides of the mean.
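These quantities, computed from the image histogram (Python sketch; img is assumed to be an integer array with values in [0, L-1]):

import numpy as np

def intensity_moments(img, L=256):
    nk = np.bincount(img.ravel(), minlength=L)
    p = nk / nk.sum()                  # p(z_k) = n_k / (M N)
    z = np.arange(L)
    m = np.sum(z * p)                  # mean intensity
    var = np.sum((z - m) ** 2 * p)     # variance (mu_2)
    mu3 = np.sum((z - m) ** 3 * p)     # third moment: sign gives the skew
    return m, var, mu3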

Fig. 1 – (a) Low contrast; (b) medium contrast; (c) high contrast.

Figure 1(a) – standard deviation 14.3 (variance = 204.5)
Figure 1(b) – standard deviation 31.6 (variance = 998.6)
Figure 1(c) – standard deviation 49.2 (variance = 2420.6)

Intensity Transformations and Spatial Filtering

g(x, y) = T[f(x, y)]

f(x, y) – input image; g(x, y) – output image; T – an operator on f, defined over a neighborhood of (x, y).

- the neighborhood of the point (x, y), Sxy, usually is rectangular, centered on (x, y), and much smaller in size than the image.

- spatial filtering: the operator T (the neighborhood and the operation applied on it) is called a spatial filter (also: spatial mask, kernel, template, or window).

When Sxy = {(x, y)} (the neighborhood contains only the point itself), T becomes an intensity (gray-level, or mapping) transformation function,

s = T(r)

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2, left: T produces an output image of higher contrast than the original, by darkening the intensity levels below k and brightening the levels above k; this technique is called contrast stretching.

Fig. 2 – Intensity transformation functions: (left) contrast stretching; (right) thresholding function.

Figure 2, right: T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L-1] is obtained using the function

s = T(r) = L - 1 - r

- the equivalent of a photographic negative;
- a technique suited for enhancing white or gray detail embedded in dark regions of an image.

Original image (left) and its negative (right).

Log Transformations

s = T(r) = c log(1 + r),   c a positive constant, r ≥ 0

Figure: some basic intensity transformation functions.

This transformation maps a narrow range of low intensity values in the input into a wider range of output levels. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values; the opposite is true of the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a) – intensity values in the range 0 to 1.5 x 10^6.
Figure 4(b) – log transformation of Figure 4(a) with c = 1 – range 0 to 6.2.

Fig. 4 – (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.

Power-Law (Gamma) Transformations

s = T(r) = c r^γ,   c and γ positive constants

(sometimes written s = c (r + ε)^γ, to account for an offset when the input is zero)

Figure: plots of the gamma transformation for different values of γ (c = 1).

Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. The curves with γ > 1 have the opposite effect of those generated with γ < 1.

c = γ = 1: the identity transformation.

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
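The negative, log, and gamma transformations side by side, as a sketch for 8-bit data (the scaling conventions are one reasonable choice, not the only one):

import numpy as np

def negative(r, L=256):
    return (L - 1) - r                     # s = L - 1 - r

def log_transform(r, c=1.0):
    return c * np.log1p(r.astype(float))   # s = c * log(1 + r)

def gamma_transform(r, gamma, c=1.0, L=256):
    r_norm = r.astype(float) / (L - 1)     # work on [0, 1]
    return c * (L - 1) * r_norm ** gamma   # s = c * r^gamma, rescaled to [0, L-1]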

(a) – aerial image; (b)–(d) – results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively.

Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image, so that it spans the full intensity range of the recording tool or display device.

Fig. 5 – (a) piecewise-linear transformation function; (b) low-contrast 8-bit image; (c) result of contrast stretching; (d) result of thresholding.

The transformation is defined by the control points (r1, s1) and (r2, s2), with 0 ≤ r1 ≤ r2 ≤ L-1:

T(r) = (s1 / r1) r,                                  r ∈ [0, r1]
T(r) = s1 + ((s2 - s1) / (r2 - r1)) (r - r1),        r ∈ (r1, r2]
T(r) = s2 + ((L - 1 - s2) / (L - 1 - r2)) (r - r2),  r ∈ (r2, L - 1]

If r1 = s1 and r2 = s2, the transformation is the identity (no change).

If r1 = r2 = m, s1 = 0 and s2 = L - 1, the transformation is a thresholding function.

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c) – contrast stretching, obtained by setting the parameters r1 = rmin, s1 = 0, r2 = rmax, s2 = L - 1, where rmin and rmax denote the minimum and maximum gray levels in the image, respectively. The transformation function thus stretched the levels linearly from their original range to the full range [0, L-1].

Figure 5(d) – the thresholding function was used, with r1 = r2 = m, s1 = 0, s2 = L - 1, where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.
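A sketch of this transformation: np.interp implements exactly such a piecewise-linear map through the control points (0, 0), (r1, s1), (r2, s2), (L-1, L-1), assuming 0 < r1 < r2 < L-1 (f stands for the input image array):

import numpy as np

def contrast_stretch(f, r1, s1, r2, s2, L=256):
    # piecewise-linear map through (0,0), (r1,s1), (r2,s2), (L-1,L-1)
    return np.interp(f, [0, r1, r2, L - 1], [0, s1, s2, L - 1])

# Fig. 5(c): full-range stretching
g = contrast_stretch(f, f.min(), 0, f.max(), 255)

# Fig. 5(d): thresholding at the mean m (r1 = r2 would break interp,
# so the threshold is written directly)
m = f.mean()
t = np.where(f > m, 255, 0)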

Intensity-level slicing

- highlighting a specific range of intensities in an image.

There are two approaches to intensity-level slicing:

1. display in one value (white, for example) all the values in the range of interest [A, B], and in another (say, black) all other intensities – the corresponding transformation function highlights the range [A, B] and reduces all other intensities to a lower level;
2. brighten (or darken) the desired range of intensities [A, B], but leave unchanged all other intensities in the image – the corresponding transformation function highlights the range [A, B] and preserves all other intensities.

Figure 6 (left) – aortic angiogram near the kidney area. The purpose of intensity slicing here is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique to a band near the top of the intensity scale. This type of enhancement produces a binary image, which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example).

In Figure 6 (right), the second technique was used: a band of intensities around the mean gray level of the image was set to black, while the other intensities remain unchanged.

Fig. 6 – Aortic angiogram and intensity-sliced versions.

Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2. This technique highlights the contribution made to the whole image appearance by each of these bits: an 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit, plane 8 – the highest-order bit).

The binary image for the 8th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding transformation function that maps all intensities between 0 and 127 to 0, and all levels between 128 and 255 to 1.

Bit-plane slicing is useful for analyzing the relative importance of each bit in the image, which helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression, since an image can be approximately reconstructed from only its higher-order planes.
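Extracting the planes as a sketch (img is assumed to be a uint8 array; planes[0] is bit plane 1, the least significant):

import numpy as np

def bit_planes(img):
    # returns eight binary images, one per bit plane
    return [(img >> k) & 1 for k in range(8)]

planes = bit_planes(img)
top = planes[7]    # 8th bit plane: 1 where intensity >= 128, else 0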

  • DIP 1 2017
  • DIP 02 (2017)
Page 85: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image Processing

Week 1

binary image

0 1 1 0 1 1 0 1 1

1 0 1 0 0 1 0 0 1 0

0 0 1 0 0 1 0 0 1

V

The three pixels at the top (first line) in the above example show multiple (ambiguous)

8-adjacency as indicated by the dashed lines This ambiguity is removed by using

m-adjacency

Digital Image Processing

Week 1

A (digital) path (or curve) from pixel p with coordinates (xy) to q with coordinates (st)

is a sequence of distinct pixels with coordinates

and are adjacent 0 0 1 1

1 1

( ) ( ) ( ) ( ) ( )( ) ( ) 12

n n

i i i i

x y x y x y x y s tx y x y i n

The length of the path is n If 0 0( ) ( )n nx y x y the path is closed

Depending on the type of adjacency considered the paths are 4- 8- or m-paths

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image R is a region of the image if R is a connected set

Two regions R1 and R2 are said to be adjacent if 1 2R R form a connected set Regions

that are not adjacent are said to be disjoint When referring to regions only 4- and

8-adjacency are considered

Digital Image Processing

Week 1

Suppose that an image contains K disjoint regions 1 kR k K none of which

touches the image border

the complement of 1

( )K

cu k u u

k

R R R R

We call all the points in Ru the foreground of the image and the points in ( )cuR the

background of the image

The boundary (border or contour) of a region R is the set of points that are adjacent to

points in the complement of R (R)c The border of an image is the set of pixels in the

region that have at least one background neighbor This definition is referred to as the

inner border to distinguish it from the notion of outer border which is the corresponding

border in the background

Digital Image Processing

Week 1

Distance measures

For pixels p q and z with coordinates (xy) (st) and (vw) respectively D is a distance

function or metric if

(a) D(p q) ge 0 D(p q) = 0 iff p=q

(b) D(p q) = D(q p)

(c) D(p z) le D(p q) + D(q z)

The Euclidean distance between p and q is defined as 1

2 2 2 22( ) ( ) ( ) ( ) ( )eD p q x s y t x s y t

The pixels q for which ( )eD p q r are the points contained in a disk of radius r

centered at (x y)

Digital Image Processing

Week 1

The D4 distance (also called city-block distance) between p and q is defined as

4( ) | | | |D p q x s y t

The pixels q for which 4( )D p q r form a diamond centered at (xy)

4

22 1 2

2 2 1 0 1 22 1 2

2

D

The pixels with D4 = 1 are the 4-neighbors of (x y)

The D8 distance (called the chessboard distance) between p and q is defined as

8( ) max| | | |D p q x s y t

The pixels q for which 8( )D p q r form a square centered at (x y)

Digital Image Processing

Week 1

8

2 2 2 2 22 1 1 1 2

2 2 1 0 1 22 1 1 1 22 2 2 2 2

D

The pixels with D8 = 1 are the 8-neighbors of (x y)

D4 and D8 distances are independent of any paths that might exist between p and q

because these distances involve only the coordinates of the point

Digital Image Processing

Week 1

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

11 12 11 12

21 22 21 22

a a b ba a b b

Array product

11 12 11 12 11 11 12 12

21 22 21 22 21 21 22 21

a a b b a b a ba a b b a b a b

Matrix product

11 12 11 12 11 11 12 21 11 12 12 21

21 22 21 22 21 11 22 21 21 12 22 22

a a b b a b a b a b a ba a b b a b a b a b a b

We assume array operations unless stated otherwise

Digital Image Processing

Week 1

Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether it is

linear or nonlinear

( ) ( )H f x y g x y

H is said to be a linear operator if

images1 2 1 2

1 2

( ) ( ) ( ) ( )

H a f x y b f x y a H f x y b H f x y

a b f f

Example of nonlinear operator

the maximum value of the pixels of image max ( )H f f x y f

1 2

0 2 6 5 1 1

2 3 4 7f f a b

Digital Image Processing

Week 1

1 2

0 2 6 5 6 3max max 1 ( 1) max 2

2 3 4 7 2 4a f b f

0 2 6 51 max ( 1) max 3 ( 1)7 4

2 3 4 7

Arithmetic Operations in Image Processing

Let g(xy) denote a corrupted image formed by the addition of noise ( ) ( ) ( )g x y f x y x y

f(xy) ndash the noiseless image η(xy) the noise uncorrelated and has 0 average value

For a random variable z with mean m E[(z-m)2] is the variance (E( ∙ ) is the expected

value) The covariance of two random variables z1 and z2 is defined as E[(z1-m1) (z2-m2)]

The two random variables are uncorrelated when their covariance is 0

Digital Image Processing

Week 1

Objective reduce noise by adding a set of noisy images ( )ig x y (technique frequently

used in image enhancement)

1

1( ) ( )K

ii

g x y g x yK

If the noise satisfies the properties stated above we have

2 2( ) ( )

1( ) ( ) g x y x yE g x y f x yK

( ( ))E g x y is the expected value of g and and 2 2( ) ( )g x y x y are the variances of

and g respectively The standard deviation (square root of the variance) at any point in

the average image is

( ) ( )1

g x y x yK

Digital Image Processing

Week 1

As K increases the variability (as measured by the variance or the standard deviation) of

the pixel values at each location (x y) decreases Because ( ) ( )E g x y f x y this

means that ( )g x y approaches f(x y) as the number of noisy images used in the

averaging process increases

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis Figure 226(a) shows an 8-bit image in which corruption

was simulated by adding to it Gaussian noise with zero mean and a standard deviation of

64 intensity levels Figures 226(b)-(f) show the result of averaging 5 10 20 50 and 100

images respectively

Digital Image Processing

Week 1

Fig 3 Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner) Results of averaging 5 10 20 50

100 noisy images

a b c d e f

Digital Image Processing

Week 1

A frequent application of image subtraction is in the enhancement of differences between

images

(a) (b) (c)

Fig 4 (a) Infrared image of Washington DC area (b) Image obtained from (a) by setting to zero the least

significant bit of each pixel (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a) The two images seem almost the same Figure 4(c) is the difference between

images (a) and (b) Black (0) values in Figure (c) indicate locations where there is no

difference between images (a) and (b)

Digital Image Processing

Week 1

Mask mode radiography ( ) ( ) ( )g x y f x y h x y

h(x y) the mask is an X-ray image of a region of a patientrsquos body captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source The procedure consists of injecting an X-ray contrast medium into the patientrsquos

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed

Digital Image Processing

Week 1

a b c d Fig 5 ndash Angiography ndash subtraction example (a) ndash mask image (b) ndash live image (c) ndash difference between (a) and (b) (d) - image (c) enhanced

Digital Image Processing

Week 1

An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form ( ) ( ) ( )g x y f x y h x y

f(x y) ndash the ldquoperfect imagerdquo h(x y) ndash the shading function

When the shading function is known

( )( )( )

g x yf x yh x y

h(x y) is unknown but we have access to the imaging system we can obtain an

approximation to the shading function by imaging a target of constant intensity When the

sensor is not available often the shading pattern can be estimated from the image

Digital Image Processing

Week 1

(a) (b) (c)

Fig 6 Shading correction (a) ndash Shaded image of a tungsten filament magnified 130 times (b) - shading pattern (c) corrected image

Digital Image Processing

Week 1

Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1rsquos (white) in the ROI and 0rsquos elsewhere There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

(a) (b) (c)

Fig 7 (a) ndash digital dental X-ray image (b) - ROI mask for teeth with fillings (c) product of (a) and (b)

Digital Image Processing

Week 1

In practice most images are displayed using 8 bits the image values are in the range [0255] TIFF JPEG images ndash conversion to this range is automatic The conversion depends on the system used Difference of two images can produce image with values in the range [-255255] Addition of two images ndash range [0510] Many software packages simply set the negative values to 0 and set to 255 all values greater than 255 A more appropriate procedure compute

min( )mf f f

0 ( 255)max( )

ms

m

ff K K K

f

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Digital Image Processing

Week 1

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

( )

1( ) ( )xyr c S

g x y f r cm n

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation ( ) [( )]x y T v w

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image

Digital Image Processing

Week 1

[( )] ( )2 2v wT v w shrinks the original image half its size in both spatial directions

Affine transform

11 1211 21 31

21 2212 22 33

31 32

0[ 1] [ 1] [ 1] 0

1

t tx t v t w t

x y v w T v w t ty t v t w t

t t

(AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

Digital Image Processing

Week 1

inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image (v w) 1( ) ( )v w T x y

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 86: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image Processing

Week 1

A (digital) path (or curve) from pixel p with coordinates (xy) to q with coordinates (st)

is a sequence of distinct pixels with coordinates

and are adjacent 0 0 1 1

1 1

( ) ( ) ( ) ( ) ( )( ) ( ) 12

n n

i i i i

x y x y x y x y s tx y x y i n

The length of the path is n If 0 0( ) ( )n nx y x y the path is closed

Depending on the type of adjacency considered the paths are 4- 8- or m-paths

Let S denote a subset of pixels in an image Two pixels p and q are said to be connected

in S if there exists a path between them consisting only of pixels from S

S is a connected set if there is a path in S between any 2 pixels in S

Let R be a subset of pixels in an image R is a region of the image if R is a connected set

Two regions R1 and R2 are said to be adjacent if 1 2R R form a connected set Regions

that are not adjacent are said to be disjoint When referring to regions only 4- and

8-adjacency are considered

Digital Image Processing

Week 1

Suppose that an image contains K disjoint regions 1 kR k K none of which

touches the image border

the complement of 1

( )K

cu k u u

k

R R R R

We call all the points in Ru the foreground of the image and the points in ( )cuR the

background of the image

The boundary (border or contour) of a region R is the set of points that are adjacent to

points in the complement of R (R)c The border of an image is the set of pixels in the

region that have at least one background neighbor This definition is referred to as the

inner border to distinguish it from the notion of outer border which is the corresponding

border in the background

Digital Image Processing

Week 1

Distance measures

For pixels p q and z with coordinates (xy) (st) and (vw) respectively D is a distance

function or metric if

(a) D(p q) ge 0 D(p q) = 0 iff p=q

(b) D(p q) = D(q p)

(c) D(p z) le D(p q) + D(q z)

The Euclidean distance between p and q is defined as 1

2 2 2 22( ) ( ) ( ) ( ) ( )eD p q x s y t x s y t

The pixels q for which ( )eD p q r are the points contained in a disk of radius r

centered at (x y)

Digital Image Processing

Week 1

The D4 distance (also called city-block distance) between p and q is defined as

4( ) | | | |D p q x s y t

The pixels q for which 4( )D p q r form a diamond centered at (xy)

4

22 1 2

2 2 1 0 1 22 1 2

2

D

The pixels with D4 = 1 are the 4-neighbors of (x y)

The D8 distance (called the chessboard distance) between p and q is defined as

8( ) max| | | |D p q x s y t

The pixels q for which 8( )D p q r form a square centered at (x y)

Digital Image Processing

Week 1

8

2 2 2 2 22 1 1 1 2

2 2 1 0 1 22 1 1 1 22 2 2 2 2

D

The pixels with D8 = 1 are the 8-neighbors of (x y)

D4 and D8 distances are independent of any paths that might exist between p and q

because these distances involve only the coordinates of the point

Digital Image Processing

Week 1

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

11 12 11 12

21 22 21 22

a a b ba a b b

Array product

11 12 11 12 11 11 12 12

21 22 21 22 21 21 22 21

a a b b a b a ba a b b a b a b

Matrix product

11 12 11 12 11 11 12 21 11 12 12 21

21 22 21 22 21 11 22 21 21 12 22 22

a a b b a b a b a b a ba a b b a b a b a b a b

We assume array operations unless stated otherwise

Digital Image Processing

Week 1

Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether it is

linear or nonlinear

( ) ( )H f x y g x y

H is said to be a linear operator if

images1 2 1 2

1 2

( ) ( ) ( ) ( )

H a f x y b f x y a H f x y b H f x y

a b f f

Example of nonlinear operator

the maximum value of the pixels of image max ( )H f f x y f

1 2

0 2 6 5 1 1

2 3 4 7f f a b

Digital Image Processing

Week 1

1 2

0 2 6 5 6 3max max 1 ( 1) max 2

2 3 4 7 2 4a f b f

0 2 6 51 max ( 1) max 3 ( 1)7 4

2 3 4 7

Arithmetic Operations in Image Processing

Let g(xy) denote a corrupted image formed by the addition of noise ( ) ( ) ( )g x y f x y x y

f(xy) ndash the noiseless image η(xy) the noise uncorrelated and has 0 average value

For a random variable z with mean m E[(z-m)2] is the variance (E( ∙ ) is the expected

value) The covariance of two random variables z1 and z2 is defined as E[(z1-m1) (z2-m2)]

The two random variables are uncorrelated when their covariance is 0

Digital Image Processing

Week 1

Objective reduce noise by adding a set of noisy images ( )ig x y (technique frequently

used in image enhancement)

1

1( ) ( )K

ii

g x y g x yK

If the noise satisfies the properties stated above we have

2 2( ) ( )

1( ) ( ) g x y x yE g x y f x yK

( ( ))E g x y is the expected value of g and and 2 2( ) ( )g x y x y are the variances of

and g respectively The standard deviation (square root of the variance) at any point in

the average image is

( ) ( )1

g x y x yK

Digital Image Processing

Week 1

As K increases the variability (as measured by the variance or the standard deviation) of

the pixel values at each location (x y) decreases Because ( ) ( )E g x y f x y this

means that ( )g x y approaches f(x y) as the number of noisy images used in the

averaging process increases

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis Figure 226(a) shows an 8-bit image in which corruption

was simulated by adding to it Gaussian noise with zero mean and a standard deviation of

64 intensity levels Figures 226(b)-(f) show the result of averaging 5 10 20 50 and 100

images respectively

Digital Image Processing

Week 1

Fig 3 Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner) Results of averaging 5 10 20 50

100 noisy images

a b c d e f

Digital Image Processing

Week 1

A frequent application of image subtraction is in the enhancement of differences between

images

(a) (b) (c)

Fig 4 (a) Infrared image of Washington DC area (b) Image obtained from (a) by setting to zero the least

significant bit of each pixel (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a) The two images seem almost the same Figure 4(c) is the difference between

images (a) and (b) Black (0) values in Figure (c) indicate locations where there is no

difference between images (a) and (b)

Digital Image Processing

Week 1

Mask mode radiography ( ) ( ) ( )g x y f x y h x y

h(x y) the mask is an X-ray image of a region of a patientrsquos body captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source The procedure consists of injecting an X-ray contrast medium into the patientrsquos

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed

Digital Image Processing

Week 1

a b c d Fig 5 ndash Angiography ndash subtraction example (a) ndash mask image (b) ndash live image (c) ndash difference between (a) and (b) (d) - image (c) enhanced

Digital Image Processing

Week 1

An important application of image multiplication (and division) is shading correction. Suppose that an imaging sensor produces images in the form g(x, y) = f(x, y) h(x, y), where f(x, y) is the "perfect image" and h(x, y) is the shading function.

When the shading function is known,

$$f(x, y) = \frac{g(x, y)}{h(x, y)}$$

If h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.

Fig. 6 – Shading correction: (a) shaded image of a tungsten filament, magnified ~130 times; (b) shading pattern; (c) corrected image.
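A hedged sketch of the division-based correction (Python/NumPy; the synthetic shading function, the normalization, and all names are my illustration, not part of the slides):

    import numpy as np

    def correct_shading(g, flat, eps=1e-6):
        # flat: image of a constant-intensity target, proportional to h(x, y)
        h = flat / flat.max()                   # normalized shading estimate
        return g / np.maximum(h, eps)           # f(x, y) = g(x, y) / h(x, y)

    # synthetic demo: smooth radial shading over a striped "perfect" image
    y, x = np.mgrid[0:256, 0:256]
    h = 0.4 + 0.6 * np.exp(-((x - 128) ** 2 + (y - 128) ** 2) / (2 * 80.0 ** 2))
    f = np.tile([0.0, 255.0], 256 * 256 // 2).reshape(256, 256)
    g = f * h
    f_hat = correct_shading(g, flat=200.0 * h)  # flat-field image proportional to h
    print(np.abs(f_hat - f).max())              # ~0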

Another use of image multiplication is in masking, also called region-of-interest (ROI) operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of the ROI can be arbitrary, although a rectangular shape is the most common.

Fig. 7 – (a) Digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b).

In practice, most images are displayed using 8 bits, so image values are expected to lie in the range [0, 255]. For TIFF or JPEG images, conversion to this range is automatic, but the conversion depends on the system used. The difference of two images can produce an image with values in the range [-255, 255], and the addition of two images one in the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure is to compute

$$f_m = f - \min(f), \qquad f_s = K\,\frac{f_m}{\max(f_m)}$$

which yields values in the full range [0, K] (K = 255 for 8-bit displays).
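A sketch of that scaling (Python/NumPy; the function name is mine, and a constant image would additionally need a guard against max(f_m) = 0):

    import numpy as np

    def scale_to_range(f, K=255):
        # f_m = f - min(f); f_s = K * f_m / max(f_m), giving values in [0, K]
        fm = f - f.min()
        return (K * fm / fm.max()).astype(np.uint8)

    diff = np.array([[-255, 0], [128, 255]], dtype=float)   # e.g. a difference image
    print(scale_to_range(diff))                             # [[0 127] [191 255]]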

Spatial Operations

Spatial operations are performed directly on the pixels of a given image. There are three categories of spatial operations:

- single-pixel operations
- neighborhood operations
- geometric spatial transformations

Single-pixel operations

- change the values of intensity of the individual pixels: s = T(z), where z is the intensity of a pixel in the original image and s is the intensity of the corresponding pixel in the processed image.

Neighborhood operations

Let S_xy denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y), based on the values of the intensities of the points in S_xy. For example, if S_xy is a rectangular neighborhood of size m × n centered at (x, y), we can assign the new value of intensity by computing the average value of the pixels in S_xy:

$$g(x, y) = \frac{1}{mn} \sum_{(r, c) \in S_{xy}} f(r, c)$$

The net effect is to perform local blurring of the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
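A direct, unoptimized sketch of this neighborhood average (Python/NumPy, my names; odd m and n are assumed, and border pixels are left unchanged rather than padded):

    import numpy as np

    def local_mean(f, m=3, n=3):
        # replace each interior pixel by the mean of its m x n neighborhood S_xy
        g = f.astype(float).copy()
        a, b = m // 2, n // 2
        for x in range(a, f.shape[0] - a):
            for y in range(b, f.shape[1] - b):
                g[x, y] = f[x - a:x + a + 1, y - b:y + b + 1].mean()
        return g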

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image;
- these transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules).

A geometric transformation consists of 2 basic operations:

1. a spatial transformation of coordinates;
2. intensity interpolation, which assigns intensity values to the spatially transformed pixels.

The coordinate system transformation: (x, y) = T[(v, w)], where (v, w) are the pixel coordinates in the original image and (x, y) are the pixel coordinates in the transformed image.

For example, T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform:

$$[\,x \;\; y \;\; 1\,] = [\,v \;\; w \;\; 1\,]\,\mathbf{T} = [\,v \;\; w \;\; 1\,] \begin{bmatrix} t_{11} & t_{12} & 0 \\ t_{21} & t_{22} & 0 \\ t_{31} & t_{32} & 1 \end{bmatrix} \qquad \text{(AT)}$$

or, equivalently, x = t11 v + t21 w + t31 and y = t12 v + t22 w + t32.

This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3×3 matrix equal to the matrix product of the scaling, rotation, and translation matrices from Table 1.

Table 1 – Affine transformations (identity, scaling, rotation, translation, shearing) and their transformation matrices.

The preceding transformations relocate pixels of an image to new locations. To complete the process, we have to assign intensity values to those locations. This task is done by using intensity interpolation (such as nearest-neighbor, bilinear, or bicubic interpolation).

In practice, we can use equation (AT) in two basic ways.

Forward mapping: scan the pixels of the input image (v, w) and compute the new spatial location (x, y) of the corresponding pixel in the new image, using (AT) directly. Problems:

- intensity assignment when 2 or more pixels of the original image are transformed to the same location in the output image;
- some output locations may have no correspondent in the original image (no intensity assignment).

Inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image, (v, w) = T^{-1}[(x, y)]; then interpolate among the nearest input pixels to determine the intensity of the output pixel value.

Inverse mappings are more efficient to implement than forward mappings, and they are used in numerous commercial implementations of spatial transformations (in MATLAB, for example).
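A minimal sketch of inverse mapping for the affine case, with nearest-neighbor interpolation (Python/NumPy; this illustrates the idea and is not MATLAB's actual implementation):

    import numpy as np

    def affine_warp(f, T):
        # output pixel (x, y): compute (v, w, 1) = (x, y, 1) @ inv(T), then take
        # the nearest input pixel; locations outside the input stay 0
        Tinv = np.linalg.inv(T)
        g = np.zeros_like(f)
        for x in range(g.shape[0]):
            for y in range(g.shape[1]):
                v, w, _ = np.array([x, y, 1.0]) @ Tinv
                v, w = int(round(v)), int(round(w))
                if 0 <= v < f.shape[0] and 0 <= w < f.shape[1]:
                    g[x, y] = f[v, w]
        return g

    # example: scaling by 2 in both directions, in the row-vector convention of (AT)
    T = np.array([[2.0, 0.0, 0.0],
                  [0.0, 2.0, 0.0],
                  [0.0, 0.0, 1.0]])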

Image registration – align two or more images of the same scene.

In image registration we have available the input and output images, but the specific transformation that produced the output image from the input is generally unknown. The problem is to estimate the transformation function and then use it to register the two images. For example:

- it may be of interest to align (register) two or more images taken at approximately the same time, but using different imaging systems (an MRI scanner and a PET scanner, say);
- or to align images of a given location taken by the same instrument at different moments of time (satellite images).

The problem can be solved using tie points (also called control points), which are corresponding points whose locations are known precisely in the input and reference images.

How to select tie points:

- interactively selecting them;
- using algorithms that try to detect these points automatically;
- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors; these objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points.

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points, both in the input image and in the reference image. A simple model based on a bilinear approximation is given by

$$x = c_1 v + c_2 w + c_3 v w + c_4$$
$$y = c_5 v + c_6 w + c_7 v w + c_8$$

where (v, w) and (x, y) are the coordinates of the tie points (the 4 point pairs yield an 8×8 linear system for the coefficients c_i).
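For concreteness, a sketch of estimating the eight coefficients from 4 tie-point pairs (Python/NumPy; the tie-point values are made up for illustration). Each output coordinate gives one 4×4 solve, equivalent to the 8×8 system mentioned above:

    import numpy as np

    # four tie points: (v, w) in the input image, (x, y) in the reference image
    vw = np.array([(10.0, 12.0), (200.0, 15.0), (14.0, 190.0), (210.0, 205.0)])
    xy = np.array([(13.0, 10.0), (205.0, 18.0), (11.0, 194.0), (215.0, 209.0)])

    # rows [v, w, v*w, 1] multiply (c1, c2, c3, c4) for x and (c5, ..., c8) for y
    A = np.column_stack([vw[:, 0], vw[:, 1], vw[:, 0] * vw[:, 1], np.ones(4)])
    c1_4 = np.linalg.solve(A, xy[:, 0])
    c5_8 = np.linalg.solve(A, xy[:, 1])
    print(c1_4, c5_8)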

When 4 tie points are insufficient to obtain satisfactory registration, an approach used frequently is to select a larger number of tie points and, using this new set of tie points, subdivide the image into rectangular subregions marked by groups of 4 tie points, applying the transformation model described above on each such subregion. The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometric distortion.

(a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c).

Probabilistic Methods

Let z_i, i = 0, 1, ..., L-1, denote the values of all possible intensities in an M×N digital image, and let p(z_k) be the probability that the intensity level z_k occurs in the given image:

$$p(z_k) = \frac{n_k}{MN}$$

where n_k is the number of times that intensity z_k occurs in the image (MN is the total number of pixels in the image). Note that

$$\sum_{k=0}^{L-1} p(z_k) = 1$$

The mean (average) intensity of an image is given by

$$m = \sum_{k=0}^{L-1} z_k\, p(z_k)$$

The variance of the intensities is

$$\sigma^2 = \sum_{k=0}^{L-1} (z_k - m)^2\, p(z_k)$$

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

$$\mu_n(z) = \sum_{k=0}^{L-1} (z_k - m)^n\, p(z_k)$$

(note that μ0(z) = 1, μ1(z) = 0, and μ2(z) = σ^2).

- μ3(z) > 0: the intensities are biased to values higher than the mean;
- μ3(z) < 0: the intensities are biased to values lower than the mean;
- μ3(z) = 0: the intensities are distributed approximately equally on both sides of the mean.

Fig. 1 – (a) Low contrast; (b) medium contrast; (c) high contrast.

Figure 1(a) – standard deviation 14.3 (variance = 204.5)
Figure 1(b) – standard deviation 31.6 (variance = 998.6)
Figure 1(c) – standard deviation 49.2 (variance = 2420.6)
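These statistics come straight from the normalized histogram; a sketch (Python/NumPy, assuming an 8-bit image so that intensities index np.bincount directly):

    import numpy as np

    def intensity_moments(img, L=256):
        # p(z_k) = n_k / MN from the histogram of an 8-bit image
        p = np.bincount(img.ravel(), minlength=L) / img.size
        z = np.arange(L)
        m = np.sum(z * p)                       # mean intensity
        mu2 = np.sum((z - m) ** 2 * p)          # variance (contrast measure)
        mu3 = np.sum((z - m) ** 3 * p)          # third moment (bias about the mean)
        return m, mu2, mu3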

Intensity Transformations and Spatial Filtering

$$g(x, y) = T[f(x, y)]$$

f(x, y) – input image; g(x, y) – output image; T – an operator on f, defined over a neighborhood of (x, y).

- The neighborhood of the point (x, y), denoted S_xy, usually is rectangular, centered on (x, y), and much smaller in size than the image.
- Spatial filtering: the operator T (the neighborhood together with the operation applied on it) is called a spatial filter (also: spatial mask, kernel, template, or window).
- When S_xy = {(x, y)}, T becomes an intensity (gray-level or mapping) transformation function, s = T(r), where s and r denote, respectively, the intensities of g and f at (x, y).

Figure 2 (left): T produces an output image of higher contrast than the original, by darkening the intensity levels below k and brightening the levels above k; this technique is called contrast stretching.

Fig. 2 – Intensity transformation functions: (left) contrast stretching; (right) thresholding function.

Figure 2 (right): T produces a binary output image; a mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L-1] is obtained using the function

$$s = T(r) = L - 1 - r$$

- the equivalent of a photographic negative;
- a technique suited for enhancing white or gray detail embedded in dark regions of an image.

(Left: original image; right: its negative.)

Log Transformations

$$s = T(r) = c\,\log(1 + r), \qquad c \text{ constant}, \; r \ge 0$$

(Plot: some basic intensity transformation functions.)

This transformation maps a narrow range of low intensity values in the input into a wider range of output levels. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values; the opposite is true of the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a) – intensity values in the range 0 to 1.5 × 10^6.
Figure 4(b) – log transformation of Figure 4(a) with c = 1; range 0 to 6.2.

Fig. 4 – (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.

Power-Law (Gamma) Transformations

$$s = T(r) = c\,r^{\gamma}, \qquad c, \gamma \text{ positive constants}$$

(sometimes written s = c (r + ε)^γ, to account for a measurable offset when the input is zero)

(Plots of the gamma transformation for different values of γ, with c = 1.)

Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher values of the input. The curves with γ > 1 have the opposite effect of those generated with γ < 1; c = γ = 1 gives the identity transformation.

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.

(a) Aerial image; (b)–(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively.
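All three point transformations above act on each pixel independently; a sketch (Python/NumPy, with my own display rescaling to [0, 255], which the slides leave implicit):

    import numpy as np

    def negative(r, L=256):
        return (L - 1) - r                          # s = L - 1 - r

    def log_transform(r, c=1.0):
        s = c * np.log1p(r.astype(float))           # s = c log(1 + r)
        return np.uint8(255 * s / s.max())          # rescale for display

    def gamma_transform(r, gamma, c=1.0):
        s = c * (r / 255.0) ** gamma                # s = c r^gamma on r in [0, 1]
        return np.uint8(255 * np.clip(s, 0, 1))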

Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image, so that it spans the full intensity range of the recording tool or display device.

Fig. 5 – (a) piecewise-linear transformation function; (b) low-contrast 8-bit image; (c) result of contrast stretching; (d) result of thresholding.

$$T(r) = \begin{cases} \dfrac{s_1}{r_1}\,r, & r \in [0, r_1] \\[1ex] \dfrac{s_2 - s_1}{r_2 - r_1}\,(r - r_1) + s_1, & r \in (r_1, r_2] \\[1ex] \dfrac{L - 1 - s_2}{L - 1 - r_2}\,(r - r_2) + s_2, & r \in (r_2, L - 1] \end{cases}$$

r1 = s1 and r2 = s2: identity transformation (no change); r1 = r2, s1 = 0, s2 = L-1: thresholding function.

Figure 5(b) shows an 8-bit image with low contrast. Figure 5(c) shows the result of contrast stretching, obtained by setting the parameters (r1, s1) = (r_min, 0) and (r2, s2) = (r_max, L-1), where r_min and r_max denote the minimum and maximum gray levels in the image, respectively. Thus, the transformation function stretched the levels linearly from their original range to the full range [0, L-1]. Figure 5(d) shows the result of using the thresholding function, with (r1, s1) = (m, 0) and (r2, s2) = (m, L-1), where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.
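A sketch of the stretch with the (r1, s1), (r2, s2) parameterization (Python/NumPy; np.interp performs exactly this piecewise-linear interpolation, but it needs increasing breakpoints, so for thresholding take r2 = r1 + a small ε):

    import numpy as np

    def contrast_stretch(img, r1, s1, r2, s2, L=256):
        # piecewise-linear map through (0, 0), (r1, s1), (r2, s2), (L-1, L-1)
        s = np.interp(img.astype(float), [0, r1, r2, L - 1], [0, s1, s2, L - 1])
        return s.astype(np.uint8)

    # full-range stretching as in Figure 5(c):
    #   contrast_stretch(img, img.min(), 0, img.max(), 255)
    # thresholding at the mean m, as in Figure 5(d):
    #   contrast_stretch(img, m, 0, m + 1e-6, 255)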

Intensity-level slicing

- highlighting a specific range of intensities in an image.

There are two approaches to intensity-level slicing (the two corresponding transformation functions are described with Fig. 6 below):

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities;
2. brighten (or darken) the desired range of intensities, but leave unchanged all other intensities in the image.

Figure 6 (left) – aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the scale of intensities. This type of enhancement produces a binary image, which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example).

(First transformation: highlights the intensity range [A, B] and reduces all other intensities to a lower level. Second transformation: highlights the range [A, B] and preserves all other intensities.)

In Figure 6 (right), the second technique was used: a band of intensities in the mid-gray range around the mean intensity was set to black, while the other intensities remain unchanged.

Fig. 6 – Aortic angiogram and intensity-sliced versions.

Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2. Bit-plane slicing highlights the contribution made to the whole image appearance by each of the bits: an 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit, ..., plane 8 – the highest-order bit).

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all the intensities between 0 and 127 to 0, and all the levels between 128 and 255 to 1. The bit-plane slicing technique is useful for analyzing the relative importance of each bit in the image, which helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
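Extracting the planes is a shift-and-mask per bit; a sketch (Python/NumPy, my names):

    import numpy as np

    def bit_planes(img):
        # planes[0] = lowest-order plane, planes[7] = highest-order plane;
        # planes[7] equals thresholding at 128 (0..127 -> 0, 128..255 -> 1)
        img = img.astype(np.uint8)
        return [(img >> k) & 1 for k in range(8)]

    # approximate reconstruction from the two highest planes (compression idea):
    # img_approx = (planes[7] << 7) + (planes[6] << 6)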


Regions, foreground, and background

Suppose that an image contains K disjoint regions R_k, k = 1, ..., K, none of which touches the image border. Let R_u denote the union of all K regions,

$$R_u = \bigcup_{k=1}^{K} R_k$$

and let (R_u)^c denote its complement. We call all the points in R_u the foreground of the image, and the points in (R_u)^c the background of the image.

The boundary (border or contour) of a region R is the set of points of R that are adjacent to points in the complement (R)^c; that is, the border of a region is the set of pixels in the region that have at least one background neighbor. This definition is referred to as the inner border, to distinguish it from the notion of outer border, which is the corresponding border in the background.

Distance measures

For pixels p, q, and z, with coordinates (x, y), (s, t), and (v, w), respectively, D is a distance function, or metric, if

(a) D(p, q) ≥ 0, with D(p, q) = 0 iff p = q;
(b) D(p, q) = D(q, p);
(c) D(p, z) ≤ D(p, q) + D(q, z).

The Euclidean distance between p and q is defined as

$$D_e(p, q) = \left[ (x - s)^2 + (y - t)^2 \right]^{1/2}$$

The pixels q for which D_e(p, q) ≤ r are the points contained in a disk of radius r centered at (x, y).

The D4 distance (also called city-block distance) between p and q is defined as

$$D_4(p, q) = |x - s| + |y - t|$$

The pixels q for which D_4(p, q) ≤ r form a diamond centered at (x, y). For example, the pixels with D_4 ≤ 2 form the following contours of constant distance:

            2
          2 1 2
        2 1 0 1 2
          2 1 2
            2

The pixels with D_4 = 1 are the 4-neighbors of (x, y).

The D8 distance (also called chessboard distance) between p and q is defined as

$$D_8(p, q) = \max(|x - s|, |y - t|)$$

The pixels q for which D_8(p, q) ≤ r form a square centered at (x, y). For example, the pixels with D_8 ≤ 2 form the following contours of constant distance:

        2 2 2 2 2
        2 1 1 1 2
        2 1 0 1 2
        2 1 1 1 2
        2 2 2 2 2

The pixels with D_8 = 1 are the 8-neighbors of (x, y).

The D4 and D8 distances are independent of any paths that might exist between p and q, because these distances involve only the coordinates of the points.

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis. Consider two 2×2 images

$$\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \quad \text{and} \quad \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix}$$

Their array product is

$$\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} = \begin{bmatrix} a_{11} b_{11} & a_{12} b_{12} \\ a_{21} b_{21} & a_{22} b_{22} \end{bmatrix} \quad \text{(elementwise)}$$

while their matrix product is

$$\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} = \begin{bmatrix} a_{11} b_{11} + a_{12} b_{21} & a_{11} b_{12} + a_{12} b_{22} \\ a_{21} b_{11} + a_{22} b_{21} & a_{21} b_{12} + a_{22} b_{22} \end{bmatrix}$$

We assume array operations throughout, unless stated otherwise.
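In NumPy the same distinction is the elementwise operator * versus the matrix operator @; an illustrative check:

    import numpy as np

    A = np.array([[1, 2], [3, 4]])
    B = np.array([[5, 6], [7, 8]])
    print(A * B)    # array product:  [[ 5 12] [21 32]]
    print(A @ B)    # matrix product: [[19 22] [43 50]]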

Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether a method is linear or nonlinear. Consider a general operator H that produces an output image g(x, y) from an input image f(x, y):

$$H[f(x, y)] = g(x, y)$$

H is said to be a linear operator if

$$H[a f_1(x, y) + b f_2(x, y)] = a\,H[f_1(x, y)] + b\,H[f_2(x, y)]$$

for any two images f_1, f_2 and any two scalars a, b.

Example of a nonlinear operator: the maximum value of the pixels of an image, H[f] = max f(x, y). Take

$$f_1 = \begin{bmatrix} 0 & 2 \\ 2 & 3 \end{bmatrix}, \quad f_2 = \begin{bmatrix} 6 & 5 \\ 4 & 7 \end{bmatrix}, \quad a = 1, \; b = -1$$

Then

$$\max\!\left[ 1 \cdot \begin{bmatrix} 0 & 2 \\ 2 & 3 \end{bmatrix} + (-1) \cdot \begin{bmatrix} 6 & 5 \\ 4 & 7 \end{bmatrix} \right] = \max \begin{bmatrix} -6 & -3 \\ -2 & -4 \end{bmatrix} = -2$$

whereas

$$1 \cdot \max \begin{bmatrix} 0 & 2 \\ 2 & 3 \end{bmatrix} + (-1) \cdot \max \begin{bmatrix} 6 & 5 \\ 4 & 7 \end{bmatrix} = 3 - 7 = -4 \neq -2$$

so the max operator is nonlinear.
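The same computation, checked numerically (an illustrative Python/NumPy sketch):

    import numpy as np

    f1 = np.array([[0, 2], [2, 3]])
    f2 = np.array([[6, 5], [4, 7]])
    a, b = 1, -1

    lhs = np.max(a * f1 + b * f2)             # H[a f1 + b f2] = -2
    rhs = a * np.max(f1) + b * np.max(f2)     # a H[f1] + b H[f2] = 3 - 7 = -4
    print(lhs, rhs, lhs == rhs)               # -2 -4 False: max is not linear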

Arithmetic Operations in Image Processing

Let g(xy) denote a corrupted image formed by the addition of noise ( ) ( ) ( )g x y f x y x y

f(xy) ndash the noiseless image η(xy) the noise uncorrelated and has 0 average value

For a random variable z with mean m E[(z-m)2] is the variance (E( ∙ ) is the expected

value) The covariance of two random variables z1 and z2 is defined as E[(z1-m1) (z2-m2)]

The two random variables are uncorrelated when their covariance is 0

Digital Image Processing

Week 1

Objective reduce noise by adding a set of noisy images ( )ig x y (technique frequently

used in image enhancement)

1

1( ) ( )K

ii

g x y g x yK

If the noise satisfies the properties stated above we have

2 2( ) ( )

1( ) ( ) g x y x yE g x y f x yK

( ( ))E g x y is the expected value of g and and 2 2( ) ( )g x y x y are the variances of

and g respectively The standard deviation (square root of the variance) at any point in

the average image is

( ) ( )1

g x y x yK

Digital Image Processing

Week 1

As K increases the variability (as measured by the variance or the standard deviation) of

the pixel values at each location (x y) decreases Because ( ) ( )E g x y f x y this

means that ( )g x y approaches f(x y) as the number of noisy images used in the

averaging process increases

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis Figure 226(a) shows an 8-bit image in which corruption

was simulated by adding to it Gaussian noise with zero mean and a standard deviation of

64 intensity levels Figures 226(b)-(f) show the result of averaging 5 10 20 50 and 100

images respectively

Digital Image Processing

Week 1

Fig 3 Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner) Results of averaging 5 10 20 50

100 noisy images

a b c d e f

Digital Image Processing

Week 1

A frequent application of image subtraction is in the enhancement of differences between

images

(a) (b) (c)

Fig 4 (a) Infrared image of Washington DC area (b) Image obtained from (a) by setting to zero the least

significant bit of each pixel (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a) The two images seem almost the same Figure 4(c) is the difference between

images (a) and (b) Black (0) values in Figure (c) indicate locations where there is no

difference between images (a) and (b)

Digital Image Processing

Week 1

Mask mode radiography ( ) ( ) ( )g x y f x y h x y

h(x y) the mask is an X-ray image of a region of a patientrsquos body captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source The procedure consists of injecting an X-ray contrast medium into the patientrsquos

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed

Digital Image Processing

Week 1

a b c d Fig 5 ndash Angiography ndash subtraction example (a) ndash mask image (b) ndash live image (c) ndash difference between (a) and (b) (d) - image (c) enhanced

Digital Image Processing

Week 1

An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form ( ) ( ) ( )g x y f x y h x y

f(x y) ndash the ldquoperfect imagerdquo h(x y) ndash the shading function

When the shading function is known

( )( )( )

g x yf x yh x y

h(x y) is unknown but we have access to the imaging system we can obtain an

approximation to the shading function by imaging a target of constant intensity When the

sensor is not available often the shading pattern can be estimated from the image

Digital Image Processing

Week 1

(a) (b) (c)

Fig 6 Shading correction (a) ndash Shaded image of a tungsten filament magnified 130 times (b) - shading pattern (c) corrected image

Digital Image Processing

Week 1

Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1rsquos (white) in the ROI and 0rsquos elsewhere There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

(a) (b) (c)

Fig 7 (a) ndash digital dental X-ray image (b) - ROI mask for teeth with fillings (c) product of (a) and (b)

Digital Image Processing

Week 1

In practice most images are displayed using 8 bits the image values are in the range [0255] TIFF JPEG images ndash conversion to this range is automatic The conversion depends on the system used Difference of two images can produce image with values in the range [-255255] Addition of two images ndash range [0510] Many software packages simply set the negative values to 0 and set to 255 all values greater than 255 A more appropriate procedure compute

min( )mf f f

0 ( 255)max( )

ms

m

ff K K K

f

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Digital Image Processing

Week 1

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

( )

1( ) ( )xyr c S

g x y f r cm n

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation ( ) [( )]x y T v w

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image

Digital Image Processing

Week 1

[( )] ( )2 2v wT v w shrinks the original image half its size in both spatial directions

Affine transform

11 1211 21 31

21 2212 22 33

31 32

0[ 1] [ 1] [ 1] 0

1

t tx t v t w t

x y v w T v w t ty t v t w t

t t

(AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

Digital Image Processing

Week 1

inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image (v w) 1( ) ( )v w T x y

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 88: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image Processing

Week 1

Distance measures

For pixels p q and z with coordinates (xy) (st) and (vw) respectively D is a distance

function or metric if

(a) D(p q) ge 0 D(p q) = 0 iff p=q

(b) D(p q) = D(q p)

(c) D(p z) le D(p q) + D(q z)

The Euclidean distance between p and q is defined as 1

2 2 2 22( ) ( ) ( ) ( ) ( )eD p q x s y t x s y t

The pixels q for which ( )eD p q r are the points contained in a disk of radius r

centered at (x y)

Digital Image Processing

Week 1

The D4 distance (also called city-block distance) between p and q is defined as

4( ) | | | |D p q x s y t

The pixels q for which 4( )D p q r form a diamond centered at (xy)

4

22 1 2

2 2 1 0 1 22 1 2

2

D

The pixels with D4 = 1 are the 4-neighbors of (x y)

The D8 distance (called the chessboard distance) between p and q is defined as

8( ) max| | | |D p q x s y t

The pixels q for which 8( )D p q r form a square centered at (x y)

Digital Image Processing

Week 1

8

2 2 2 2 22 1 1 1 2

2 2 1 0 1 22 1 1 1 22 2 2 2 2

D

The pixels with D8 = 1 are the 8-neighbors of (x y)

D4 and D8 distances are independent of any paths that might exist between p and q

because these distances involve only the coordinates of the point

Digital Image Processing

Week 1

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

11 12 11 12

21 22 21 22

a a b ba a b b

Array product

11 12 11 12 11 11 12 12

21 22 21 22 21 21 22 21

a a b b a b a ba a b b a b a b

Matrix product

11 12 11 12 11 11 12 21 11 12 12 21

21 22 21 22 21 11 22 21 21 12 22 22

a a b b a b a b a b a ba a b b a b a b a b a b

We assume array operations unless stated otherwise

Digital Image Processing

Week 1

Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether it is

linear or nonlinear

( ) ( )H f x y g x y

H is said to be a linear operator if

images1 2 1 2

1 2

( ) ( ) ( ) ( )

H a f x y b f x y a H f x y b H f x y

a b f f

Example of nonlinear operator

the maximum value of the pixels of image max ( )H f f x y f

1 2

0 2 6 5 1 1

2 3 4 7f f a b

Digital Image Processing

Week 1

1 2

0 2 6 5 6 3max max 1 ( 1) max 2

2 3 4 7 2 4a f b f

0 2 6 51 max ( 1) max 3 ( 1)7 4

2 3 4 7

Arithmetic Operations in Image Processing

Let g(xy) denote a corrupted image formed by the addition of noise ( ) ( ) ( )g x y f x y x y

f(xy) ndash the noiseless image η(xy) the noise uncorrelated and has 0 average value

For a random variable z with mean m E[(z-m)2] is the variance (E( ∙ ) is the expected

value) The covariance of two random variables z1 and z2 is defined as E[(z1-m1) (z2-m2)]

The two random variables are uncorrelated when their covariance is 0

Digital Image Processing

Week 1

Objective reduce noise by adding a set of noisy images ( )ig x y (technique frequently

used in image enhancement)

1

1( ) ( )K

ii

g x y g x yK

If the noise satisfies the properties stated above we have

2 2( ) ( )

1( ) ( ) g x y x yE g x y f x yK

( ( ))E g x y is the expected value of g and and 2 2( ) ( )g x y x y are the variances of

and g respectively The standard deviation (square root of the variance) at any point in

the average image is

( ) ( )1

g x y x yK

Digital Image Processing

Week 1

As K increases the variability (as measured by the variance or the standard deviation) of

the pixel values at each location (x y) decreases Because ( ) ( )E g x y f x y this

means that ( )g x y approaches f(x y) as the number of noisy images used in the

averaging process increases

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis Figure 226(a) shows an 8-bit image in which corruption

was simulated by adding to it Gaussian noise with zero mean and a standard deviation of

64 intensity levels Figures 226(b)-(f) show the result of averaging 5 10 20 50 and 100

images respectively

Digital Image Processing

Week 1

Fig 3 Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner) Results of averaging 5 10 20 50

100 noisy images

a b c d e f

Digital Image Processing

Week 1

A frequent application of image subtraction is in the enhancement of differences between

images

(a) (b) (c)

Fig 4 (a) Infrared image of Washington DC area (b) Image obtained from (a) by setting to zero the least

significant bit of each pixel (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a) The two images seem almost the same Figure 4(c) is the difference between

images (a) and (b) Black (0) values in Figure (c) indicate locations where there is no

difference between images (a) and (b)

Digital Image Processing

Week 1

Mask mode radiography ( ) ( ) ( )g x y f x y h x y

h(x y) the mask is an X-ray image of a region of a patientrsquos body captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source The procedure consists of injecting an X-ray contrast medium into the patientrsquos

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed

Digital Image Processing

Week 1

a b c d Fig 5 ndash Angiography ndash subtraction example (a) ndash mask image (b) ndash live image (c) ndash difference between (a) and (b) (d) - image (c) enhanced

Digital Image Processing

Week 1

An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form ( ) ( ) ( )g x y f x y h x y

f(x y) ndash the ldquoperfect imagerdquo h(x y) ndash the shading function

When the shading function is known

( )( )( )

g x yf x yh x y

h(x y) is unknown but we have access to the imaging system we can obtain an

approximation to the shading function by imaging a target of constant intensity When the

sensor is not available often the shading pattern can be estimated from the image

Digital Image Processing

Week 1

(a) (b) (c)

Fig 6 Shading correction (a) ndash Shaded image of a tungsten filament magnified 130 times (b) - shading pattern (c) corrected image

Digital Image Processing

Week 1

Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1rsquos (white) in the ROI and 0rsquos elsewhere There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

(a) (b) (c)

Fig 7 (a) ndash digital dental X-ray image (b) - ROI mask for teeth with fillings (c) product of (a) and (b)

Digital Image Processing

Week 1

In practice most images are displayed using 8 bits the image values are in the range [0255] TIFF JPEG images ndash conversion to this range is automatic The conversion depends on the system used Difference of two images can produce image with values in the range [-255255] Addition of two images ndash range [0510] Many software packages simply set the negative values to 0 and set to 255 all values greater than 255 A more appropriate procedure compute

min( )mf f f

0 ( 255)max( )

ms

m

ff K K K

f

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Digital Image Processing

Week 1

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

( )

1( ) ( )xyr c S

g x y f r cm n

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation ( ) [( )]x y T v w

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image

Digital Image Processing

Week 1

[( )] ( )2 2v wT v w shrinks the original image half its size in both spatial directions

Affine transform

11 1211 21 31

21 2212 22 33

31 32

0[ 1] [ 1] [ 1] 0

1

t tx t v t w t

x y v w T v w t ty t v t w t

t t

(AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

Digital Image Processing

Week 1

inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image (v w) 1( ) ( )v w T x y

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so that it spans the full
intensity range of the recording tool or display device

Fig. 5 (a) – piecewise-linear transformation function; (b) – low-contrast 8-bit image; (c) – result of contrast stretching; (d) – result of thresholding

The transformation function is specified by two control points (r1, s1) and (r2, s2):

T(r) =
  (s1 / r1) · r,                                    r ∈ [0, r1]
  ((s2 − s1) / (r2 − r1)) · (r − r1) + s1,          r ∈ (r1, r2]
  ((L − 1 − s2) / (L − 1 − r2)) · (r − r2) + s2,    r ∈ (r2, L − 1]

In general, r1 ≤ r2 and s1 ≤ s2 are assumed, so that the function is single-valued and
monotonically increasing.
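The piecewise definition above translates directly into code; a sketch, assuming 0 < r1 < r2 < L − 1 (the parameter values in the example are illustrative):

import numpy as np

def contrast_stretch(f, r1, s1, r2, s2, L=256):
    # piecewise-linear mapping through (r1, s1) and (r2, s2)
    r = f.astype(np.float64)
    out = np.empty_like(r)
    lo, hi = r <= r1, r > r2
    mid = ~lo & ~hi
    out[lo] = (s1 / r1) * r[lo]
    out[mid] = (s2 - s1) / (r2 - r1) * (r[mid] - r1) + s1
    out[hi] = (L - 1 - s2) / (L - 1 - r2) * (r[hi] - r2) + s2
    return np.clip(out, 0, L - 1).astype(np.uint8)

# Stretch [rmin, rmax] = [90, 160] to the full range, as in Figure 5(c):
f = np.array([[90, 100], [150, 160]], dtype=np.uint8)
print(contrast_stretch(f, r1=90, s1=0, r2=160, s2=255))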

r1 = s1 and r2 = s2: identity transformation (no change)

r1 = r2, s1 = 0 and s2 = L − 1: thresholding function

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c) - contrast stretching, obtained by setting the parameters r1 = rmin, s1 = 0 and
r2 = rmax, s2 = L − 1, where rmin and rmax denote the minimum and maximum gray levels
in the image, respectively. Thus, the transformation function stretched the levels linearly
from their original range to the full range [0, L−1].

Figure 5(d) - the thresholding function was used, with r1 = r2 = m, s1 = 0 and
s2 = L − 1, where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope
image of pollen, magnified approximately 700 times.

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches to intensity-level slicing:

1. display in one value (white, for example) all the values in the range of interest, and in
another (say, black) all other intensities (Figure 3.11(a));

2. brighten (or darken) the desired range of intensities but leave unchanged all other
intensities in the image (Figure 3.11(b)).
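A compact sketch of the two approaches (A, B and the highlight value are illustrative parameters):

import numpy as np

def slice_binary(f, A, B, L=256):
    # approach 1: white inside [A, B], black elsewhere
    return np.where((f >= A) & (f <= B), L - 1, 0).astype(np.uint8)

def slice_preserve(f, A, B, value=255):
    # approach 2: highlight [A, B], leave the rest unchanged
    out = f.copy()
    out[(f >= A) & (f <= B)] = value
    return out

f = np.array([[10, 120], [180, 240]], dtype=np.uint8)
print(slice_binary(f, A=100, B=200))     # [[  0 255] [255   0]]
print(slice_preserve(f, A=100, B=200))   # [[ 10 255] [255 240]]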

Figure 6 (left) – aortic angiogram near the kidney. The purpose of intensity slicing is to
highlight the major blood vessels, which appear brighter as a result of injecting a contrast
medium. Figure 6 (middle) shows the result of applying the first technique for a band near
the top of the scale of intensities. This type of enhancement produces a binary image,
which is useful for studying the shape of the flow of the contrast substance (to detect
blockages, for example).

[Plots of the two slicing transformations: one highlights intensity range [A, B] and reduces
all other intensities to a lower level; the other highlights range [A, B] and preserves all
other intensities.]

In Figure 6 (right), the second technique was used: a band of intensities in the mid-gray
range around the mean intensity was set to black; the other intensities remain unchanged.

Fig. 6 - Aortic angiogram and intensity-sliced versions

Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2.

This technique highlights the contribution made to the whole image appearance by each
of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes
(plane 1 – the lowest-order bit, plane 8 – the highest-order bit).

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the
input image with a thresholding intensity transformation function that maps all the
intensities between 0 and 127 to 0 and all levels between 128 and 255 to 1. The bit-slicing
technique is useful for analyzing the relative importance of each bit in the image, which
helps in determining the proper number of bits to use when quantizing the image. The
technique is also useful for image compression.
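In code, bit planes are extracted with shifts and masks; for plane 8 this is equivalent to the thresholding just described. A minimal sketch:

import numpy as np

def bit_plane(f, k):
    # plane k of an 8-bit image; k = 1 is the lowest-order bit, k = 8 the highest
    return (f >> (k - 1)) & 1

f = np.array([[200, 100], [50, 130]], dtype=np.uint8)
print(bit_plane(f, 8))   # 1 exactly where intensity >= 128: [[1 0] [0 1]]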


The D4 distance (also called city-block distance) between p = (x, y) and q = (s, t) is defined as

D4(p, q) = |x − s| + |y − t|

The pixels q for which D4(p, q) ≤ r form a diamond centered at (x, y). For example, the pixels
with D4 ≤ 2 form the following contours of constant distance:

        2
    2   1   2
2   1   0   1   2
    2   1   2
        2

The pixels with D4 = 1 are the 4-neighbors of (x, y).

The D8 distance (called the chessboard distance) between p and q is defined as

D8(p, q) = max(|x − s|, |y − t|)

The pixels q for which D8(p, q) ≤ r form a square centered at (x, y). For example, the pixels
with D8 ≤ 2 form the following contours of constant distance:

2   2   2   2   2
2   1   1   1   2
2   1   0   1   2
2   1   1   1   2
2   2   2   2   2

The pixels with D8 = 1 are the 8-neighbors of (x, y).

D4 and D8 distances are independent of any paths that might exist between p and q,
because these distances involve only the coordinates of the points.

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis.
Consider, for example, the 2×2 images

[a11  a12]        [b11  b12]
[a21  a22]        [b21  b22]

Array product:

[a11  a12]   [b11  b12]   [a11 b11   a12 b12]
[a21  a22] . [b21  b22] = [a21 b21   a22 b22]

Matrix product:

[a11  a12] [b11  b12]   [a11 b11 + a12 b21   a11 b12 + a12 b22]
[a21  a22] [b21  b22] = [a21 b11 + a22 b21   a21 b12 + a22 b22]

We assume array operations unless stated otherwise.
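In NumPy, for example, the two products are written differently (a small illustration with concrete values):

import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])

print(a * b)   # array (element-wise) product: [[ 5 12] [21 32]]
print(a @ b)   # matrix product:               [[19 22] [43 50]]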

Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether a method
is linear or nonlinear. Consider a general operator H:

H[f(x, y)] = g(x, y)

H is said to be a linear operator if

H[a f1(x, y) + b f2(x, y)] = a H[f1(x, y)] + b H[f2(x, y)]

for any scalars a, b and any images f1, f2.

Example of a nonlinear operator: the maximum value of the pixels of an image,
H[f] = max{f(x, y)}. Take

f1 = [0  2]      f2 = [6  5]      a = 1,  b = −1
     [2  3]           [4  7]

Then

H[a f1 + b f2] = max{ f1 − f2 } = max{ [−6  −3] } = −2
                                       [−2  −4]

while

a H[f1] + b H[f2] = 1 · max{f1} + (−1) · max{f2} = 3 − 7 = −4

Since −2 ≠ −4, the max operator is nonlinear.
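The same check, numerically:

import numpy as np

f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 5], [4, 7]])
a, b = 1, -1

print(np.max(a * f1 + b * f2))           # -2
print(a * np.max(f1) + b * np.max(f2))   # -4, so max is not linear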

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise:

g(x, y) = f(x, y) + η(x, y)

where f(x, y) is the noiseless image and η(x, y) is the noise, assumed uncorrelated and with
0 average value.

For a random variable z with mean m, E[(z − m)²] is the variance (E(·) is the expected
value). The covariance of two random variables z1 and z2 is defined as E[(z1 − m1)(z2 − m2)].
The two random variables are uncorrelated when their covariance is 0.

Objective: reduce noise by adding (averaging) a set of noisy images gi(x, y), a technique
frequently used in image enhancement:

ḡ(x, y) = (1/K) Σ_{i=1}^{K} gi(x, y)

If the noise satisfies the properties stated above, then

E(ḡ(x, y)) = f(x, y)   and   σ²_ḡ(x, y) = (1/K) σ²_η(x, y)

where E(ḡ(x, y)) is the expected value of ḡ, and σ²_ḡ(x, y) and σ²_η(x, y) are the variances
of ḡ and η, respectively. The standard deviation (square root of the variance) at any point
in the average image is

σ_ḡ(x, y) = (1/√K) σ_η(x, y)
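A small simulation of this 1/√K reduction, using zero-mean Gaussian noise with σ = 64 (the value used in the astronomy example below); the image content and sizes are arbitrary:

import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 100.0)                        # noiseless image
K = 100
noisy = f + rng.normal(0, 64, size=(K, 64, 64))     # K noisy observations

g_bar = noisy.mean(axis=0)
print(noisy[0].std())   # close to 64 for a single noisy image
print(g_bar.std())      # close to 64 / sqrt(100) = 6.4 after averaging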

As K increases, the variability (as measured by the variance or the standard deviation) of
the pixel values at each location (x, y) decreases. Because E(ḡ(x, y)) = f(x, y), this
means that ḡ(x, y) approaches f(x, y) as the number of noisy images used in the
averaging process increases.

An important application of image averaging is in the field of astronomy, where imaging
under very low light levels frequently causes sensor noise to render single images
virtually useless for analysis. Figure 3(a) shows an 8-bit image in which corruption
was simulated by adding to it Gaussian noise with zero mean and a standard deviation of
64 intensity levels. Figures 3(b)-(f) show the result of averaging 5, 10, 20, 50, and 100
images, respectively.

Fig. 3 - Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner); results of averaging 5, 10, 20, 50, and 100 noisy images

A frequent application of image subtraction is in the enhancement of differences between
images.

Fig. 4 - (a) Infrared image of the Washington, D.C. area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in
Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between
images (a) and (b). Black (0) values in Figure 4(c) indicate locations where there is no
difference between images (a) and (b).

Mask mode radiography:

g(x, y) = f(x, y) − h(x, y)

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an
intensified TV camera (instead of traditional X-ray film) located opposite an X-ray
source. The procedure consists of injecting an X-ray contrast medium into the patient's
bloodstream, taking a series of images, called live images (denoted f(x, y)), of the same
anatomical region as h(x, y), and subtracting the mask from the series of incoming live
images after injection of the contrast medium.

In g(x, y) we can find the differences between h and f, as enhanced detail.

Since the images are captured at TV rates, we obtain a movie showing how the contrast
medium propagates through the various arteries in the area being observed.

Fig. 5 - Angiography subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c), enhanced

An important application of image multiplication (and division) is shading correction.
Suppose that an imaging sensor produces images in the form

g(x, y) = f(x, y) h(x, y)

where f(x, y) is the "perfect image" and h(x, y) is the shading function.

When the shading function is known:

f(x, y) = g(x, y) / h(x, y)

When h(x, y) is unknown but we have access to the imaging system, we can obtain an
approximation to the shading function by imaging a target of constant intensity. When the
sensor is not available, the shading pattern can often be estimated from the image itself.

Fig. 6 - Shading correction: (a) shaded image of a tungsten filament, magnified approximately 130 times; (b) shading pattern; (c) corrected image
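A sketch of the division step, with a small guard against division by zero (the simulated ramp is illustrative):

import numpy as np

def correct_shading(g, h, eps=1e-6):
    # f = g / h, for a known or estimated shading pattern h
    return g / (h + eps)

# Simulated example: a flat scene multiplied by a horizontal shading ramp
f = np.full((4, 4), 100.0)
h = np.linspace(0.5, 1.0, 4)[None, :] * np.ones((4, 1))
g = f * h
print(np.round(correct_shading(g, h)))   # approximately 100 everywhere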

Another use of image multiplication is in masking, also called region of interest (ROI),
operations. The process consists of multiplying a given image by a mask image that has
1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask
image, and the shape of an ROI can be arbitrary, although usually it is rectangular.

Fig. 7 - (a) digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b)
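Masking is a single element-wise product; for example:

import numpy as np

image = np.array([[10, 20], [30, 40]], dtype=np.uint8)
mask = np.array([[1, 0], [0, 1]], dtype=np.uint8)   # 1 inside the ROI, 0 elsewhere
print(image * mask)   # [[10  0] [ 0 40]]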

In practice, most images are displayed using 8 bits, so the image values are expected to be
in the range [0, 255]. For TIFF or JPEG images, conversion to this range is automatic, and
the conversion depends on the system used. The difference of two images can produce an
image with values in the range [−255, 255], and the addition of two images, values in the
range [0, 510]. Many software packages simply set the negative values to 0 and set to 255
all values greater than 255. A more appropriate procedure is to compute

fm = f − min(f)

fs = K · fm / max(fm)

which yields an image fs whose values are in the range [0, K] (K = 255 for 8-bit images).
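A sketch of this normalization (the input models an image difference):

import numpy as np

def to_range(f, K=255):
    # shift to zero, then scale into [0, K]
    fm = (f - f.min()).astype(np.float64)
    return (K * fm / fm.max()).astype(np.uint8)

diff = np.array([[-255, 0], [100, 255]], dtype=np.int16)   # e.g. a difference image
print(to_range(diff))   # [[  0 127] [177 255]]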

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations:

- single-pixel operations
- neighborhood operations
- geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels:

s = T(z)

where z is the intensity of a pixel in the original image and s is the intensity of the
corresponding pixel in the processed image.

Neighborhood operations

Let Sxy denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y)
in an image f. Neighborhood processing generates a new intensity level at point (x, y)
based on the values of the intensities of the points in Sxy. For example, if Sxy is a
rectangular neighborhood of size m × n centered at (x, y), we can assign the new value of
intensity by computing the average value of the pixels in Sxy:

g(x, y) = (1 / (m n)) Σ_{(r,c) ∈ Sxy} f(r, c)

The net effect is to perform local blurring in the original image. This type of process is
used, for example, to eliminate small details and thus render "blobs" corresponding to the
largest regions of an image.
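A deliberately naive sketch of this local averaging (edges are cropped for simplicity; real implementations use convolution, e.g. scipy.ndimage.uniform_filter):

import numpy as np

def local_mean(f, m, n):
    # replace each pixel by the mean of its m x n neighborhood
    M, N = f.shape
    g = np.zeros((M - m + 1, N - n + 1))
    for x in range(g.shape[0]):
        for y in range(g.shape[1]):
            g[x, y] = f[x:x + m, y:y + n].mean()
    return g

f = np.arange(25, dtype=np.float64).reshape(5, 5)
print(local_mean(f, 3, 3))   # 3x3 local averages of the 5x5 ramp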

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image
- these transformations are often called rubber-sheet transformations (analogous to
printing an image on a sheet of rubber and then stretching the sheet according to a
predefined set of rules)

A geometric transformation consists of 2 basic operations:

1. a spatial transformation of coordinates;
2. intensity interpolation, which assigns intensity values to the spatially transformed pixels.

The coordinate system transformation:

(x, y) = T[(v, w)]

(v, w) – pixel coordinates in the original image
(x, y) – pixel coordinates in the transformed image

T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform:

[x  y  1] = [v  w  1] T,   where

    [t11  t12  0]
T = [t21  t22  0]
    [t31  t32  1]

so that

x = t11 v + t21 w + t31
y = t12 v + t22 w + t32      (AT)

This transform can scale, rotate, translate, or shear a set of coordinate points, depending
on the elements of the matrix T. If we want to resize an image, rotate it, and move the
result to some location, we simply form a 3×3 matrix equal to the matrix product of the
scaling, rotation, and translation matrices from Table 1.

Table 1 – Affine transformations (the matrices T for identity, scaling, rotation, translation, and shearing)
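A sketch of composing such matrices under the row-vector convention of (AT); the sign placement in the rotation matrix depends on the orientation convention, and ours is one common choice:

import numpy as np

def scale(cx, cy):
    return np.array([[cx, 0, 0], [0, cy, 0], [0, 0, 1.0]])

def rotate(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1.0]])

def translate(tx, ty):
    return np.array([[1, 0, 0], [0, 1, 0], [tx, ty, 1.0]])

# Resize, then rotate, then move, as described above:
T = scale(2, 2) @ rotate(np.pi / 4) @ translate(10, 20)
print(np.array([5, 5, 1.0]) @ T)   # transformed homogeneous coordinates [x, y, 1]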

The preceding transformations relocate pixels on an image to new locations. To complete
the process, we have to assign intensity values to those locations. This task is done
using intensity interpolation (such as nearest-neighbor, bilinear, or bi-cubic interpolation).

In practice, we can use equation (AT) in two basic ways:

forward mapping: scan the pixels of the input image (v, w) and compute the new spatial
location (x, y) of the corresponding pixel in the new image, using (AT) directly.

Problems:
- intensity assignment when 2 or more pixels in the original image are transformed to
the same location in the output image;
- some output locations have no correspondent in the original image (no intensity
assignment).

inverse mapping: scan the output pixel locations and, at each location (x, y), compute the
corresponding location in the input image:

(v, w) = T⁻¹[(x, y)]

Then interpolate among the nearest input pixels to determine the intensity of the output
pixel value.

Inverse mappings are more efficient to implement than forward mappings and are used in
numerous commercial implementations of spatial transformations (MATLAB, for example).
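A naive nearest-neighbor sketch of inverse mapping (optimized library routines such as scipy.ndimage.affine_transform would normally be used instead):

import numpy as np

def warp_inverse(f, T):
    # apply [x y 1] = [v w 1] T by scanning the output and mapping back with T^-1
    Tinv = np.linalg.inv(T)
    g = np.zeros_like(f)
    M, N = f.shape
    for x in range(M):
        for y in range(N):
            v, w, _ = np.array([x, y, 1.0]) @ Tinv   # back to input coordinates
            v, w = int(round(v)), int(round(w))      # nearest-neighbor interpolation
            if 0 <= v < M and 0 <= w < N:
                g[x, y] = f[v, w]
    return g

f = np.arange(16, dtype=np.uint8).reshape(4, 4)
T = np.array([[1, 0, 0], [0, 1, 0], [1, 1, 1.0]])   # translation by (1, 1)
print(warp_inverse(f, T))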

Image registration – align two or more images of the same scene.

In image registration, we have available the input and output images, but the specific
transformation that produced the output image from the input is generally unknown.
The problem is to estimate the transformation function and then use it to register the two
images.

- it may be of interest to align (register) two or more images taken at approximately the
same time but using different imaging systems (an MRI scanner and a PET scanner, for example);
- or to align images of a given location taken by the same instrument at different moments
in time (satellite images).

The problem can be solved using tie points (also called control points), which are
corresponding points whose locations are known precisely in the input and reference
images.

How to select tie points?

- by selecting them interactively;
- by using algorithms that try to detect these points automatically;
- some imaging systems have physical artifacts (small metallic objects) embedded in
the imaging sensors. These objects produce a set of known points (called reseau
marks) directly on all images captured by the system, which can be used as guides
for establishing tie points.

The problem of estimating the transformation is one of modeling. Suppose we have a set
of 4 tie points both on the input image and on the reference image. A simple model based
on a bilinear approximation is given by

x = c1 v + c2 w + c3 v w + c4
y = c5 v + c6 w + c7 v w + c8

where (v, w) and (x, y) are the coordinates of the tie points (we get an 8×8 linear system
for the ci).
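A sketch of estimating the coefficients from 4 tie points; the coordinates below are invented for illustration, and the 8×8 system splits into two 4×4 systems, one for the x equation and one for the y equation:

import numpy as np

# Tie points in the input image (v, w) and the reference image (x, y):
vw = np.array([[0, 0], [0, 10], [10, 0], [10, 10]], dtype=float)
xy = np.array([[1, 2], [2, 13], [12, 1], [14, 13]], dtype=float)

# Each tie point contributes one row [v, w, v*w, 1]:
A = np.column_stack([vw[:, 0], vw[:, 1], vw[:, 0] * vw[:, 1], np.ones(4)])
c_x = np.linalg.solve(A, xy[:, 0])   # c1..c4
c_y = np.linalg.solve(A, xy[:, 1])   # c5..c8
print(c_x, c_y)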

When 4 tie points are insufficient to obtain satisfactory registration, an approach used
frequently is to select a larger number of tie points and, using this new set, to
subdivide the image into rectangular regions marked by groups of 4 tie points. On each
subregion marked by 4 tie points, we apply the transformation model described above.
The number of tie points and the sophistication of the model required to solve the
registration problem depend on the severity of the geometrical distortion.

(a) – reference image; (b) – geometrically distorted image; (c) – registered image; (d) – difference between (a) and (c)

Probabilistic Methods

zi, i = 0, 1, …, L−1 – the values of all possible intensities in an M×N digital image

p(zk) – the probability that the intensity level zk occurs in the given image:

p(zk) = nk / (M N)

where nk is the number of times that intensity zk occurs in the image (MN is the total
number of pixels in the image). Note that

Σ_{k=0}^{L−1} p(zk) = 1

The mean (average) intensity of an image is given by

m = Σ_{k=0}^{L−1} zk p(zk)

The variance of the intensities is

σ² = Σ_{k=0}^{L−1} (zk − m)² p(zk)

The variance is a measure of the spread of the values of z about the mean, so it is a
measure of image contrast. Usually, for measuring image contrast, the standard deviation
(σ) is used.

The n-th moment of a random variable z about the mean is defined as

μn(z) = Σ_{k=0}^{L−1} (zk − m)ⁿ p(zk)

(so μ0(z) = 1, μ1(z) = 0, and μ2(z) = σ²)

μ3(z) > 0: the intensities are biased to values higher than the mean;
μ3(z) < 0: the intensities are biased to values lower than the mean;
μ3(z) = 0: the intensities are distributed approximately equally on both sides of the mean.
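A short sketch computing these statistics from the intensity histogram (the test image is random, for illustration):

import numpy as np

def intensity_stats(f, L=256):
    # p(z_k) = n_k / (M N), then mean, variance and third moment about the mean
    nk = np.bincount(f.ravel(), minlength=L)
    p = nk / f.size
    z = np.arange(L)
    m = np.sum(z * p)
    mu2 = np.sum((z - m) ** 2 * p)   # variance
    mu3 = np.sum((z - m) ** 3 * p)   # sign indicates the skew of the intensities
    return m, mu2, mu3

f = np.random.default_rng(0).integers(0, 256, size=(64, 64), dtype=np.uint8)
print(intensity_stats(f))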

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 90: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image Processing

Week 1

8

2 2 2 2 22 1 1 1 2

2 2 1 0 1 22 1 1 1 22 2 2 2 2

D

The pixels with D8 = 1 are the 8-neighbors of (x y)

D4 and D8 distances are independent of any paths that might exist between p and q

because these distances involve only the coordinates of the point

Digital Image Processing

Week 1

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

11 12 11 12

21 22 21 22

a a b ba a b b

Array product

11 12 11 12 11 11 12 12

21 22 21 22 21 21 22 21

a a b b a b a ba a b b a b a b

Matrix product

11 12 11 12 11 11 12 21 11 12 12 21

21 22 21 22 21 11 22 21 21 12 22 22

a a b b a b a b a b a ba a b b a b a b a b a b

We assume array operations unless stated otherwise

Digital Image Processing

Week 1

Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether it is

linear or nonlinear

( ) ( )H f x y g x y

H is said to be a linear operator if

images1 2 1 2

1 2

( ) ( ) ( ) ( )

H a f x y b f x y a H f x y b H f x y

a b f f

Example of nonlinear operator

the maximum value of the pixels of image max ( )H f f x y f

1 2

0 2 6 5 1 1

2 3 4 7f f a b

Digital Image Processing

Week 1

1 2

0 2 6 5 6 3max max 1 ( 1) max 2

2 3 4 7 2 4a f b f

0 2 6 51 max ( 1) max 3 ( 1)7 4

2 3 4 7

Arithmetic Operations in Image Processing

Let g(xy) denote a corrupted image formed by the addition of noise ( ) ( ) ( )g x y f x y x y

f(xy) ndash the noiseless image η(xy) the noise uncorrelated and has 0 average value

For a random variable z with mean m E[(z-m)2] is the variance (E( ∙ ) is the expected

value) The covariance of two random variables z1 and z2 is defined as E[(z1-m1) (z2-m2)]

The two random variables are uncorrelated when their covariance is 0

Digital Image Processing

Week 1

Objective reduce noise by adding a set of noisy images ( )ig x y (technique frequently

used in image enhancement)

1

1( ) ( )K

ii

g x y g x yK

If the noise satisfies the properties stated above we have

2 2( ) ( )

1( ) ( ) g x y x yE g x y f x yK

( ( ))E g x y is the expected value of g and and 2 2( ) ( )g x y x y are the variances of

and g respectively The standard deviation (square root of the variance) at any point in

the average image is

( ) ( )1

g x y x yK

Digital Image Processing

Week 1

As K increases the variability (as measured by the variance or the standard deviation) of

the pixel values at each location (x y) decreases Because ( ) ( )E g x y f x y this

means that ( )g x y approaches f(x y) as the number of noisy images used in the

averaging process increases

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis Figure 226(a) shows an 8-bit image in which corruption

was simulated by adding to it Gaussian noise with zero mean and a standard deviation of

64 intensity levels Figures 226(b)-(f) show the result of averaging 5 10 20 50 and 100

images respectively

Digital Image Processing

Week 1

Fig 3 Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner) Results of averaging 5 10 20 50

100 noisy images

a b c d e f

Digital Image Processing

Week 1

A frequent application of image subtraction is in the enhancement of differences between

images

(a) (b) (c)

Fig 4 (a) Infrared image of Washington DC area (b) Image obtained from (a) by setting to zero the least

significant bit of each pixel (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a) The two images seem almost the same Figure 4(c) is the difference between

images (a) and (b) Black (0) values in Figure (c) indicate locations where there is no

difference between images (a) and (b)

Digital Image Processing

Week 1

Mask mode radiography ( ) ( ) ( )g x y f x y h x y

h(x y) the mask is an X-ray image of a region of a patientrsquos body captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source The procedure consists of injecting an X-ray contrast medium into the patientrsquos

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed

Digital Image Processing

Week 1

a b c d Fig 5 ndash Angiography ndash subtraction example (a) ndash mask image (b) ndash live image (c) ndash difference between (a) and (b) (d) - image (c) enhanced

Digital Image Processing

Week 1

An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form ( ) ( ) ( )g x y f x y h x y

f(x y) ndash the ldquoperfect imagerdquo h(x y) ndash the shading function

When the shading function is known

( )( )( )

g x yf x yh x y

h(x y) is unknown but we have access to the imaging system we can obtain an

approximation to the shading function by imaging a target of constant intensity When the

sensor is not available often the shading pattern can be estimated from the image

Digital Image Processing

Week 1

(a) (b) (c)

Fig 6 Shading correction (a) ndash Shaded image of a tungsten filament magnified 130 times (b) - shading pattern (c) corrected image

Digital Image Processing

Week 1

Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1rsquos (white) in the ROI and 0rsquos elsewhere There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

(a) (b) (c)

Fig 7 (a) ndash digital dental X-ray image (b) - ROI mask for teeth with fillings (c) product of (a) and (b)

Digital Image Processing

Week 1

In practice most images are displayed using 8 bits the image values are in the range [0255] TIFF JPEG images ndash conversion to this range is automatic The conversion depends on the system used Difference of two images can produce image with values in the range [-255255] Addition of two images ndash range [0510] Many software packages simply set the negative values to 0 and set to 255 all values greater than 255 A more appropriate procedure compute

min( )mf f f

0 ( 255)max( )

ms

m

ff K K K

f

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Digital Image Processing

Week 1

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

( )

1( ) ( )xyr c S

g x y f r cm n

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation ( ) [( )]x y T v w

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image

Digital Image Processing

Week 1

[( )] ( )2 2v wT v w shrinks the original image half its size in both spatial directions

Affine transform

11 1211 21 31

21 2212 22 33

31 32

0[ 1] [ 1] [ 1] 0

1

t tx t v t w t

x y v w T v w t ty t v t w t

t t

(AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

Digital Image Processing

Week 1

inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image (v w) 1( ) ( )v w T x y

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 91: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image Processing

Week 1

Array versus Matrix Operations

An array operation involving one or more images is carried out on a pixel-by-pixel basis

11 12 11 12

21 22 21 22

a a b ba a b b

Array product

11 12 11 12 11 11 12 12

21 22 21 22 21 21 22 21

a a b b a b a ba a b b a b a b

Matrix product

11 12 11 12 11 11 12 21 11 12 12 21

21 22 21 22 21 11 22 21 21 12 22 22

a a b b a b a b a b a ba a b b a b a b a b a b

We assume array operations unless stated otherwise

Digital Image Processing

Week 1

Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether it is

linear or nonlinear

( ) ( )H f x y g x y

H is said to be a linear operator if

images1 2 1 2

1 2

( ) ( ) ( ) ( )

H a f x y b f x y a H f x y b H f x y

a b f f

Example of nonlinear operator

the maximum value of the pixels of image max ( )H f f x y f

1 2

0 2 6 5 1 1

2 3 4 7f f a b

Digital Image Processing

Week 1

1 2

0 2 6 5 6 3max max 1 ( 1) max 2

2 3 4 7 2 4a f b f

0 2 6 51 max ( 1) max 3 ( 1)7 4

2 3 4 7

Arithmetic Operations in Image Processing

Let g(xy) denote a corrupted image formed by the addition of noise ( ) ( ) ( )g x y f x y x y

f(xy) ndash the noiseless image η(xy) the noise uncorrelated and has 0 average value

For a random variable z with mean m E[(z-m)2] is the variance (E( ∙ ) is the expected

value) The covariance of two random variables z1 and z2 is defined as E[(z1-m1) (z2-m2)]

The two random variables are uncorrelated when their covariance is 0

Digital Image Processing

Week 1

Objective reduce noise by adding a set of noisy images ( )ig x y (technique frequently

used in image enhancement)

1

1( ) ( )K

ii

g x y g x yK

If the noise satisfies the properties stated above we have

2 2( ) ( )

1( ) ( ) g x y x yE g x y f x yK

( ( ))E g x y is the expected value of g and and 2 2( ) ( )g x y x y are the variances of

and g respectively The standard deviation (square root of the variance) at any point in

the average image is

( ) ( )1

g x y x yK

Digital Image Processing

Week 1

As K increases the variability (as measured by the variance or the standard deviation) of

the pixel values at each location (x y) decreases Because ( ) ( )E g x y f x y this

means that ( )g x y approaches f(x y) as the number of noisy images used in the

averaging process increases

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis Figure 226(a) shows an 8-bit image in which corruption

was simulated by adding to it Gaussian noise with zero mean and a standard deviation of

64 intensity levels Figures 226(b)-(f) show the result of averaging 5 10 20 50 and 100

images respectively

Digital Image Processing

Week 1

Fig 3 Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner) Results of averaging 5 10 20 50

100 noisy images

a b c d e f

Digital Image Processing

Week 1

A frequent application of image subtraction is in the enhancement of differences between

images

(a) (b) (c)

Fig 4 (a) Infrared image of Washington DC area (b) Image obtained from (a) by setting to zero the least

significant bit of each pixel (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a) The two images seem almost the same Figure 4(c) is the difference between

images (a) and (b) Black (0) values in Figure (c) indicate locations where there is no

difference between images (a) and (b)

Digital Image Processing

Week 1

Mask mode radiography ( ) ( ) ( )g x y f x y h x y

h(x y) the mask is an X-ray image of a region of a patientrsquos body captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source The procedure consists of injecting an X-ray contrast medium into the patientrsquos

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed

Digital Image Processing

Week 1

a b c d Fig 5 ndash Angiography ndash subtraction example (a) ndash mask image (b) ndash live image (c) ndash difference between (a) and (b) (d) - image (c) enhanced

Digital Image Processing

Week 1

An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form ( ) ( ) ( )g x y f x y h x y

f(x y) ndash the ldquoperfect imagerdquo h(x y) ndash the shading function

When the shading function is known

( )( )( )

g x yf x yh x y

h(x y) is unknown but we have access to the imaging system we can obtain an

approximation to the shading function by imaging a target of constant intensity When the

sensor is not available often the shading pattern can be estimated from the image

Digital Image Processing

Week 1

(a) (b) (c)

Fig 6 Shading correction (a) ndash Shaded image of a tungsten filament magnified 130 times (b) - shading pattern (c) corrected image

Digital Image Processing

Week 1

Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1rsquos (white) in the ROI and 0rsquos elsewhere There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

(a) (b) (c)

Fig 7 (a) ndash digital dental X-ray image (b) - ROI mask for teeth with fillings (c) product of (a) and (b)

Digital Image Processing

Week 1

In practice most images are displayed using 8 bits the image values are in the range [0255] TIFF JPEG images ndash conversion to this range is automatic The conversion depends on the system used Difference of two images can produce image with values in the range [-255255] Addition of two images ndash range [0510] Many software packages simply set the negative values to 0 and set to 255 all values greater than 255 A more appropriate procedure compute

min( )mf f f

0 ( 255)max( )

ms

m

ff K K K

f

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Digital Image Processing

Week 1

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

( )

1( ) ( )xyr c S

g x y f r cm n

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation ( ) [( )]x y T v w

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image

Digital Image Processing

Week 1

[( )] ( )2 2v wT v w shrinks the original image half its size in both spatial directions

Affine transform

11 1211 21 31

21 2212 22 33

31 32

0[ 1] [ 1] [ 1] 0

1

t tx t v t w t

x y v w T v w t ty t v t w t

t t

(AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

Digital Image Processing

Week 1

inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image (v w) 1( ) ( )v w T x y

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more images taken at approximately the same time but with different imaging systems (e.g., an MRI scanner and a PET scanner)
- or to align images of a given location taken by the same instrument at different moments in time (satellite images)

One approach to solving the problem uses tie points (also called control points), which are corresponding points whose locations are known precisely in the input and reference images.


How to select tie points:

- by selecting them interactively
- by using algorithms that try to detect these points automatically
- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors; these objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points both in the input image and in the reference image. A simple model based on a bilinear approximation is given by

$$\begin{aligned} x &= c_1 v + c_2 w + c_3 v w + c_4 \\ y &= c_5 v + c_6 w + c_7 v w + c_8 \end{aligned}$$

where (v, w) and (x, y) are the coordinates of corresponding tie points (the 4 pairs yield an 8×8 linear system for the coefficients ci).
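For illustration, the coefficients can be obtained with a linear solver; splitting the 8×8 system into two 4×4 systems (one for the x equation, one for the y equation) is equivalent. The tie-point coordinates below are made up:

```python
import numpy as np

# Four tie points in the input image (v, w) and in the reference image (x, y).
vw = np.array([(10, 10), (10, 200), (200, 10), (200, 200)], dtype=float)
xy = np.array([(12, 15), (14, 210), (205, 8), (212, 207)], dtype=float)

# Each tie point contributes one row [v, w, v*w, 1] to the design matrix.
A = np.column_stack([vw[:, 0], vw[:, 1], vw[:, 0] * vw[:, 1], np.ones(4)])
c1_to_c4 = np.linalg.solve(A, xy[:, 0])  # x = c1*v + c2*w + c3*v*w + c4
c5_to_c8 = np.linalg.solve(A, xy[:, 1])  # y = c5*v + c6*w + c7*v*w + c8
```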


When 4 tie points are insufficient to obtain satisfactory registration, a frequently used approach is to select a larger number of tie points and use this new set to subdivide the image into rectangular subregions, each marked by a group of 4 tie points. To each subregion we apply the transformation model described above. The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometric distortion.

(a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c)

Probabilistic Methods

zk = the values of all possible intensities in an M×N digital image, k = 0, 1, …, L-1

p(zk) = the probability that the intensity level zk occurs in the given image:

$$p(z_k) = \frac{n_k}{MN}$$

nk = the number of times that intensity zk occurs in the image (MN is the total number of pixels in the image), and

$$\sum_{k=0}^{L-1} p(z_k) = 1$$

The mean (average) intensity of an image is given by

$$m = \sum_{k=0}^{L-1} z_k \, p(z_k)$$


The variance of the intensities is

$$\sigma^2 = \sum_{k=0}^{L-1} (z_k - m)^2 \, p(z_k)$$

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

$$\mu_n(z) = \sum_{k=0}^{L-1} (z_k - m)^n \, p(z_k)$$

(so $\mu_0(z) = 1$, $\mu_1(z) = 0$, and $\mu_2(z) = \sigma^2$)

$\mu_3(z) > 0$: the intensities are biased to values higher than the mean
$\mu_3(z) < 0$: the intensities are biased to values lower than the mean


$\mu_3(z) \approx 0$: the intensities are distributed approximately equally on both sides of the mean

Fig 1. (a) Low contrast; (b) medium contrast; (c) high contrast.

Figure 1(a) – standard deviation 14.3 (variance = 204.5)
Figure 1(b) – standard deviation 31.6 (variance = 998.6)
Figure 1(c) – standard deviation 49.2 (variance = 2420.6)
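A short sketch computing these histogram-based statistics for an 8-bit image (the function name is an assumption):

```python
import numpy as np

def intensity_stats(img, L=256):
    """Mean, variance and third central moment from the normalized histogram
    of an image with integer intensities in [0, L-1]."""
    z = np.arange(L)
    n = np.bincount(img.ravel(), minlength=L)
    p = n / img.size                      # p(z_k) = n_k / (M*N), sums to 1
    m = np.sum(z * p)                     # mean intensity
    var = np.sum((z - m) ** 2 * p)        # variance: a contrast measure
    mu3 = np.sum((z - m) ** 3 * p)        # sign shows bias about the mean
    return m, var, mu3
```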


Intensity Transformations and Spatial Filtering

$$g(x, y) = T[f(x, y)]$$

f(x, y) – input image; g(x, y) – output image; T – an operator on f defined over a neighborhood of (x, y)

- the neighborhood of the point (x, y), Sxy, usually is rectangular, centered on (x, y), and much smaller in size than the image


- spatial filtering: the operator T (the neighborhood together with the operation applied on it) is called a spatial filter (also: spatial mask, kernel, template, or window)

When Sxy consists of the single point (x, y), T becomes an intensity (gray-level or mapping) transformation function

$$s = T(r)$$

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2, left: T produces an output image of higher contrast than the original by darkening the intensity levels below k and brightening the levels above k – this technique is called contrast stretching.

Fig 2. Intensity transformation functions: left – contrast stretching; right – thresholding function.


Figure 2, right: T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L-1] is obtained using the function

$$s = T(r) = L - 1 - r$$

- the equivalent of a photographic negative
- a technique suited for enhancing white or gray detail embedded in dark regions of an image

[Figure: original image (left) and its negative (right)]

Log Transformations:

$$s = T(r) = c \log(1 + r), \qquad c \text{ a positive constant}, \; r \ge 0$$

[Figure: some basic intensity transformation functions]

This transformation maps a narrow range of low intensity values in the input into a wider range of output values. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values; the opposite is true of the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a) – intensity values in the range 0 to 1.5 × 10^6
Figure 4(b) – log transformation of Figure 4(a) with c = 1 – range 0 to 6.2

Fig 4. (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.

Power-Law (Gamma) Transformations

$$s = T(r) = c\,r^{\gamma}, \qquad c, \gamma \text{ positive constants}$$

Plots of gamma transformation for different values of γ (c=1)


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. The curves with γ > 1 have the opposite effect of those generated with γ < 1; c = γ = 1 gives the identity transformation.

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
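Hedged sketches of the three point transformations discussed so far; normalizing r to [0, 1] before applying the power law and rescaling back to [0, L-1] is a common implementation choice, not something stated in the slides:

```python
import numpy as np

def negative(r, L=256):
    return (L - 1) - r                       # s = L - 1 - r

def log_transform(r, c=1.0):
    return c * np.log1p(r.astype(float))     # s = c * log(1 + r)

def gamma_transform(r, c=1.0, gamma=0.5, L=256):
    r_norm = r.astype(float) / (L - 1)       # work on [0, 1]
    return c * (L - 1) * r_norm ** gamma     # s = c * r^gamma, rescaled
```

With gamma < 1 dark regions are expanded (image brightens); with gamma > 1 they are compressed (image darkens).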

(a) aerial image; (b)–(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively

Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so that it spans the full intensity range of the recording tool or display device

Fig 5. (a) piecewise-linear transformation function; (b) low-contrast image; (c) result of contrast stretching; (d) result of thresholding.

$$s = T(r) = \begin{cases} \dfrac{s_1}{r_1}\,r, & r \in [0, r_1] \\[6pt] s_1 + \dfrac{s_2 - s_1}{r_2 - r_1}\,(r - r_1), & r \in (r_1, r_2] \\[6pt] s_2 + \dfrac{(L-1) - s_2}{(L-1) - r_2}\,(r - r_2), & r \in (r_2, L-1] \end{cases}$$


r1 = s1 and r2 = s2: identity transformation (no change)
r1 = r2, s1 = 0 and s2 = L-1: thresholding function

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c) – contrast stretching, obtained by setting the parameters r1 = rmin, s1 = 0, r2 = rmax, s2 = L-1, where rmin and rmax denote the minimum and maximum gray levels in the image, respectively. Thus, the transformation function stretched the levels linearly from their original range to the full range [0, L-1].

Figure 5(d) – the thresholding function was used, with r1 = r2 = m, s1 = 0, s2 = L-1, where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.
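A compact sketch of the piecewise-linear stretch, assuming 0 < r1 < r2 < L-1 so the breakpoints are strictly increasing (np.interp requires this); the degenerate thresholding case r1 = r2 is better implemented directly with np.where:

```python
import numpy as np

def contrast_stretch(img, r1, s1, r2, s2, L=256):
    """Piecewise-linear mapping through (0, 0), (r1, s1), (r2, s2), (L-1, L-1)."""
    xp = [0, r1, r2, L - 1]
    fp = [0, s1, s2, L - 1]
    return np.interp(img.astype(float), xp, fp)

# Full-range stretch: r1 = img.min(), s1 = 0, r2 = img.max(), s2 = L - 1
# Thresholding at the mean m (the degenerate case r1 = r2 = m):
#   np.where(img > img.mean(), L - 1, 0)
```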


Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches to intensity-level slicing (see the sketch after this list):

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities (Figure 3.11(a))
2. brighten (or darken) the desired range of intensities but leave unchanged all other intensities in the image (Figure 3.11(b))
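A minimal sketch of both approaches; the additive boost used to brighten the range of interest in the second function is an assumption (the slides only say brighten or darken):

```python
import numpy as np

def slice_binary(img, A, B, L=256):
    """Approach 1: white inside [A, B], black elsewhere (binary output)."""
    return np.where((img >= A) & (img <= B), L - 1, 0)

def slice_preserve(img, A, B, boost=50, L=256):
    """Approach 2: brighten the range of interest, leave the rest unchanged."""
    out = img.astype(int) + np.where((img >= A) & (img <= B), boost, 0)
    return np.clip(out, 0, L - 1)
```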


Figure 6 (left) – aortic angiogram near the kidney. The purpose of intensity slicing here is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique to a band near the top of the scale of intensities. This type of enhancement produces a binary image, which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example).

In Figure 6 (right) the second technique was used: a band of intensities in the mid-gray range around the mean intensity was set to black; the other intensities remain unchanged.

[Transformation curves: one highlights range [A, B] and reduces all other intensities to a lower level; the other highlights range [A, B] and preserves all other intensities.]

Fig 6. Aortic angiogram and intensity-sliced versions.


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2. This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit, plane 8 – the highest-order bit).

The binary image for the 8th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all intensities between 0 and 127 to 0 and all levels between 128 and 255 to 1.

The bit-slicing technique is useful for analyzing the relative importance of each bit in the image – it helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
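A one-line sketch of bit-plane extraction; note that the text numbers the planes 1 to 8 while the shift amount k below runs from 0 to 7:

```python
import numpy as np

def bit_plane(img, k):
    """Extract bit plane k of an 8-bit image (k = 0 lowest, k = 7 highest)."""
    return (img >> k) & 1

# Plane 8 of the text (k = 7) equals thresholding at 128:
# ((img >> 7) & 1) agrees with (img >= 128) for uint8 images.
```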


Linear versus Nonlinear Operations

One of the most important classifications of image-processing methods is whether a method is linear or nonlinear. Consider a general operator H that produces an output image g(x, y) from an input image f(x, y):

$$H[f(x, y)] = g(x, y)$$

H is said to be a linear operator if

$$H[a f_1(x, y) + b f_2(x, y)] = a\,H[f_1(x, y)] + b\,H[f_2(x, y)]$$

for all scalars a, b and all images f1, f2.

Example of a nonlinear operator: the maximum value of the pixels of an image, H[f] = max{f(x, y)}. Take

$$f_1 = \begin{bmatrix} 0 & 2 \\ 2 & 3 \end{bmatrix}, \quad f_2 = \begin{bmatrix} 6 & 5 \\ 4 & 7 \end{bmatrix}, \quad a = 1, \; b = -1$$

Then

$$\max\left\{ 1 \begin{bmatrix} 0 & 2 \\ 2 & 3 \end{bmatrix} + (-1) \begin{bmatrix} 6 & 5 \\ 4 & 7 \end{bmatrix} \right\} = \max \begin{bmatrix} -6 & -3 \\ -2 & -4 \end{bmatrix} = -2,$$

while

$$1 \cdot \max \begin{bmatrix} 0 & 2 \\ 2 & 3 \end{bmatrix} + (-1) \cdot \max \begin{bmatrix} 6 & 5 \\ 4 & 7 \end{bmatrix} = 3 - 7 = -4 \ne -2,$$

so the max operator is not linear.
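The same check in NumPy, using the matrices above:

```python
import numpy as np

f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 5], [4, 7]])
a, b = 1, -1

lhs = np.max(a * f1 + b * f2)            # H[a f1 + b f2] = -2
rhs = a * np.max(f1) + b * np.max(f2)    # a H[f1] + b H[f2] = -4
print(lhs, rhs, lhs == rhs)              # -2 -4 False -> max is not linear
```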

Arithmetic Operations in Image Processing

Let g(x, y) denote a corrupted image formed by the addition of noise:

$$g(x, y) = f(x, y) + \eta(x, y)$$

f(x, y) – the noiseless image; η(x, y) – the noise, assumed uncorrelated and with zero average value.

For a random variable z with mean m, E[(z - m)²] is the variance (E(·) is the expected value). The covariance of two random variables z1 and z2 is defined as E[(z1 - m1)(z2 - m2)]. The two random variables are uncorrelated when their covariance is 0.


Objective: reduce the noise by averaging a set of noisy images g_i(x, y) (a technique frequently used in image enhancement):

$$\bar{g}(x, y) = \frac{1}{K} \sum_{i=1}^{K} g_i(x, y)$$

If the noise satisfies the properties stated above, we have

$$E\{\bar{g}(x, y)\} = f(x, y), \qquad \sigma^2_{\bar{g}(x, y)} = \frac{1}{K}\,\sigma^2_{\eta(x, y)}$$

where $E\{\bar{g}(x, y)\}$ is the expected value of $\bar{g}$, and $\sigma^2_{\bar{g}(x, y)}$ and $\sigma^2_{\eta(x, y)}$ are the variances of $\bar{g}$ and η, respectively. The standard deviation (square root of the variance) at any point in the average image is

$$\sigma_{\bar{g}(x, y)} = \frac{1}{\sqrt{K}}\,\sigma_{\eta(x, y)}$$


As K increases, the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because $E\{\bar{g}(x, y)\} = f(x, y)$, this means that $\bar{g}(x, y)$ approaches f(x, y) as the number of noisy images used in the averaging process increases.

An important application of image averaging is in the field of astronomy, where imaging under very low light levels frequently causes sensor noise to render single images virtually useless for analysis. Figure 3(a) shows an 8-bit image in which corruption was simulated by adding to it Gaussian noise with zero mean and a standard deviation of 64 intensity levels. Figures 3(b)–(f) show the results of averaging 5, 10, 20, 50, and 100 images, respectively.

Fig 3. Image of the galaxy pair NGC 3314 corrupted by additive Gaussian noise (a, top left), and the results of averaging 5, 10, 20, 50, and 100 noisy images (b)–(f).
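A small simulation of this effect; the constant test image, noise level, and seed are arbitrary choices for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 100.0)                      # noiseless test image

def noisy(f, sigma=64):
    return f + rng.normal(0, sigma, f.shape)      # g_i = f + eta

for K in (1, 5, 10, 20, 50, 100):
    g_bar = np.mean([noisy(f) for _ in range(K)], axis=0)
    print(K, g_bar.std())   # residual noise shrinks like sigma / sqrt(K)
```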

A frequent application of image subtraction is in the enhancement of differences between images.

Fig 4. (a) Infrared image of the Washington, D.C. area; (b) image obtained from (a) by setting to zero the least significant bit of each pixel; (c) the difference between the two images.

Figure 4(b) was obtained by setting to zero the least significant bit of every pixel in Figure 4(a). The two images seem almost the same. Figure 4(c) is the difference between images (a) and (b); black (0) values in Figure 4(c) indicate locations where there is no difference between images (a) and (b).


Mask mode radiography:

$$g(x, y) = f(x, y) - h(x, y)$$

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images, called live images (denoted f(x, y)), of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium.

In g(x, y) the differences between h and f appear as enhanced detail. Because the images are captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries of the area being observed.

Fig 5. Angiography subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c), enhanced.

An important application of image multiplication (and division) is shading correction. Suppose that an imaging sensor produces images of the form

$$g(x, y) = f(x, y)\,h(x, y)$$

f(x, y) – the "perfect image"; h(x, y) – the shading function.

When the shading function is known,

$$f(x, y) = \frac{g(x, y)}{h(x, y)}$$

When h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.

Fig 6. Shading correction: (a) shaded image of a tungsten filament, magnified approximately 130 times; (b) shading pattern; (c) corrected image.
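A hedged sketch of shading correction by division; the epsilon guard against division by zero is an implementation detail, not part of the slides:

```python
import numpy as np

def correct_shading(g, h, eps=1e-6):
    """Recover f = g / h given the shading function h (e.g. the image of a
    constant-intensity target), guarding against division by zero."""
    return g.astype(float) / np.maximum(h.astype(float), eps)
```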

Another use of image multiplication is in masking, also called region-of-interest (ROI) operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of the ROI can be arbitrary, although usually it is rectangular.

Fig 7. (a) Digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b).


In practice, most images are displayed using 8 bits, so image values are expected to lie in the range [0, 255]. For TIFF and JPEG images, conversion to this range is automatic, but the conversion depends on the system used. The difference of two images can produce values in the range [-255, 255], and the addition of two images values in the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure is to compute

$$f_m = f - \min(f)$$

$$f_s = K \, \frac{f_m}{\max(f_m)}$$

which yields an image f_s whose values lie in the range [0, K] (K = 255 for 8-bit images).
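A direct sketch of this normalization:

```python
import numpy as np

def to_range(f, K=255):
    """Map an arbitrary-range result (e.g. a difference image) to [0, K]."""
    fm = f.astype(float) - f.min()        # f_m = f - min(f)
    return K * fm / max(fm.max(), 1e-12)  # f_s = K * f_m / max(f_m)
```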


Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations:

- single-pixel operations
- neighborhood operations
- geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels: s = T(z), where z is the intensity of a pixel in the original image and s is the intensity of the corresponding pixel in the processed image

Digital Image Processing

Week 1

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

( )

1( ) ( )xyr c S

g x y f r cm n

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation ( ) [( )]x y T v w

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image

Digital Image Processing

Week 1

[( )] ( )2 2v wT v w shrinks the original image half its size in both spatial directions

Affine transform

11 1211 21 31

21 2212 22 33

31 32

0[ 1] [ 1] [ 1] 0

1

t tx t v t w t

x y v w T v w t ty t v t w t

t t

(AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

Digital Image Processing

Week 1

inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image (v w) 1( ) ( )v w T x y

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 93: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image Processing

Week 1

1 2

0 2 6 5 6 3max max 1 ( 1) max 2

2 3 4 7 2 4a f b f

0 2 6 51 max ( 1) max 3 ( 1)7 4

2 3 4 7

Arithmetic Operations in Image Processing

Let g(xy) denote a corrupted image formed by the addition of noise ( ) ( ) ( )g x y f x y x y

f(xy) ndash the noiseless image η(xy) the noise uncorrelated and has 0 average value

For a random variable z with mean m E[(z-m)2] is the variance (E( ∙ ) is the expected

value) The covariance of two random variables z1 and z2 is defined as E[(z1-m1) (z2-m2)]

The two random variables are uncorrelated when their covariance is 0

Digital Image Processing

Week 1

Objective reduce noise by adding a set of noisy images ( )ig x y (technique frequently

used in image enhancement)

1

1( ) ( )K

ii

g x y g x yK

If the noise satisfies the properties stated above we have

2 2( ) ( )

1( ) ( ) g x y x yE g x y f x yK

( ( ))E g x y is the expected value of g and and 2 2( ) ( )g x y x y are the variances of

and g respectively The standard deviation (square root of the variance) at any point in

the average image is

( ) ( )1

g x y x yK

Digital Image Processing

Week 1

As K increases the variability (as measured by the variance or the standard deviation) of

the pixel values at each location (x y) decreases Because ( ) ( )E g x y f x y this

means that ( )g x y approaches f(x y) as the number of noisy images used in the

averaging process increases

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis Figure 226(a) shows an 8-bit image in which corruption

was simulated by adding to it Gaussian noise with zero mean and a standard deviation of

64 intensity levels Figures 226(b)-(f) show the result of averaging 5 10 20 50 and 100

images respectively

Digital Image Processing

Week 1

Fig 3 Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner) Results of averaging 5 10 20 50

100 noisy images

a b c d e f

Digital Image Processing

Week 1

A frequent application of image subtraction is in the enhancement of differences between

images

(a) (b) (c)

Fig 4 (a) Infrared image of Washington DC area (b) Image obtained from (a) by setting to zero the least

significant bit of each pixel (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a) The two images seem almost the same Figure 4(c) is the difference between

images (a) and (b) Black (0) values in Figure (c) indicate locations where there is no

difference between images (a) and (b)

Digital Image Processing

Week 1

Mask mode radiography ( ) ( ) ( )g x y f x y h x y

h(x y) the mask is an X-ray image of a region of a patientrsquos body captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source The procedure consists of injecting an X-ray contrast medium into the patientrsquos

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed

Digital Image Processing

Week 1

a b c d Fig 5 ndash Angiography ndash subtraction example (a) ndash mask image (b) ndash live image (c) ndash difference between (a) and (b) (d) - image (c) enhanced

Digital Image Processing

Week 1

An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form ( ) ( ) ( )g x y f x y h x y

f(x y) ndash the ldquoperfect imagerdquo h(x y) ndash the shading function

When the shading function is known

( )( )( )

g x yf x yh x y

h(x y) is unknown but we have access to the imaging system we can obtain an

approximation to the shading function by imaging a target of constant intensity When the

sensor is not available often the shading pattern can be estimated from the image

Digital Image Processing

Week 1

(a) (b) (c)

Fig 6 Shading correction (a) ndash Shaded image of a tungsten filament magnified 130 times (b) - shading pattern (c) corrected image

Digital Image Processing

Week 1

Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1rsquos (white) in the ROI and 0rsquos elsewhere There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

(a) (b) (c)

Fig 7 (a) ndash digital dental X-ray image (b) - ROI mask for teeth with fillings (c) product of (a) and (b)

Digital Image Processing

Week 1

In practice most images are displayed using 8 bits the image values are in the range [0255] TIFF JPEG images ndash conversion to this range is automatic The conversion depends on the system used Difference of two images can produce image with values in the range [-255255] Addition of two images ndash range [0510] Many software packages simply set the negative values to 0 and set to 255 all values greater than 255 A more appropriate procedure compute

min( )mf f f

0 ( 255)max( )

ms

m

ff K K K

f

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Digital Image Processing

Week 1

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

( )

1( ) ( )xyr c S

g x y f r cm n

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation ( ) [( )]x y T v w

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image

Digital Image Processing

Week 1

[( )] ( )2 2v wT v w shrinks the original image half its size in both spatial directions

Affine transform

11 1211 21 31

21 2212 22 33

31 32

0[ 1] [ 1] [ 1] 0

1

t tx t v t w t

x y v w T v w t ty t v t w t

t t

(AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

Digital Image Processing

Week 1

inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image (v w) 1( ) ( )v w T x y

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 94: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image Processing

Week 1

Objective reduce noise by adding a set of noisy images ( )ig x y (technique frequently

used in image enhancement)

1

1( ) ( )K

ii

g x y g x yK

If the noise satisfies the properties stated above we have

2 2( ) ( )

1( ) ( ) g x y x yE g x y f x yK

( ( ))E g x y is the expected value of g and and 2 2( ) ( )g x y x y are the variances of

and g respectively The standard deviation (square root of the variance) at any point in

the average image is

( ) ( )1

g x y x yK

Digital Image Processing

Week 1

As K increases the variability (as measured by the variance or the standard deviation) of

the pixel values at each location (x y) decreases Because ( ) ( )E g x y f x y this

means that ( )g x y approaches f(x y) as the number of noisy images used in the

averaging process increases

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis Figure 226(a) shows an 8-bit image in which corruption

was simulated by adding to it Gaussian noise with zero mean and a standard deviation of

64 intensity levels Figures 226(b)-(f) show the result of averaging 5 10 20 50 and 100

images respectively

Digital Image Processing

Week 1

Fig 3 Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner) Results of averaging 5 10 20 50

100 noisy images

a b c d e f

Digital Image Processing

Week 1

A frequent application of image subtraction is in the enhancement of differences between

images

(a) (b) (c)

Fig 4 (a) Infrared image of Washington DC area (b) Image obtained from (a) by setting to zero the least

significant bit of each pixel (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a) The two images seem almost the same Figure 4(c) is the difference between

images (a) and (b) Black (0) values in Figure (c) indicate locations where there is no

difference between images (a) and (b)

Digital Image Processing

Week 1

Mask mode radiography ( ) ( ) ( )g x y f x y h x y

h(x y) the mask is an X-ray image of a region of a patientrsquos body captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source The procedure consists of injecting an X-ray contrast medium into the patientrsquos

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed

Digital Image Processing

Week 1

a b c d Fig 5 ndash Angiography ndash subtraction example (a) ndash mask image (b) ndash live image (c) ndash difference between (a) and (b) (d) - image (c) enhanced

Digital Image Processing

Week 1

An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form ( ) ( ) ( )g x y f x y h x y

f(x y) ndash the ldquoperfect imagerdquo h(x y) ndash the shading function

When the shading function is known

( )( )( )

g x yf x yh x y

h(x y) is unknown but we have access to the imaging system we can obtain an

approximation to the shading function by imaging a target of constant intensity When the

sensor is not available often the shading pattern can be estimated from the image

Digital Image Processing

Week 1

(a) (b) (c)

Fig 6 Shading correction (a) ndash Shaded image of a tungsten filament magnified 130 times (b) - shading pattern (c) corrected image

Digital Image Processing

Week 1

Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1rsquos (white) in the ROI and 0rsquos elsewhere There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

(a) (b) (c)

Fig 7 (a) ndash digital dental X-ray image (b) - ROI mask for teeth with fillings (c) product of (a) and (b)

Digital Image Processing

Week 1

In practice most images are displayed using 8 bits the image values are in the range [0255] TIFF JPEG images ndash conversion to this range is automatic The conversion depends on the system used Difference of two images can produce image with values in the range [-255255] Addition of two images ndash range [0510] Many software packages simply set the negative values to 0 and set to 255 all values greater than 255 A more appropriate procedure compute

min( )mf f f

0 ( 255)max( )

ms

m

ff K K K

f

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Digital Image Processing

Week 1

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

( )

1( ) ( )xyr c S

g x y f r cm n

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules

A geometric transformation consists of two basic operations:

1. a spatial transformation of coordinates;

2. intensity interpolation, which assigns intensity values to the spatially transformed pixels.

The coordinate system transformation: $(x, y) = T[(v, w)]$

(v, w) – pixel coordinates in the original image; (x, y) – pixel coordinates in the transformed image.

$T[(v, w)] = (v/2,\ w/2)$ shrinks the original image to half its size in both spatial directions.

Affine transform:

$$[\,x \quad y \quad 1\,] = [\,v \quad w \quad 1\,]\,\mathbf{T} = [\,v \quad w \quad 1\,] \begin{bmatrix} t_{11} & t_{12} & 0 \\ t_{21} & t_{22} & 0 \\ t_{31} & t_{32} & 1 \end{bmatrix} \quad \text{(AT)}$$

that is, $x = t_{11} v + t_{21} w + t_{31}$ and $y = t_{12} v + t_{22} w + t_{32}$.

This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3×3 matrix equal to the matrix product of the corresponding scaling, rotation, and translation matrices from Table 1.

Table 1 – Affine transformations.


The preceding transformations relocate pixels of an image to new locations. To complete the process, we have to assign intensity values to those locations; this task is done by intensity interpolation (nearest-neighbor, bilinear, or bicubic, for example).

In practice, we can use equation (AT) in two basic ways:

forward mapping: scan the pixels of the input image, (v, w), and compute the spatial location (x, y) of the corresponding pixel in the output image using (AT) directly.

Problems:

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)


inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image as $(v, w) = T^{-1}(x, y)$; then interpolate among the nearest input pixels to determine the intensity of the output pixel value.

Inverse mappings are more efficient to implement than forward mappings and are used in numerous commercial implementations of spatial transformations (MATLAB, for example).
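A minimal sketch of inverse mapping for the affine transform (AT), using nearest-neighbor interpolation; treating (v, w) and (x, y) as (row, column) coordinates is an assumed convention:

```python
import numpy as np

def affine_warp(img, T, out_shape=None):
    """Apply [x y 1] = [v w 1] T by inverse mapping with
    nearest-neighbor interpolation."""
    out_shape = out_shape or img.shape
    Tinv = np.linalg.inv(T)
    x, y = np.indices(out_shape)                   # output pixel grid
    xy1 = np.stack([x.ravel(), y.ravel(), np.ones(x.size)], axis=1)
    vw1 = xy1 @ Tinv                               # (v, w) = (x, y) mapped back
    v = np.round(vw1[:, 0]).astype(int)
    w = np.round(vw1[:, 1]).astype(int)
    ok = (v >= 0) & (v < img.shape[0]) & (w >= 0) & (w < img.shape[1])
    out = np.zeros(out_shape, dtype=img.dtype)
    out.ravel()[ok] = img[v[ok], w[ok]]            # pick nearest input pixel
    return out

# Example: rotation by 30 degrees about the origin.
th = np.deg2rad(30.0)
R = np.array([[ np.cos(th), np.sin(th), 0.0],
              [-np.sin(th), np.cos(th), 0.0],
              [ 0.0,        0.0,        1.0]])
```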


Image registration – align two or more images of the same scene.

In image registration, the input and output images are available, but the specific transformation that produced the output image from the input is generally unknown. The problem is to estimate the transformation function and then use it to register the two images.

- it may be of interest to align (register) two or more images taken at approximately the same time but using different imaging systems (e.g., an MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments in time (satellite images)

The problem can be solved using tie points (also called control points): corresponding points whose locations are known precisely in the input and reference images.


How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors; these objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points both in the input image and in the reference image. A simple model, based on a bilinear approximation, is given by

$$x = c_1 v + c_2 w + c_3 v w + c_4$$

$$y = c_5 v + c_6 w + c_7 v w + c_8$$

where (v, w) and (x, y) are the coordinates of corresponding tie points (the 4 pairs yield an 8×8 linear system for the $c_i$).
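A sketch of the estimation step; note that the 8×8 system decouples into two 4×4 systems, one for (c1, …, c4) and one for (c5, …, c8) (function names are illustrative):

```python
import numpy as np

def fit_bilinear(src, dst):
    """src, dst: (4, 2) arrays of tie points, (v, w) -> (x, y).
    Returns cx = (c1..c4) and cy = (c5..c8)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    v, w = src[:, 0], src[:, 1]
    A = np.stack([v, w, v * w, np.ones(4)], axis=1)   # 4x4 design matrix
    return np.linalg.solve(A, dst[:, 0]), np.linalg.solve(A, dst[:, 1])

def map_points(pts, cx, cy):
    """Apply the fitted bilinear model to (v, w) points."""
    pts = np.asarray(pts, dtype=float)
    v, w = pts[:, 0], pts[:, 1]
    A = np.stack([v, w, v * w, np.ones(len(pts))], axis=1)
    return np.stack([A @ cx, A @ cy], axis=1)         # (x, y) per point
```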


When 4 tie points are insufficient to obtain satisfactory registration, a frequently used approach is to select a larger number of tie points and use them to subdivide the image into rectangular subregions, each marked by a group of 4 tie points; the transformation model described above is then applied separately on each subregion. The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometric distortion.

(a) Reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c).


Probabilistic Methods

Let $z_i$, $i = 0, 1, \ldots, L-1$, denote the values of all possible intensities in an M×N digital image, and let $p(z_k)$ denote the probability that intensity level $z_k$ occurs in the given image:

$$p(z_k) = \frac{n_k}{MN}$$

where $n_k$ is the number of times that intensity $z_k$ occurs in the image (MN is the total number of pixels in the image). Note that

$$\sum_{k=0}^{L-1} p(z_k) = 1$$

The mean (average) intensity of an image is given by

$$m = \sum_{k=0}^{L-1} z_k\, p(z_k)$$

The variance of the intensities is

$$\sigma^2 = \sum_{k=0}^{L-1} (z_k - m)^2\, p(z_k)$$

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, the standard deviation ($\sigma$) is used to measure image contrast.

The n-th moment of a random variable z about the mean is defined as

$$\mu_n(z) = \sum_{k=0}^{L-1} (z_k - m)^n\, p(z_k)$$

(note that $\mu_0(z) = 1$, $\mu_1(z) = 0$, and $\mu_2(z) = \sigma^2$).

$\mu_3(z) > 0$: the intensities are biased toward values higher than the mean;

$\mu_3(z) < 0$: the intensities are biased toward values lower than the mean;

$\mu_3(z) \approx 0$: the intensities are distributed approximately equally on both sides of the mean.
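These quantities follow directly from the normalized histogram; a short sketch for an 8-bit image:

```python
import numpy as np

def intensity_stats(img, L=256):
    """p(z_k) = n_k / MN, mean, variance, and third central moment
    for an image with integer intensities in [0, L-1]."""
    p = np.bincount(img.ravel(), minlength=L) / img.size
    z = np.arange(L)
    m = np.sum(z * p)                      # mean intensity
    var = np.sum((z - m) ** 2 * p)         # sigma^2, a contrast measure
    mu3 = np.sum((z - m) ** 3 * p)         # sign shows bias about the mean
    return m, var, mu3
```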

Fig. 1 – (a) Low contrast; (b) medium contrast; (c) high contrast.

Figure 1(a) – standard deviation 14.3 (variance = 204.5)

Figure 1(b) – standard deviation 31.6 (variance = 998.6)

Figure 1(c) – standard deviation 49.2 (variance = 2420.6)


Intensity Transformations and Spatial Filtering

$$g(x, y) = T[f(x, y)]$$

f(x, y) – input image; g(x, y) – output image; T – an operator on f, defined over a neighborhood of (x, y).

- the neighborhood of the point (x, y), $S_{xy}$, usually is rectangular, centered on (x, y), and much smaller in size than the image


- spatial filtering: the operator T (the neighborhood together with the operation applied on it) is called a spatial filter (also: spatial mask, kernel, template, or window).

When $S_{xy}$ is the single point (x, y) (a 1×1 neighborhood), T becomes an intensity (gray-level or mapping) transformation function,

$$s = T(r)$$

where s and r denote, respectively, the intensities of g and f at (x, y).

Figure 2 (left): T produces an output image of higher contrast than the original by darkening the intensity levels below k and brightening the levels above k; this technique is called contrast stretching.

Fig. 2 – Intensity transformation functions: left – contrast stretching; right – thresholding function.

Figure 2 (right): T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L−1] is obtained using the function

$$s = T(r) = L - 1 - r$$

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

[Example: original image and its negative]

Log Transformations: $s = T(r) = c\,\log(1 + r)$, where c is a constant and $r \ge 0$.

Some basic intensity transformation functions


This transformation maps a narrow range of low intensity values in the input into a wider range of output values. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values; the opposite is true for the inverse log transformation. Log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a) – intensity values in the range 0 to 1.5 × 10⁶.

Figure 4(b) – log transformation of Figure 4(a) with c = 1; range 0 to 6.2.

Fig. 4 – (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.

Power-Law (Gamma) Transformations

$$s = T(r) = c\,r^{\gamma}$$

where c and γ are positive constants (sometimes written $s = c\,(r + \varepsilon)^{\gamma}$ to account for an offset when the input is zero).

Plots of gamma transformation for different values of γ (c=1)
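A sketch of the negative, log, and gamma point transformations for an 8-bit image; normalizing r to [0, 1] before applying the power law is an implementation choice:

```python
import numpy as np

def negative(r, L=256):
    """s = (L - 1) - r"""
    return (L - 1) - r

def log_transform(r, c=1.0):
    """s = c * log(1 + r)"""
    return c * np.log1p(r.astype(np.float64))

def gamma_transform(r, gamma, c=1.0, L=256):
    """s = c * r**gamma, computed on intensities scaled to [0, 1]."""
    rn = r.astype(np.float64) / (L - 1)
    s = np.clip(c * rn ** gamma, 0.0, 1.0)
    return (s * (L - 1)).astype(np.uint8)
```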

Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. Curves with γ > 1 have the opposite effect of those generated with γ < 1.

c = γ = 1: the identity transformation.

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.

(a) Aerial image; (b)–(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively.


Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so that it spans the full intensity range of the recording tool or display device.

Fig. 5 – (a) piecewise-linear transformation function; (b) low-contrast 8-bit image; (c) result of contrast stretching; (d) result of thresholding.

$$T(r) = \begin{cases} \dfrac{s_1}{r_1}\, r, & r \in [0, r_1) \\[2mm] \dfrac{s_2 - s_1}{r_2 - r_1}\,(r - r_1) + s_1, & r \in [r_1, r_2] \\[2mm] \dfrac{(L - 1) - s_2}{(L - 1) - r_2}\,(r - r_2) + s_2, & r \in (r_2, L - 1] \end{cases}$$

$r_1 = s_1$ and $r_2 = s_2$: identity transformation (no change);

$r_1 = r_2$, $s_1 = 0$, $s_2 = L - 1$: thresholding function.

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c) – contrast stretching, obtained by setting $(r_1, s_1) = (r_{\min}, 0)$ and $(r_2, s_2) = (r_{\max}, L - 1)$, where $r_{\min}$ and $r_{\max}$ denote the minimum and maximum gray levels in the image, respectively; thus, the transformation function stretched the levels linearly from their original range to the full range [0, L−1].

Figure 5(d) – the thresholding function was used, with $(r_1, s_1) = (m, 0)$ and $(r_2, s_2) = (m, L - 1)$, where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times
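A sketch of the piecewise-linear function above; (r1, s1) = (r_min, 0) and (r2, s2) = (r_max, L−1) reproduce the full-range stretch of Fig. 5(c), while r1 = r2 = m, s1 = 0, s2 = L−1 reproduce the thresholding of Fig. 5(d):

```python
import numpy as np

def contrast_stretch(img, r1, s1, r2, s2, L=256):
    """Piecewise-linear contrast stretching through (r1, s1), (r2, s2)."""
    r = img.astype(np.float64)
    out = np.empty_like(r)
    lo, mid, hi = r < r1, (r >= r1) & (r <= r2), r > r2
    out[lo] = (s1 / r1) * r[lo] if r1 > 0 else s1
    out[mid] = (s2 - s1) / max(r2 - r1, 1e-12) * (r[mid] - r1) + s1
    out[hi] = ((L - 1) - s2) / max((L - 1) - r2, 1e-12) * (r[hi] - r2) + s2
    return np.clip(out, 0, L - 1).astype(np.uint8)

# Full-range stretch:
# stretched = contrast_stretch(img, img.min(), 0, img.max(), 255)
```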


Intensity-level slicing

- highlighting a specific range of intensities in an image.

There are two approaches to intensity-level slicing (sketched in code below):

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities, which produces a binary image;

2. brighten (or darken) the desired range of intensities but leave unchanged all other intensities in the image.
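A sketch of the two approaches (function names are illustrative):

```python
import numpy as np

def slice_binary(img, a, b, hi=255, lo=0):
    """Approach 1: one value inside [a, b], another value elsewhere."""
    return np.where((img >= a) & (img <= b), hi, lo).astype(np.uint8)

def slice_preserve(img, a, b, value=255):
    """Approach 2: set [a, b] to a chosen value; keep the rest unchanged."""
    out = img.copy()
    out[(img >= a) & (img <= b)] = value
    return out
```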


Figure 6 (left) – aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique to a band near the top of the intensity scale; this type of enhancement produces a binary image, which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example). Here the transformation highlights an intensity range [A, B] and reduces all other intensities to a lower level.

In Figure 6 (right), the second technique was used: a band of intensities in the mid-gray range around the mean intensity was set to black, while all other intensities remain unchanged; that is, the transformation highlights the range [A, B] and preserves all other intensities.

Fig. 6 – Aortic angiogram and intensity-sliced versions.


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2.

This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit; plane 8 – the highest-order bit).


The binary image for the 8th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding transformation function that maps all intensities between 0 and 127 to 0 and all levels between 128 and 255 to 1.

Bit-plane slicing is useful for analyzing the relative importance of each bit in the image, which helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
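A sketch of bit-plane decomposition, plus reconstruction from the higher-order planes only (a crude form of compression):

```python
import numpy as np

def bit_planes(img):
    """Eight 1-bit planes of an 8-bit image; planes[0] is the
    lowest-order bit (plane 1), planes[7] the highest (plane 8)."""
    img = img.astype(np.uint8)
    return [(img >> k) & 1 for k in range(8)]

def from_planes(planes, keep=range(4, 8)):
    """Rebuild the image keeping only the selected bit planes."""
    out = np.zeros(planes[0].shape, dtype=np.uint8)
    for k in keep:
        out |= planes[k].astype(np.uint8) << k
    return out
```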

  • DIP 1 2017
  • DIP 02 (2017)
Page 95: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image Processing

Week 1

As K increases the variability (as measured by the variance or the standard deviation) of

the pixel values at each location (x y) decreases Because ( ) ( )E g x y f x y this

means that ( )g x y approaches f(x y) as the number of noisy images used in the

averaging process increases

An important application of image averaging is in the field of astronomy where imaging

under very low light levels frequently causes sensor noise to render single images

virtually useless for analysis Figure 226(a) shows an 8-bit image in which corruption

was simulated by adding to it Gaussian noise with zero mean and a standard deviation of

64 intensity levels Figures 226(b)-(f) show the result of averaging 5 10 20 50 and 100

images respectively

Digital Image Processing

Week 1

Fig 3 Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner) Results of averaging 5 10 20 50

100 noisy images

a b c d e f

Digital Image Processing

Week 1

A frequent application of image subtraction is in the enhancement of differences between

images

(a) (b) (c)

Fig 4 (a) Infrared image of Washington DC area (b) Image obtained from (a) by setting to zero the least

significant bit of each pixel (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a) The two images seem almost the same Figure 4(c) is the difference between

images (a) and (b) Black (0) values in Figure (c) indicate locations where there is no

difference between images (a) and (b)

Digital Image Processing

Week 1

Mask mode radiography ( ) ( ) ( )g x y f x y h x y

h(x y) the mask is an X-ray image of a region of a patientrsquos body captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source The procedure consists of injecting an X-ray contrast medium into the patientrsquos

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed

Digital Image Processing

Week 1

a b c d Fig 5 ndash Angiography ndash subtraction example (a) ndash mask image (b) ndash live image (c) ndash difference between (a) and (b) (d) - image (c) enhanced

Digital Image Processing

Week 1

An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form ( ) ( ) ( )g x y f x y h x y

f(x y) ndash the ldquoperfect imagerdquo h(x y) ndash the shading function

When the shading function is known

( )( )( )

g x yf x yh x y

h(x y) is unknown but we have access to the imaging system we can obtain an

approximation to the shading function by imaging a target of constant intensity When the

sensor is not available often the shading pattern can be estimated from the image

Digital Image Processing

Week 1

(a) (b) (c)

Fig 6 Shading correction (a) ndash Shaded image of a tungsten filament magnified 130 times (b) - shading pattern (c) corrected image

Digital Image Processing

Week 1

Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1rsquos (white) in the ROI and 0rsquos elsewhere There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

(a) (b) (c)

Fig 7 (a) ndash digital dental X-ray image (b) - ROI mask for teeth with fillings (c) product of (a) and (b)

Digital Image Processing

Week 1

In practice most images are displayed using 8 bits the image values are in the range [0255] TIFF JPEG images ndash conversion to this range is automatic The conversion depends on the system used Difference of two images can produce image with values in the range [-255255] Addition of two images ndash range [0510] Many software packages simply set the negative values to 0 and set to 255 all values greater than 255 A more appropriate procedure compute

min( )mf f f

0 ( 255)max( )

ms

m

ff K K K

f

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Digital Image Processing

Week 1

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

( )

1( ) ( )xyr c S

g x y f r cm n

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation ( ) [( )]x y T v w

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image

Digital Image Processing

Week 1

[( )] ( )2 2v wT v w shrinks the original image half its size in both spatial directions

Affine transform

11 1211 21 31

21 2212 22 33

31 32

0[ 1] [ 1] [ 1] 0

1

t tx t v t w t

x y v w T v w t ty t v t w t

t t

(AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

Digital Image Processing

Week 1

inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image (v w) 1( ) ( )v w T x y

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 96: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image Processing

Week 1

Fig 3 Image of Galaxy Pair NGC 3314 corrupted by additive Gaussian noise (left corner) Results of averaging 5 10 20 50

100 noisy images

a b c d e f

Digital Image Processing

Week 1

A frequent application of image subtraction is in the enhancement of differences between

images

(a) (b) (c)

Fig 4 (a) Infrared image of Washington DC area (b) Image obtained from (a) by setting to zero the least

significant bit of each pixel (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a) The two images seem almost the same Figure 4(c) is the difference between

images (a) and (b) Black (0) values in Figure (c) indicate locations where there is no

difference between images (a) and (b)

Digital Image Processing

Week 1

Mask mode radiography ( ) ( ) ( )g x y f x y h x y

h(x y) the mask is an X-ray image of a region of a patientrsquos body captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source The procedure consists of injecting an X-ray contrast medium into the patientrsquos

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed

Digital Image Processing

Week 1

a b c d Fig 5 ndash Angiography ndash subtraction example (a) ndash mask image (b) ndash live image (c) ndash difference between (a) and (b) (d) - image (c) enhanced

Digital Image Processing

Week 1

An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form ( ) ( ) ( )g x y f x y h x y

f(x y) ndash the ldquoperfect imagerdquo h(x y) ndash the shading function

When the shading function is known

( )( )( )

g x yf x yh x y

h(x y) is unknown but we have access to the imaging system we can obtain an

approximation to the shading function by imaging a target of constant intensity When the

sensor is not available often the shading pattern can be estimated from the image

Digital Image Processing

Week 1

(a) (b) (c)

Fig 6 Shading correction (a) ndash Shaded image of a tungsten filament magnified 130 times (b) - shading pattern (c) corrected image

Digital Image Processing

Week 1

Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1rsquos (white) in the ROI and 0rsquos elsewhere There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

(a) (b) (c)

Fig 7 (a) ndash digital dental X-ray image (b) - ROI mask for teeth with fillings (c) product of (a) and (b)

Digital Image Processing

Week 1

In practice most images are displayed using 8 bits the image values are in the range [0255] TIFF JPEG images ndash conversion to this range is automatic The conversion depends on the system used Difference of two images can produce image with values in the range [-255255] Addition of two images ndash range [0510] Many software packages simply set the negative values to 0 and set to 255 all values greater than 255 A more appropriate procedure compute

min( )mf f f

0 ( 255)max( )

ms

m

ff K K K

f

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Digital Image Processing

Week 1

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

( )

1( ) ( )xyr c S

g x y f r cm n

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation ( ) [( )]x y T v w

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image

Digital Image Processing

Week 1

[( )] ( )2 2v wT v w shrinks the original image half its size in both spatial directions

Affine transform

11 1211 21 31

21 2212 22 33

31 32

0[ 1] [ 1] [ 1] 0

1

t tx t v t w t

x y v w T v w t ty t v t w t

t t

(AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

Digital Image Processing

Week 1

inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image (v w) 1( ) ( )v w T x y

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 97: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image Processing

Week 1

A frequent application of image subtraction is in the enhancement of differences between

images

(a) (b) (c)

Fig 4 (a) Infrared image of Washington DC area (b) Image obtained from (a) by setting to zero the least

significant bit of each pixel (c) the difference between the two images

Figure 4(b) was obtained by setting to zero the least-significant bit of every pixel in

Figure 4(a) The two images seem almost the same Figure 4(c) is the difference between

images (a) and (b) Black (0) values in Figure (c) indicate locations where there is no

difference between images (a) and (b)

Digital Image Processing

Week 1

Mask mode radiography ( ) ( ) ( )g x y f x y h x y

h(x y) the mask is an X-ray image of a region of a patientrsquos body captured by an

intensified TV camera (instead of traditional X-ray film) located opposite an X-ray

source The procedure consists of injecting an X-ray contrast medium into the patientrsquos

bloodstream taking a series of images called live images (denoted f(x y)) of the same

anatomical region as h(x y) and subtracting the mask from the series of incoming live

images after injection of the contrast medium

In g(x y) we can find the differences between h and f as enhanced detail

Images being captured at TV rates we obtain a movie showing how the contrast medium

propagates through the various arteries in the area being observed

Digital Image Processing

Week 1

a b c d Fig 5 ndash Angiography ndash subtraction example (a) ndash mask image (b) ndash live image (c) ndash difference between (a) and (b) (d) - image (c) enhanced

Digital Image Processing

Week 1

An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form ( ) ( ) ( )g x y f x y h x y

f(x y) ndash the ldquoperfect imagerdquo h(x y) ndash the shading function

When the shading function is known

( )( )( )

g x yf x yh x y

h(x y) is unknown but we have access to the imaging system we can obtain an

approximation to the shading function by imaging a target of constant intensity When the

sensor is not available often the shading pattern can be estimated from the image

Digital Image Processing

Week 1

(a) (b) (c)

Fig 6 Shading correction (a) ndash Shaded image of a tungsten filament magnified 130 times (b) - shading pattern (c) corrected image

Digital Image Processing

Week 1

Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1rsquos (white) in the ROI and 0rsquos elsewhere There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

(a) (b) (c)

Fig 7 (a) ndash digital dental X-ray image (b) - ROI mask for teeth with fillings (c) product of (a) and (b)

Digital Image Processing

Week 1

In practice most images are displayed using 8 bits the image values are in the range [0255] TIFF JPEG images ndash conversion to this range is automatic The conversion depends on the system used Difference of two images can produce image with values in the range [-255255] Addition of two images ndash range [0510] Many software packages simply set the negative values to 0 and set to 255 all values greater than 255 A more appropriate procedure compute

min( )mf f f

0 ( 255)max( )

ms

m

ff K K K

f

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Digital Image Processing

Week 1

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

( )

1( ) ( )xyr c S

g x y f r cm n

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation ( ) [( )]x y T v w

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image

Digital Image Processing

Week 1

[( )] ( )2 2v wT v w shrinks the original image half its size in both spatial directions

Affine transform

11 1211 21 31

21 2212 22 33

31 32

0[ 1] [ 1] [ 1] 0

1

t tx t v t w t

x y v w T v w t ty t v t w t

t t

(AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

Digital Image Processing

Week 1

inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image (v w) 1( ) ( )v w T x y

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration – align two or more images of the same scene.

In image registration we have available the input and output images, but the specific transformation that produced the output image from the input is generally unknown. The problem is to estimate the transformation function and then use it to register the two images.

- it may be of interest to align (register) two or more images taken at approximately the same time but using different imaging systems (an MRI scanner and a PET scanner, for example);
- or to align images of a given location taken by the same instrument at different moments in time (satellite images).

The problem is solved using tie points (also called control points), which are corresponding points whose locations are known precisely in the input and reference images.


How to select tie points?

- by selecting them interactively;
- by using algorithms that try to detect these points automatically;
- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors; these objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points.

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points both in the input image and in the reference image. A simple model based on a bilinear approximation is given by

$$x = c_1 v + c_2 w + c_3 v w + c_4$$
$$y = c_5 v + c_6 w + c_7 v w + c_8$$

where (v, w) and (x, y) are the coordinates of corresponding tie points (we get an 8×8 linear system for the coefficients $c_i$).
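Setting up and solving that 8×8 system, with made-up tie-point coordinates for illustration:

```python
import numpy as np

# Hypothetical tie points: rows are (v, w) in the input image and the
# corresponding (x, y) in the reference image.
vw = np.array([[10, 10], [10, 200], [200, 10], [200, 200]], dtype=float)
xy = np.array([[12, 15], [14, 208], [205, 11], [210, 204]], dtype=float)

A = np.zeros((8, 8))
b = np.zeros(8)
for i, ((v, w), (x, y)) in enumerate(zip(vw, xy)):
    A[2 * i, 0:4] = [v, w, v * w, 1]       # x = c1 v + c2 w + c3 vw + c4
    A[2 * i + 1, 4:8] = [v, w, v * w, 1]   # y = c5 v + c6 w + c7 vw + c8
    b[2 * i], b[2 * i + 1] = x, y

c = np.linalg.solve(A, b)                  # c1..c8
```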


When 4 tie points are insufficient to obtain satisfactory registration, an approach used frequently is to select a larger number of tie points and, using this new set, to subdivide the image into rectangular subregions marked by groups of 4 tie points. On each subregion marked by 4 tie points, we apply the transformation model described above.

The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometric distortion.


Fig.: (a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c)


Probabilistic Methods

$z_k$, k = 0, 1, …, L−1 = the values of all possible intensities in an M×N digital image;
$p(z_k)$ = the probability that the intensity level $z_k$ occurs in the given image:

$$p(z_k) = \frac{n_k}{MN}$$

where $n_k$ is the number of times that intensity $z_k$ occurs in the image (MN is the total number of pixels in the image), and

$$\sum_{k=0}^{L-1} p(z_k) = 1$$

The mean (average) intensity of an image is given by

$$m = \sum_{k=0}^{L-1} z_k \, p(z_k)$$


The variance of the intensities is

$$\sigma^2 = \sum_{k=0}^{L-1} (z_k - m)^2 \, p(z_k)$$

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

$$\mu_n(z) = \sum_{k=0}^{L-1} (z_k - m)^n \, p(z_k)$$

(Note that $\mu_0(z) = 1$, $\mu_1(z) = 0$, and $\mu_2(z) = \sigma^2$.)

$\mu_3(z) > 0$: the intensities are biased to values higher than the mean;
$\mu_3(z) < 0$: the intensities are biased to values lower than the mean;
$\mu_3(z) = 0$: the intensities are distributed approximately equally on both sides of the mean.

Fig. 1: (a) low contrast; (b) medium contrast; (c) high contrast
Figure 1(a) – standard deviation 14.3 (variance = 204.5)
Figure 1(b) – standard deviation 31.6 (variance = 998.6)
Figure 1(c) – standard deviation 49.2 (variance = 2420.6)
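These statistics follow directly from the normalized histogram; a short sketch for an 8-bit image (assuming a NumPy uint8 array):

```python
import numpy as np

def intensity_stats(img, L=256):
    """Mean, variance, and third central moment computed from p(z_k)."""
    n_k = np.bincount(img.ravel(), minlength=L)   # histogram counts
    p = n_k / img.size                            # p(z_k) = n_k / (M*N)
    z = np.arange(L)
    m = np.sum(z * p)                             # mean intensity
    var = np.sum((z - m) ** 2 * p)                # variance (sigma^2)
    mu3 = np.sum((z - m) ** 3 * p)                # sign indicates the bias
    return m, var, mu3
```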


Intensity Transformations and Spatial Filtering

$$g(x, y) = T[f(x, y)]$$

f(x, y) – input image; g(x, y) – output image; T – an operator on f, defined over a neighborhood of (x, y).

- the neighborhood of the point (x, y), Sxy, usually is rectangular, centered on (x, y), and much smaller in size than the image.


- spatial filtering: the operator T (the neighborhood and the operation applied on it) is called a spatial filter (spatial mask, kernel, template, or window).

When Sxy has size 1×1 (i.e., Sxy = {(x, y)}), T becomes an intensity (gray-level or mapping) transformation function,

$$s = T(r)$$

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2, left: T produces an output image of higher contrast than the original by darkening the intensity levels below k and brightening the levels above k – this technique is called contrast stretching.

Fig. 2: Intensity transformation functions. Left: contrast stretching; right: thresholding function.


Figure 2, right: T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L−1] is obtained using the function

$$s = T(r) = L - 1 - r$$

- the equivalent of a photographic negative
- a technique suited for enhancing white or gray detail embedded in dark regions of an image

Fig.: original image and its negative

Log Transformations

$$s = T(r) = c \log(1 + r), \qquad c \text{ – constant}, \; r \ge 0$$

(Plot: some basic intensity transformation functions.)


This transformation maps a narrow range of low intensity values in the input into a wider range of output values. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values. The opposite is true for the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a) – intensity values in the range 0 to 1.5 × 10⁶
Figure 4(b) – log transformation of Figure 4(a) with c = 1 – range 0 to 6.2

Fig. 4: (a) Fourier spectrum; (b) log transformation applied to (a), c = 1


Power-Law (Gamma) Transformations

$$s = T(r) = c\, r^{\gamma} \quad (\text{sometimes } s = c\,(r + \varepsilon)^{\gamma}), \qquad c, \gamma \text{ – positive constants}$$

Plots of the gamma transformation for different values of γ (c = 1).


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. The curves with γ > 1 have the opposite effect of those generated with γ < 1.

c = γ = 1: identity transformation.

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
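A sketch of the three point transformations above for 8-bit images (L = 256); the scaling constants are one common choice, not the only one:

```python
import numpy as np

def negative(img):
    """s = L - 1 - r for an 8-bit image."""
    return 255 - img

def log_transform(img):
    """s = c * log(1 + r), with c chosen so the output fills [0, 255]."""
    c = 255.0 / np.log(256.0)
    return (c * np.log1p(img.astype(np.float64))).astype(np.uint8)

def gamma_transform(img, gamma, c=1.0):
    """s = c * r**gamma, applied on intensities normalized to [0, 1]."""
    r = img.astype(np.float64) / 255.0
    return np.clip(255.0 * c * r ** gamma, 0, 255).astype(np.uint8)

# gamma < 1 brightens dark regions; gamma > 1 darkens them
# (cf. the aerial-image example with gamma = 3.0, 4.0, 5.0).
```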


Fig.: (a) aerial image; (b)–(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively


Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so that it spans the full intensity range of the recording medium or display device

Fig. 5: (a) piecewise-linear transformation function; (b) low-contrast image; (c) result of contrast stretching; (d) result of thresholding


$$T(r) = \begin{cases} \dfrac{s_1}{r_1}\, r, & r \in [0, r_1] \\[2ex] \dfrac{s_2 - s_1}{r_2 - r_1}\,(r - r_1) + s_1, & r \in (r_1, r_2] \\[2ex] \dfrac{L - 1 - s_2}{L - 1 - r_2}\,(r - r_2) + s_2, & r \in (r_2, L - 1] \end{cases}$$


r1 = s1 and r2 = s2: identity transformation (no change);
r1 = r2, s1 = 0, s2 = L − 1: thresholding function.

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c) – contrast stretching, obtained by setting the parameters r1 = rmin, s1 = 0, r2 = rmax, s2 = L − 1, where rmin and rmax denote the minimum and maximum gray levels in the image, respectively. Thus, the transformation function stretched the levels linearly from their original range to the full range [0, L−1].

Figure 5(d) – the thresholding function was used, with r1 = r2 = m, s1 = 0, s2 = L − 1, where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.
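A sketch of the piecewise-linear stretch via np.interp, assuming 0 < r1 < r2 < L − 1 (the degenerate thresholding case r1 = r2 needs separate handling):

```python
import numpy as np

def contrast_stretch(img, r1, s1, r2, s2, L=256):
    """Piecewise-linear mapping through (0,0), (r1,s1), (r2,s2), (L-1,L-1)."""
    s = np.interp(img.astype(np.float64),
                  [0, r1, r2, L - 1],
                  [0, s1, s2, L - 1])
    return s.astype(np.uint8)

# Full-range stretch as in Fig. 5(c), for an image whose extremes lie
# strictly inside [0, L-1]:
# out = contrast_stretch(img, int(img.min()), 0, int(img.max()), 255)
```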


Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches to intensity-level slicing:

1. display in one value (white, for example) all the intensities in the range of interest and in another (say, black) all other intensities (Figure 3.11(a));
2. brighten (or darken) the desired range of intensities but leave unchanged all other intensities in the image (Figure 3.11(b)).


Figure 6 (left) – aortic angiogram near the kidney. The purpose of intensity slicing here is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the scale of intensities. This type of enhancement produces a binary image, which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example).

In Figure 6 (right), the second technique was used: a band of intensities in the mid-gray range around the mean intensity was set to black; the other intensities remain unchanged.

(The two slicing transformations: one highlights the intensity range [A, B] and reduces all other intensities to a lower level; the other highlights the range [A, B] and preserves all other intensities.)

Fig. 6 – Aortic angiogram and intensity-sliced versions
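Both slicing approaches in a short sketch (the names and the [A, B] band are illustrative):

```python
import numpy as np

def slice_binary(img, A, B, low=0, high=255):
    """Approach 1: one value inside [A, B], another everywhere else."""
    return np.where((img >= A) & (img <= B), high, low).astype(np.uint8)

def slice_preserve(img, A, B, value=255):
    """Approach 2: set [A, B] to `value`, leave other intensities alone."""
    out = img.copy()
    out[(img >= A) & (img <= B)] = value
    return out
```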


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2.

This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit, plane 8 – the highest-order bit).


The binary image for the 8th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation function that maps all intensities between 0 and 127 to 0 and all levels between 128 and 255 to 1. The bit-slicing technique is useful for analyzing the relative importance of each bit in the image; it helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
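Extracting the planes with bit operations (here k = 0 is the lowest-order plane, i.e., the deck's plane 1):

```python
import numpy as np

def bit_plane(img, k):
    """Binary image of bit plane k of an 8-bit image (k = 0..7)."""
    return (img >> k) & 1

# The highest-order plane equals the threshold described above:
# ((img >> 7) & 1) == (img >= 128)   -> True for every pixel
```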



Mask mode radiography:

$$g(x, y) = f(x, y) - h(x, y)$$

h(x, y), the mask, is an X-ray image of a region of a patient's body, captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source. The procedure consists of injecting an X-ray contrast medium into the patient's bloodstream, taking a series of images, called live images (denoted f(x, y)), of the same anatomical region as h(x, y), and subtracting the mask from the series of incoming live images after injection of the contrast medium.

In g(x, y) we find the differences between h and f, shown as enhanced detail.

Since images are captured at TV rates, we obtain a movie showing how the contrast medium propagates through the various arteries in the area being observed.


Fig. 5 – Angiography subtraction example: (a) mask image; (b) live image; (c) difference between (a) and (b); (d) image (c), enhanced


An important application of image multiplication (and division) is shading correction.

Suppose that an imaging sensor produces images in the form

$$g(x, y) = f(x, y)\, h(x, y)$$

where f(x, y) is the "perfect image" and h(x, y) is the shading function.

When the shading function is known:

$$f(x, y) = \frac{g(x, y)}{h(x, y)}$$

When h(x, y) is unknown but we have access to the imaging system, we can obtain an approximation to the shading function by imaging a target of constant intensity. When the sensor is not available, the shading pattern can often be estimated from the image itself.
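A sketch of the correction by division, assuming g and h are arrays of the same shape (the epsilon guard and the rescaling are practical choices, not part of the model):

```python
import numpy as np

def shading_correct(g, h, eps=1e-6):
    """Estimate f = g / h; eps avoids division by zero where the
    shading estimate is (near) black."""
    f = g.astype(np.float64) / (h.astype(np.float64) + eps)
    return (255.0 * f / f.max()).astype(np.uint8)  # back to [0, 255]
```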


Fig. 6 – Shading correction: (a) shaded image of a tungsten filament, magnified approximately 130 times; (b) shading pattern; (c) corrected image


Another use of image multiplication is in masking, also called region-of-interest (ROI) operations. The process consists of multiplying a given image by a mask image that has 1's (white) in the ROI and 0's elsewhere. There can be more than one ROI in the mask image, and the shape of an ROI can be arbitrary, although rectangular shapes are the most common.

Fig. 7 – (a) digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b)
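Masking is a single elementwise product; a minimal sketch with a hypothetical rectangular ROI:

```python
import numpy as np

def apply_roi(img, mask):
    """mask has 1 inside the region(s) of interest, 0 elsewhere."""
    return img * mask

# Example rectangular ROI (coordinates are illustrative):
# mask = np.zeros_like(img); mask[50:120, 80:200] = 1
# roi_img = apply_roi(img, mask)
```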


In practice, most images are displayed using 8 bits, so image values are expected to be in the range [0, 255]. For TIFF and JPEG images, conversion to this range is automatic, but the conversion depends on the system used. The difference of two images can produce an image with values in the range [−255, 255], and the addition of two images values in the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure is to compute

$$f_m = f - \min(f)$$

$$f_s = K \, \frac{f_m}{\max(f_m)}$$

which yields an image f_s whose values span the full range [0, K] (K = 255 for 8-bit displays).
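The same two steps in code, assuming f is a (possibly negative-valued) numeric array that is not constant:

```python
import numpy as np

def scale_full_range(f, K=255):
    """f_m = f - min(f); f_s = K * f_m / max(f_m); output in [0, K]."""
    fm = f.astype(np.float64) - f.min()
    fs = K * fm / fm.max()
    return fs.astype(np.uint8)

# e.g. displaying a difference image d = a.astype(int) - b.astype(int):
# disp = scale_full_range(d)
```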


Spatial Operations

- are performed directly on the pixels of a given image.

There are three categories of spatial operations:

• single-pixel operations
• neighborhood operations
• geometric spatial transformations

Single-pixel operations

- change the intensity values of individual pixels: $s = T(z)$, where z is the intensity of a pixel in the original image and s is the intensity of the corresponding pixel in the processed image. (Neighborhood operations and geometric spatial transformations were discussed earlier.)

Digital Image Processing

Week 1

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

( )

1( ) ( )xyr c S

g x y f r cm n

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation ( ) [( )]x y T v w

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image

Digital Image Processing

Week 1

[( )] ( )2 2v wT v w shrinks the original image half its size in both spatial directions

Affine transform

11 1211 21 31

21 2212 22 33

31 32

0[ 1] [ 1] [ 1] 0

1

t tx t v t w t

x y v w T v w t ty t v t w t

t t

(AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

Digital Image Processing

Week 1

inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image (v w) 1( ) ( )v w T x y

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 99: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image Processing

Week 1

a b c d Fig 5 ndash Angiography ndash subtraction example (a) ndash mask image (b) ndash live image (c) ndash difference between (a) and (b) (d) - image (c) enhanced

Digital Image Processing

Week 1

An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form ( ) ( ) ( )g x y f x y h x y

f(x y) ndash the ldquoperfect imagerdquo h(x y) ndash the shading function

When the shading function is known

( )( )( )

g x yf x yh x y

h(x y) is unknown but we have access to the imaging system we can obtain an

approximation to the shading function by imaging a target of constant intensity When the

sensor is not available often the shading pattern can be estimated from the image

Digital Image Processing

Week 1

(a) (b) (c)

Fig 6 Shading correction (a) ndash Shaded image of a tungsten filament magnified 130 times (b) - shading pattern (c) corrected image

Digital Image Processing

Week 1

Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1rsquos (white) in the ROI and 0rsquos elsewhere There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

(a) (b) (c)

Fig 7 (a) ndash digital dental X-ray image (b) - ROI mask for teeth with fillings (c) product of (a) and (b)

Digital Image Processing

Week 1

In practice most images are displayed using 8 bits the image values are in the range [0255] TIFF JPEG images ndash conversion to this range is automatic The conversion depends on the system used Difference of two images can produce image with values in the range [-255255] Addition of two images ndash range [0510] Many software packages simply set the negative values to 0 and set to 255 all values greater than 255 A more appropriate procedure compute

min( )mf f f

0 ( 255)max( )

ms

m

ff K K K

f

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Digital Image Processing

Week 1

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

( )

1( ) ( )xyr c S

g x y f r cm n

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation ( ) [( )]x y T v w

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image

Digital Image Processing

Week 1

[( )] ( )2 2v wT v w shrinks the original image half its size in both spatial directions

Affine transform

11 1211 21 31

21 2212 22 33

31 32

0[ 1] [ 1] [ 1] 0

1

t tx t v t w t

x y v w T v w t ty t v t w t

t t

(AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

Digital Image Processing

Week 1

inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image (v w) 1( ) ( )v w T x y

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 100: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image Processing

Week 1

An important application of image multiplication (and division) is shading correction

Suppose that an imaging sensor produces images in the form ( ) ( ) ( )g x y f x y h x y

f(x y) ndash the ldquoperfect imagerdquo h(x y) ndash the shading function

When the shading function is known

( )( )( )

g x yf x yh x y

h(x y) is unknown but we have access to the imaging system we can obtain an

approximation to the shading function by imaging a target of constant intensity When the

sensor is not available often the shading pattern can be estimated from the image

Digital Image Processing

Week 1

(a) (b) (c)

Fig 6 Shading correction (a) ndash Shaded image of a tungsten filament magnified 130 times (b) - shading pattern (c) corrected image

Digital Image Processing

Week 1

Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1rsquos (white) in the ROI and 0rsquos elsewhere There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

(a) (b) (c)

Fig 7 (a) ndash digital dental X-ray image (b) - ROI mask for teeth with fillings (c) product of (a) and (b)

Digital Image Processing

Week 1

In practice most images are displayed using 8 bits the image values are in the range [0255] TIFF JPEG images ndash conversion to this range is automatic The conversion depends on the system used Difference of two images can produce image with values in the range [-255255] Addition of two images ndash range [0510] Many software packages simply set the negative values to 0 and set to 255 all values greater than 255 A more appropriate procedure compute

min( )mf f f

0 ( 255)max( )

ms

m

ff K K K

f

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Digital Image Processing

Week 1

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

( )

1( ) ( )xyr c S

g x y f r cm n

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation ( ) [( )]x y T v w

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image

Digital Image Processing

Week 1

[( )] ( )2 2v wT v w shrinks the original image half its size in both spatial directions

Affine transform

11 1211 21 31

21 2212 22 33

31 32

0[ 1] [ 1] [ 1] 0

1

t tx t v t w t

x y v w T v w t ty t v t w t

t t

(AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

Digital Image Processing

Week 1

inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image (v w) 1( ) ( )v w T x y

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image


Fig 6 - Shading correction: (a) shaded image of a tungsten filament, magnified 130 times; (b) shading pattern; (c) corrected image


Another use of image multiplication is in masking, also called region-of-interest (ROI) operations. The process consists of multiplying a given image by a mask image that has 1s (white) in the ROI and 0s elsewhere. There can be more than one ROI in the mask image, and the shape of an ROI can be arbitrary, although rectangular shapes are the most common.

Fig 7 - (a) digital dental X-ray image; (b) ROI mask for teeth with fillings; (c) product of (a) and (b)
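To make this concrete, here is a minimal Python/NumPy sketch of an ROI operation (the image and mask here are synthetic placeholders, not the dental X-ray of Fig 7):

```python
import numpy as np

# Synthetic 8-bit image and a binary mask with one rectangular ROI.
image = np.random.randint(0, 256, size=(256, 256), dtype=np.uint8)
mask = np.zeros_like(image)
mask[64:128, 96:192] = 1        # 1s inside the ROI, 0s elsewhere

roi = image * mask              # pixel-wise product keeps only the ROI
```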


In practice, most images are displayed using 8 bits, so image values are expected to lie in the range [0, 255] (for TIFF or JPEG images the conversion to this range is automatic, and depends on the system used). The difference of two images can produce values in the range [-255, 255], and the addition of two images produces values in the range [0, 510]. Many software packages simply set the negative values to 0 and set to 255 all values greater than 255. A more appropriate procedure is to compute

fm = f - min(f)
fs = K · fm / max(fm)

which guarantees 0 <= fs <= K (K = 255 for 8-bit display).
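A sketch of this scaling procedure in NumPy, applied to the difference of two synthetic images (it assumes f is not constant, so max(fm) > 0):

```python
import numpy as np

def scale_to_range(f, K=255):
    """Shift f to be non-negative, then scale its maximum to K."""
    fm = f - f.min()              # fm = f - min(f), so min(fm) = 0
    fs = K * fm / fm.max()        # fs lies in [0, K]; assumes fm.max() > 0
    return fs.astype(np.uint8)

a = np.random.randint(0, 256, (4, 4)).astype(np.int32)
b = np.random.randint(0, 256, (4, 4)).astype(np.int32)
print(scale_to_range(a - b))      # difference values in [-255, 255]
```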


Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations:
- single-pixel operations
- neighborhood operations
- geometric spatial transformations

Single-pixel operations

- change the intensity values of individual pixels: s = T(z), where z is the intensity of a pixel in the original image and s is the intensity of the corresponding pixel in the processed image


Neighborhood operations

Let Sxy denote the set of coordinates of a neighborhood centered on an arbitrary point (x, y) in an image f. Neighborhood processing generates a new intensity level at point (x, y) based on the values of the intensities of the points in Sxy. For example, if Sxy is a rectangular neighborhood of size m × n centered at (x, y), we can assign the new value of intensity by computing the average value of the pixels in Sxy:

g(x, y) = (1 / (m·n)) · Σ_{(r,c) ∈ Sxy} f(r, c)

The net effect is to perform local blurring in the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
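A direct, unoptimized sketch of this neighborhood averaging (the window is simply cropped at the image borders; a real implementation would use a convolution routine instead):

```python
import numpy as np

def local_average(f, m=3, n=3):
    """Replace each pixel by the mean of its m x n neighborhood S_xy."""
    M, N = f.shape
    g = np.empty_like(f, dtype=np.float64)
    for x in range(M):
        for y in range(N):
            r0, r1 = max(0, x - m // 2), min(M, x + m // 2 + 1)
            c0, c1 = max(0, y - n // 2), min(N, y + n // 2 + 1)
            g[x, y] = f[r0:r1, c0:c1].mean()   # average over S_xy
    return g
```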


Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules)

A geometric transformation consists of two basic operations:
1. a spatial transformation of coordinates;
2. intensity interpolation, which assigns intensity values to the spatially transformed pixels.

The coordinate system transformation: (x, y) = T[(v, w)]

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image


For example, T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.

Affine transform

[x y 1] = [v w 1] T = [v w 1] | t11  t12  0 |
                              | t21  t22  0 |        (AT)
                              | t31  t32  1 |

that is, x = t11·v + t21·w + t31 and y = t12·v + t22·w + t32.

This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3×3 matrix equal to the matrix product of the corresponding scaling, rotation, and translation matrices from Table 1.
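As an illustration, a sketch composing scaling, rotation, and translation matrices in the row-vector convention of (AT); the numeric parameters are arbitrary:

```python
import numpy as np

theta = np.deg2rad(30.0)                   # rotation angle
scale = np.diag([0.5, 0.5, 1.0])           # shrink to half size
rotate = np.array([[ np.cos(theta), np.sin(theta), 0],
                   [-np.sin(theta), np.cos(theta), 0],
                   [ 0,             0,             1]])
translate = np.array([[ 1,  0, 0],
                      [ 0,  1, 0],
                      [40, 60, 1]])        # t31, t32: the shift

T = scale @ rotate @ translate             # one combined 3x3 matrix
x, y, _ = np.array([10.0, 20.0, 1.0]) @ T  # transform the point (v, w) = (10, 20)
```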


Table 1 - Affine transformations (the matrices for identity, scaling, rotation, translation, and shear)


The preceding transformations relocate pixels of an image to new locations. To complete the process, we have to assign intensity values to those locations; this task is done using intensity interpolation (e.g., nearest-neighbor, bilinear, or bicubic interpolation).

In practice, we can use equation (AT) in two basic ways:
- forward mapping: scan the pixels of the input image (v, w) and compute the new spatial location (x, y) of the corresponding pixel in the output image using (AT) directly.

Problems:
- intensity assignment when two or more pixels in the original image are transformed to the same location in the output image;
- some output locations have no correspondent in the original image (no intensity assignment).


- inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image as (v, w) = T^(-1)[(x, y)]; then interpolate among the nearest input pixels to determine the intensity of the output pixel value.

Inverse mappings are more efficient to implement than forward mappings and are used in numerous commercial implementations of spatial transformations (e.g., in MATLAB).
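A sketch of inverse mapping with nearest-neighbor interpolation for a simple 2× enlargement (so T^(-1) halves each output coordinate); this is only an illustration, not MATLAB's implementation:

```python
import numpy as np

def scale_up_2x(f):
    """Inverse mapping: for each output pixel, look up the input pixel."""
    M, N = f.shape
    g = np.zeros((2 * M, 2 * N), dtype=f.dtype)
    for x in range(2 * M):
        for y in range(2 * N):
            v, w = x / 2.0, y / 2.0                 # (v, w) = T^(-1)[(x, y)]
            g[x, y] = f[min(int(v + 0.5), M - 1),   # nearest input neighbor,
                        min(int(w + 0.5), N - 1)]   # clamped at the border
    return g
```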


Image registration - align two or more images of the same scene. In image registration, the input and output images are available, but the specific transformation that produced the output image from the input is generally unknown. The problem is to estimate the transformation function and then use it to register the two images.

- it may be of interest to align (register) two or more images taken at approximately the same time but using different imaging systems (e.g., an MRI scanner and a PET scanner);
- or to align images of a given location taken by the same instrument at different moments in time (e.g., satellite images).

One way of solving the problem is by using tie points (also called control points): corresponding points whose locations are known precisely in the input and reference images.


How to select tie points:
- interactively selecting them;
- using algorithms that try to detect these points automatically;
- exploiting known artifacts: some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors; these produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points.

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points, both in the input image and in the reference image. A simple model based on a bilinear approximation is given by

x = c1·v + c2·w + c3·v·w + c4
y = c5·v + c6·w + c7·v·w + c8

where (v, w) and (x, y) are the coordinates of corresponding tie points (the 4 pairs of tie points yield an 8×8 linear system for the coefficients ci).
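A sketch of estimating the eight coefficients from 4 tie-point pairs (the coordinates below are made up for illustration):

```python
import numpy as np

# 4 tie points in the input (v, w) and reference (x, y) images (made-up values).
vw = np.array([[10, 10], [10, 200], [200, 10], [200, 200]], dtype=float)
xy = np.array([[12, 15], [14, 210], [205, 13], [215, 220]], dtype=float)

# Each pair contributes x = c1 v + c2 w + c3 v w + c4 and the analogous
# equation for y, so stacking them gives an 8x8 linear system A c = b.
A = np.zeros((8, 8))
b = np.zeros(8)
for i, ((v, w), (x, y)) in enumerate(zip(vw, xy)):
    A[2 * i,     0:4] = [v, w, v * w, 1]   # row for x: coefficients c1..c4
    A[2 * i + 1, 4:8] = [v, w, v * w, 1]   # row for y: coefficients c5..c8
    b[2 * i], b[2 * i + 1] = x, y

c = np.linalg.solve(A, b)                  # c1..c8
```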


When 4 tie points are insufficient to obtain a satisfactory registration, a frequently used approach is to select a larger number of tie points and to subdivide the image into rectangular subregions, each marked by a group of 4 tie points; the transformation model described above is then applied separately on each subregion. The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometric distortion.


(a) reference image; (b) geometrically distorted image; (c) registered image; (d) difference between (a) and (c)


Probabilistic Methods

Let zi, i = 0, 1, ..., L-1, denote the values of all possible intensities in an M×N digital image, and let p(zk) be the probability that intensity level zk occurs in the given image:

p(zk) = nk / (M·N)

where nk is the number of times that intensity zk occurs in the image (MN is the total number of pixels in the image). The probabilities satisfy

Σ_{k=0}^{L-1} p(zk) = 1

The mean (average) intensity of an image is given by

m = Σ_{k=0}^{L-1} zk · p(zk)


The variance of the intensities is

σ² = Σ_{k=0}^{L-1} (zk - m)² · p(zk)

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

μn(z) = Σ_{k=0}^{L-1} (zk - m)^n · p(zk)

(note that μ0(z) = 1, μ1(z) = 0, and μ2(z) = σ²)

μ3(z) > 0: the intensities are biased to values higher than the mean;
μ3(z) < 0: the intensities are biased to values lower than the mean;
μ3(z) = 0: the intensities are distributed approximately equally on both sides of the mean.

Fig 1 - (a) low contrast; (b) medium contrast; (c) high contrast

Figure 1(a) - standard deviation 14.3 (variance = 204.5)
Figure 1(b) - standard deviation 31.6 (variance = 998.6)
Figure 1(c) - standard deviation 49.2 (variance = 2420.6)
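These quantities can be computed directly from the normalized histogram; a short NumPy sketch for an 8-bit image (L = 256), using a random image as a stand-in:

```python
import numpy as np

img = np.random.randint(0, 256, size=(512, 512))
L = 256

n_k = np.bincount(img.ravel(), minlength=L)  # occurrences of each level z_k
p = n_k / img.size                           # p(z_k) = n_k / (M N), sums to 1
z = np.arange(L)

mean = np.sum(z * p)                         # m
var = np.sum((z - mean) ** 2 * p)            # sigma^2 (2nd central moment)
mu3 = np.sum((z - mean) ** 3 * p)            # 3rd moment: bias about the mean
print(mean, np.sqrt(var), mu3)
```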


Intensity Transformations and Spatial Filtering

g(x, y) = T[f(x, y)]

f(x, y) - input image; g(x, y) - output image; T - an operator on f, defined over a neighborhood of (x, y)
- the neighborhood of the point (x, y), Sxy, usually is rectangular, centered on (x, y), and much smaller in size than the image


- spatial filtering: the operator T (the neighborhood together with the operation applied on it) is called a spatial filter (also spatial mask, kernel, template, or window)

When the neighborhood Sxy has size 1×1 (a single pixel), g(x, y) = T[f(x, y)] becomes an intensity (gray-level, or mapping) transformation function

s = T(r)

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2 (left): T produces an output image of higher contrast than the original by darkening the intensity levels below a value k and brightening the levels above k; this technique is called contrast stretching.

Fig 2 - Intensity transformation functions: left - contrast stretching; right - thresholding function


Figure 2 (right): T produces a binary output image; a mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L-1] is obtained using the function s = T(r) = L - 1 - r
- the equivalent of a photographic negative
- a technique suited for enhancing white or gray detail embedded in dark regions of an image
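A one-line NumPy sketch for an 8-bit image (L = 256), using a random stand-in image:

```python
import numpy as np

img = np.random.randint(0, 256, size=(128, 128), dtype=np.uint8)
negative = 255 - img    # s = L - 1 - r, with L = 256
```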


Original image (left) and its negative (right)


Log Transformations: s = T(r) = c·log(1 + r), where c is a constant and r ≥ 0

(Plot: some basic intensity transformation functions)


This transformation maps a narrow range of low intensity values in the input into a wider range of output values; an operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values. The opposite is true for the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a) - intensity values in the range 0 to 1.5 × 10^6
Figure 4(b) - log transformation of Figure 4(a) with c = 1; resulting range 0 to 6.2

Fig 4 - (a) Fourier spectrum; (b) log transformation applied to (a), c = 1
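A sketch of the log transformation as typically applied to a spectrum, with the result rescaled to [0, 255] for display (the input is a stand-in for a Fourier spectrum, not the image of Fig 4):

```python
import numpy as np

spectrum = np.abs(np.fft.fft2(np.random.rand(256, 256)))  # huge dynamic range
s = np.log1p(spectrum)                                    # s = c log(1 + r), c = 1
display = (255 * s / s.max()).astype(np.uint8)            # rescale for display
```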


Power-Law (Gamma) Transformations

s = T(r) = c·r^γ, where c and γ are positive constants (sometimes written s = c·(r + ε)^γ to account for an offset when the input is zero)

Plots of gamma transformation for different values of γ (c=1)


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. The curves with γ > 1 have the opposite effect of those generated with γ < 1.
c = γ = 1: identity transformation

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
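A sketch of gamma correction for an 8-bit image: normalize to [0, 1], apply s = c·r^γ, and rescale. The value γ = 1/2.2 below is the common display-correction value, chosen only for illustration, not taken from the text:

```python
import numpy as np

def gamma_transform(img, gamma, c=1.0):
    r = img.astype(np.float64) / 255.0       # normalize intensities to [0, 1]
    s = c * r ** gamma                       # s = c r^gamma
    return np.clip(255 * s, 0, 255).astype(np.uint8)

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
corrected = gamma_transform(img, gamma=1 / 2.2)   # gamma < 1 brightens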


(a) aerial image; (b)-(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively


Piecewise-Linear Transformation Functions

Contrast stretching
- a process that expands the range of intensity levels in an image so that it spans the full intensity range of the recording medium or display device

Fig 5 - (a) piecewise-linear transformation function; (b) low-contrast image; (c) result of contrast stretching; (d) result of thresholding


T(r) =
  (s1/r1)·r,                                   for r in [0, r1]
  s1 + ((s2 - s1)/(r2 - r1))·(r - r1),         for r in (r1, r2]
  s2 + ((L - 1 - s2)/(L - 1 - r2))·(r - r2),   for r in (r2, L - 1]


r1 = s1 and r2 = s2: identity transformation (no change)
r1 = r2, s1 = 0 and s2 = L - 1: thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching, obtained by setting (r1, s1) = (rmin, 0) and (r2, s2) = (rmax, L - 1), where rmin and rmax denote the minimum and maximum gray levels in the image, respectively; thus, the transformation function stretched the levels linearly from their original range to the full range [0, L-1].

Figure 5(d) - the thresholding function was used, with (r1, s1) = (m, 0) and (r2, s2) = (m, L - 1), where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times
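A sketch of the Figure 5(c) setting, (r1, s1) = (rmin, 0) and (r2, s2) = (rmax, L-1), which reduces the piecewise function to a single linear map (it assumes the image is not constant):

```python
import numpy as np

def contrast_stretch(img, L=256):
    """Map [rmin, rmax] linearly onto the full range [0, L-1]."""
    r_min, r_max = img.min(), img.max()
    stretched = (img.astype(np.float64) - r_min) * (L - 1) / (r_max - r_min)
    return stretched.astype(np.uint8)

low_contrast = np.random.randint(90, 160, (64, 64), dtype=np.uint8)
out = contrast_stretch(low_contrast)
print(out.min(), out.max())    # 0 and 255: the full 8-bit range
```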


Intensity-level slicing
- highlighting a specific range of intensities in an image

There are two approaches to intensity-level slicing (both are sketched in the code below):
1. display all the values in the range of interest in one value (white, for example) and all other intensities in another (say, black) - see Figure 3.11(a);
2. brighten (or darken) the desired range of intensities but leave all other intensities in the image unchanged - see Figure 3.11(b).
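A NumPy sketch of both approaches for an arbitrary band of interest [A, B] (the band limits are placeholders):

```python
import numpy as np

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
A, B = 150, 230                      # band of interest (placeholder values)
in_band = (img >= A) & (img <= B)

# Approach 1: binary output - white inside [A, B], black elsewhere.
sliced_binary = np.where(in_band, 255, 0).astype(np.uint8)

# Approach 2: brighten the band, leave all other intensities unchanged.
sliced_soft = img.copy()
sliced_soft[in_band] = 255
```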


Figure 6 (left) - aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the scale of intensities; this type of enhancement produces a binary image, which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example).

In Figure 6 (right) the second technique was used: a band of intensities in the mid-gray range around the mean intensity was set to black, while the other intensities remain unchanged.

(The two slicing transformations used: one highlights the intensity range [A, B] and reduces all other intensities to a lower level; the other highlights the range [A, B] and preserves all other intensities.)

Fig 6 - Aortic angiogram and intensity sliced versions


Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2. This technique highlights the contribution made to the overall image appearance by each of the bits: an 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 - the lowest-order bit; plane 8 - the highest-order bit).


The binary image for the 8th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding transformation function that maps all intensities between 0 and 127 to 0, and all levels between 128 and 255 to 1. The bit-slicing technique is useful for analyzing the relative importance of each bit in the image, which helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
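A sketch extracting all eight bit planes (plane 1 = lowest-order bit, plane 8 = highest-order bit, matching the numbering above), plus a coarse reconstruction from the two highest planes:

```python
import numpy as np

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)

# planes[k] holds bit k+1; e.g. planes[7] is the 8th (highest-order) plane,
# equivalent to thresholding: 0..127 -> 0, 128..255 -> 1.
planes = [(img >> k) & 1 for k in range(8)]

# Reconstruction from the two highest-order planes only (coarse approximation).
approx = (planes[7] << 7) | (planes[6] << 6)
```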

  • DIP 1 2017
  • DIP 02 (2017)
Page 102: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image Processing

Week 1

Another use of image multiplication is in masking also called region of interest (ROI)

operations The process consists of multiplying a given image by a mask image that has

1rsquos (white) in the ROI and 0rsquos elsewhere There can be more than one ROI in the mask

image and the shape of the ROI can be arbitrary but usually is a rectangular shape

(a) (b) (c)

Fig 7 (a) ndash digital dental X-ray image (b) - ROI mask for teeth with fillings (c) product of (a) and (b)

Digital Image Processing

Week 1

In practice most images are displayed using 8 bits the image values are in the range [0255] TIFF JPEG images ndash conversion to this range is automatic The conversion depends on the system used Difference of two images can produce image with values in the range [-255255] Addition of two images ndash range [0510] Many software packages simply set the negative values to 0 and set to 255 all values greater than 255 A more appropriate procedure compute

min( )mf f f

0 ( 255)max( )

ms

m

ff K K K

f

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Digital Image Processing

Week 1

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

( )

1( ) ( )xyr c S

g x y f r cm n

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation ( ) [( )]x y T v w

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image

Digital Image Processing

Week 1

[( )] ( )2 2v wT v w shrinks the original image half its size in both spatial directions

Affine transform

11 1211 21 31

21 2212 22 33

31 32

0[ 1] [ 1] [ 1] 0

1

t tx t v t w t

x y v w T v w t ty t v t w t

t t

(AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

Digital Image Processing

Week 1

inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image (v w) 1( ) ( )v w T x y

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 103: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image Processing

Week 1

In practice most images are displayed using 8 bits the image values are in the range [0255] TIFF JPEG images ndash conversion to this range is automatic The conversion depends on the system used Difference of two images can produce image with values in the range [-255255] Addition of two images ndash range [0510] Many software packages simply set the negative values to 0 and set to 255 all values greater than 255 A more appropriate procedure compute

min( )mf f f

0 ( 255)max( )

ms

m

ff K K K

f

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Digital Image Processing

Week 1

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

( )

1( ) ( )xyr c S

g x y f r cm n

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation ( ) [( )]x y T v w

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image

Digital Image Processing

Week 1

[( )] ( )2 2v wT v w shrinks the original image half its size in both spatial directions

Affine transform

11 1211 21 31

21 2212 22 33

31 32

0[ 1] [ 1] [ 1] 0

1

t tx t v t w t

x y v w T v w t ty t v t w t

t t

(AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

Digital Image Processing

Week 1

inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image (v w) 1( ) ( )v w T x y

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 104: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image Processing

Week 1

Spatial Operations

- are performed directly on the pixels of a given image

There are three categories of spatial operations

single-pixel operations

neighborhood operations

geometric spatial transformations

Single-pixel operations

- change the values of intensity for the individual pixels ( )s T z

where z is the intensity of a pixel in the original image and s is the intensity of the

corresponding pixel in the processed image

Digital Image Processing

Week 1

Neighborhood operations

Let Sxy denote a set of coordinates of a neighborhood centered on an arbitrary point (x y)

in an image f Neighborhood processing generates new intensity level at point (x y)

based on the values of the intensities of the points in Sxy For example if Sxy is a

rectangular neighborhood of size m x n centered in (x y) we can assign the new value of

intensity by computing the average value of the pixels in Sxy

( )

1( ) ( )xyr c S

g x y f r cm n

The net effect is to perform local blurring in the original image This type of process is

used for example to eliminate small details and thus render ldquoblobsrdquo corresponding to the

largest region of an image

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation ( ) [( )]x y T v w

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image

Digital Image Processing

Week 1

[( )] ( )2 2v wT v w shrinks the original image half its size in both spatial directions

Affine transform

11 1211 21 31

21 2212 22 33

31 32

0[ 1] [ 1] [ 1] 0

1

t tx t v t w t

x y v w T v w t ty t v t w t

t t

(AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

Digital Image Processing

Week 1

inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image (v w) 1( ) ( )v w T x y

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

- spatial filtering: the operator $T$ (the neighborhood together with the operation applied on it) is called a spatial filter (also: spatial mask, kernel, template, or window)

When $S_{xy}$ reduces to the single point $(x, y)$, $T$ becomes an intensity (gray-level or mapping) transformation function

$s = T(r)$

where $s$ and $r$ denote, respectively, the intensity of $g$ and $f$ at $(x, y)$.

Figure 2 (left): $T$ produces an output image of higher contrast than the original by darkening the intensity levels below $k$ and brightening the levels above $k$ – this technique is called contrast stretching.

Fig. 2: Intensity transformation functions. Left: contrast stretching; right: thresholding function.

Figure 2 (right): $T$ produces a binary output image; a mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in $[0, L-1]$ is obtained using the function

$s = T(r) = L - 1 - r$

- equivalent of a photographic negative
- technique suited for enhancing white or gray detail embedded in dark regions of an image

Left: original image; right: negative image
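In code the negative transformation is a one-liner (a sketch for an 8-bit image, so $L - 1 = 255$; the test image is made up):

```python
import numpy as np

img = np.random.randint(0, 256, size=(128, 128), dtype=np.uint8)
negative = 255 - img   # s = (L - 1) - r with L = 256
```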

Log Transformations

$s = T(r) = c \log(1 + r)$, $c$ – constant, $r \ge 0$

Fig. 3: Some basic intensity transformation functions

This transformation maps a narrow range of low intensity values in the input into a wider range of output values. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values; the opposite is true for the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a) – intensity values in the range 0 to $1.5 \times 10^6$
Figure 4(b) – log transformation of Figure 4(a) with $c = 1$ – range 0 to 6.2

Fig. 4: (a) Fourier spectrum; (b) log transformation applied to (a), $c = 1$
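A short sketch of the log transformation in NumPy, applied to the magnitude of a Fourier spectrum as in Fig. 4 (the final rescaling to 8 bits for display is my assumption, not part of the formula):

```python
import numpy as np

def log_transform(r, c=1.0):
    """s = c * log(1 + r), computed pixel-wise on a non-negative array."""
    return c * np.log1p(r.astype(np.float64))

img = np.random.rand(256, 256)
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))  # very large dynamic range
s = log_transform(spectrum)
display = np.uint8(255 * s / s.max())  # compressed range, viewable as 8-bit
```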

Power-Law (Gamma) Transformations

$s = T(r) = c\, r^{\gamma}$, with $c$ and $\gamma$ positive constants (sometimes written $s = c\,(r + \varepsilon)^{\gamma}$ to account for an offset)

Plots of the gamma transformation for different values of γ ($c = 1$)

Power-law curves with $\gamma < 1$ map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. The curves with $\gamma > 1$ have the opposite effect of those generated with $\gamma < 1$.

$c = \gamma = 1$: identity transformation

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.

(a) – aerial image; (b)–(d) – results of applying the gamma transformation with $c = 1$ and $\gamma = 3.0$, $4.0$, and $5.0$, respectively
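A sketch of the gamma transformation in NumPy; normalizing the 8-bit intensities to [0, 1] before applying the power law is a common convention assumed here, not something stated on the slide:

```python
import numpy as np

def gamma_transform(f, gamma, c=1.0):
    """s = c * r**gamma on intensities normalized to [0, 1]."""
    r = f.astype(np.float64) / 255.0
    s = c * np.power(r, gamma)
    return np.uint8(255 * np.clip(s, 0.0, 1.0))

img = np.random.randint(0, 256, size=(128, 128), dtype=np.uint8)
brighter = gamma_transform(img, 0.4)  # gamma < 1: expands dark values
darker = gamma_transform(img, 3.0)    # gamma > 1: as in the aerial-image example
```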

Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so that it spans the full intensity range of the recording or display device

Fig. 5: (a) piecewise-linear transformation function; (b) low-contrast 8-bit image; (c) result of contrast stretching; (d) result of thresholding

$$
s = T(r) =
\begin{cases}
\dfrac{s_1}{r_1}\, r, & r \in [0, r_1] \\
s_1 + \dfrac{s_2 - s_1}{r_2 - r_1}\,(r - r_1), & r \in (r_1, r_2] \\
s_2 + \dfrac{L - 1 - s_2}{L - 1 - r_2}\,(r - r_2), & r \in (r_2, L-1]
\end{cases}
$$

$r_1 = s_1$, $r_2 = s_2$: identity transformation (no change)

$r_1 = r_2$, $s_1 = 0$, $s_2 = L - 1$: thresholding function

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c) – contrast stretching, obtained by setting the parameters $r_1 = r_{\min}$, $s_1 = 0$, $r_2 = r_{\max}$, $s_2 = L - 1$, where $r_{\min}$ and $r_{\max}$ denote the minimum and maximum gray levels in the image, respectively; the transformation function thus stretched the levels linearly from their original range to the full range $[0, L-1]$.

Figure 5(d) – the thresholding function was used, with $r_1 = r_2 = m$, $s_1 = 0$, $s_2 = L - 1$, where $m$ is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.
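A sketch of the Figure 5(c) setting in NumPy (assumes a non-constant 8-bit image so that $r_{\min} < r_{\max}$; the names are mine):

```python
import numpy as np

def contrast_stretch(f, L=256):
    """Linear stretch with r1 = r_min, s1 = 0, r2 = r_max, s2 = L - 1,
    i.e. the Figure 5(c) setting of the piecewise-linear function."""
    r_min, r_max = float(f.min()), float(f.max())
    s = (f.astype(np.float64) - r_min) * (L - 1) / (r_max - r_min)
    return np.uint8(s)

# 8-bit image with intensities confined to [90, 160]
img = np.random.randint(90, 161, size=(128, 128), dtype=np.uint8)
stretched = contrast_stretch(img)  # levels now span the full range [0, 255]
```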

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches to intensity-level slicing:

1. display in one value (white, for example) all the intensities in the range of interest and in another (say, black) all other intensities – this highlights the range $[A, B]$ and reduces all other intensities to a lower level;
2. brighten (or darken) the desired range of intensities but leave unchanged all other intensities in the image – this highlights the range $[A, B]$ and preserves all other intensities.

Figure 6 (left) – aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the scale of intensities; this type of enhancement produces a binary image, which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example).

In Figure 6 (right) the second technique was used: a band of intensities in the mid-gray region around the mean intensity was set to black, while the other intensities remain unchanged.

Fig. 6 – Aortic angiogram and intensity-sliced versions
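Both slicing approaches are easy to express in NumPy; the band [200, 255] below is an arbitrary choice standing in for the bright-vessel band of the angiogram:

```python
import numpy as np

def slice_binary(f, a, b):
    """Approach 1: white inside [a, b], black elsewhere (binary output)."""
    return np.where((f >= a) & (f <= b), 255, 0).astype(np.uint8)

def slice_preserve(f, a, b, value=255):
    """Approach 2: set the band [a, b] to a chosen value, leave the rest unchanged."""
    g = f.copy()
    g[(f >= a) & (f <= b)] = value
    return g

img = np.random.randint(0, 256, size=(128, 128), dtype=np.uint8)
vessels = slice_binary(img, 200, 255)       # a band near the top of the scale
highlighted = slice_preserve(img, 200, 255)
```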

Bit-plane slicing

For an 8-bit image, $f(x, y)$ is a number in $[0, 255]$ with an 8-bit representation in base 2.

This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 – the lowest-order bit, plane 8 – the highest-order bit).

The binary image for the 8th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding transformation function that maps all intensities between 0 and 127 to 0 and all levels between 128 and 255 to 1. The bit-slicing technique is useful for analyzing the relative importance of each bit in the image – it helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
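A sketch of bit-plane extraction with NumPy; the final assertion checks the equivalence with the thresholding description above:

```python
import numpy as np

def bit_plane(f, k):
    """Bit plane k of an 8-bit image (k = 1: lowest order, k = 8: highest order)."""
    return (f >> (k - 1)) & 1

img = np.random.randint(0, 256, size=(128, 128), dtype=np.uint8)
plane8 = bit_plane(img, 8)
# Extracting plane 8 is the thresholding described above: 0..127 -> 0, 128..255 -> 1
assert np.array_equal(plane8, (img >= 128).astype(np.uint8))
```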


Neighborhood operations

Let $S_{xy}$ denote the set of coordinates of a neighborhood centered on an arbitrary point $(x, y)$ in an image $f$. Neighborhood processing generates a new intensity level at point $(x, y)$ based on the values of the intensities of the points in $S_{xy}$. For example, if $S_{xy}$ is a rectangular neighborhood of size $m \times n$ centered at $(x, y)$, we can assign the new value of intensity by computing the average value of the pixels in $S_{xy}$:

$g(x, y) = \dfrac{1}{mn} \sum_{(r, c) \in S_{xy}} f(r, c)$

The net effect is to perform local blurring in the original image. This type of process is used, for example, to eliminate small details and thus render "blobs" corresponding to the largest regions of an image.
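A direct, loop-based sketch of this averaging operation, assuming zero padding at the image borders (the padding choice is mine; it is not specified in the text):

```python
import numpy as np

def local_average(f, m=3, n=3):
    """g(x, y) = mean of f over an m x n neighborhood centered at (x, y)."""
    pm, pn = m // 2, n // 2
    padded = np.pad(f.astype(np.float64), ((pm, pm), (pn, pn)))  # zero padding
    g = np.empty(f.shape, dtype=np.float64)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            g[x, y] = padded[x:x + m, y:y + n].mean()
    return np.uint8(g)

img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
blurred = local_average(img, 5, 5)  # a larger neighborhood blurs more strongly
```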

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image
- these transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules)

A geometric transformation consists of 2 basic operations:

1. a spatial transformation of coordinates;
2. an intensity interpolation that assigns intensity values to the spatially transformed pixels.

The coordinate system transformation: $(x, y) = T[(v, w)]$

$(v, w)$ – pixel coordinates in the original image
$(x, y)$ – pixel coordinates in the transformed image

$T[(v, w)] = (v/2, w/2)$ shrinks the original image to half its size in both spatial directions.

Affine transform:

$$
[x \;\; y \;\; 1] = [v \;\; w \;\; 1]\, \mathbf{T} = [v \;\; w \;\; 1]
\begin{bmatrix}
t_{11} & t_{12} & 0 \\
t_{21} & t_{22} & 0 \\
t_{31} & t_{32} & 1
\end{bmatrix}
\qquad
\begin{aligned}
x &= t_{11} v + t_{21} w + t_{31} \\
y &= t_{12} v + t_{22} w + t_{32}
\end{aligned}
\qquad \text{(AT)}
$$

This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix $\mathbf{T}$. If we want to resize an image, rotate it, and move the result to some location, we simply form a $3 \times 3$ matrix equal to the matrix product of the scaling, rotation, and translation matrices from Table 1.

Table 1 – Affine transformations
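A sketch of this composition under the row-vector convention of (AT); since Table 1 is not reproduced in this text, the exact sign convention inside the rotation matrix is an assumption:

```python
import numpy as np

def scaling(cx, cy):
    return np.array([[cx, 0, 0], [0, cy, 0], [0, 0, 1]], dtype=float)

def rotation(theta):  # rotation about the origin (one common sign convention)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]], dtype=float)

def translation(tx, ty):
    return np.array([[1, 0, 0], [0, 1, 0], [tx, ty, 1]], dtype=float)

# Row-vector convention of (AT): [x y 1] = [v w 1] @ T, so the leftmost
# factor in the product is applied first.
T = scaling(2, 2) @ rotation(np.pi / 4) @ translation(10, 5)
x, y, _ = np.array([3.0, 4.0, 1.0]) @ T
```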

The preceding transformations relocate pixels of an image to new locations. To complete the process, we have to assign intensity values to those locations; this task is done by intensity interpolation (nearest-neighbor, bilinear, or bicubic interpolation, for example).

In practice, we can use equation (AT) in two basic ways.

Forward mapping: scan the pixels of the input image and, for each $(v, w)$, compute the spatial location $(x, y)$ of the corresponding pixel in the new image using (AT) directly. Problems:

- intensity assignment when 2 or more pixels of the original image are transformed to the same location in the output image;
- some output locations may have no correspondent in the original image (no intensity assignment).

Inverse mapping: scan the output pixel locations and, at each location $(x, y)$, compute the corresponding location in the input image as $(v, w) = T^{-1}[(x, y)]$; then interpolate among the nearest input pixels to determine the intensity of the output pixel.

Inverse mappings are more efficient to implement than forward mappings and are used in numerous commercial implementations of spatial transformations (in MATLAB, for example).
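A sketch of inverse mapping with a crude nearest-pixel (truncation) lookup standing in for proper interpolation; the halving matrix T_inv below corresponds to a 2× zoom:

```python
import numpy as np

def warp_inverse(f, T_inv, out_shape):
    """Inverse mapping: every output pixel (x, y) looks up
    (v, w) = [x y 1] @ T_inv in the input image."""
    g = np.zeros(out_shape, dtype=f.dtype)
    for x in range(out_shape[0]):
        for y in range(out_shape[1]):
            v, w, _ = np.array([x, y, 1.0]) @ T_inv
            vi, wi = int(v), int(w)  # truncation as a crude nearest-pixel choice
            if 0 <= vi < f.shape[0] and 0 <= wi < f.shape[1]:
                g[x, y] = f[vi, wi]
    return g

# Doubling the image size: the inverse transform halves the coordinates,
# so every output pixel finds a source pixel and no holes appear.
T_inv = np.array([[0.5, 0, 0], [0, 0.5, 0], [0, 0, 1.0]])
img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
zoomed = warp_inverse(img, T_inv, (128, 128))
```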

Image registration – align two or more images of the same scene.

In image registration we have available the input and output images, but the specific transformation that produced the output image from the input is generally unknown. The problem is to estimate the transformation function and then use it to register the two images. For example:

- it may be of interest to align (register) two or more images taken at approximately the same time but using different imaging systems (an MRI scanner and a PET scanner, say);
- or to align images of a given location taken by the same instrument at different moments in time (satellite images).

One solution uses tie points (also called control points): corresponding points whose locations are known precisely in the input and reference images, as detailed above.

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 106: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image Processing

Week 1

Geometric spatial transformations and image registration

- modify the spatial relationship between pixels in an image

- these transformations are often called rubber-sheet transformations (analogous to

printing an image on a sheet of rubber and then stretching the sheet according to a

predefined set of rules

A geometric transformation consists of 2 basic operations

1 a spatial transformation of coordinates

2 intensity interpolation that assign intensity values to the spatial transformed

pixels

The coordinate system transformation ( ) [( )]x y T v w

(v w) ndash pixel coordinates in the original image

(x y) ndash pixel coordinates in the transformed image

Digital Image Processing

Week 1

[( )] ( )2 2v wT v w shrinks the original image half its size in both spatial directions

Affine transform

11 1211 21 31

21 2212 22 33

31 32

0[ 1] [ 1] [ 1] 0

1

t tx t v t w t

x y v w T v w t ty t v t w t

t t

(AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

Digital Image Processing

Week 1

inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image (v w) 1( ) ( )v w T x y

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 107: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image Processing

Week 1

[( )] ( )2 2v wT v w shrinks the original image half its size in both spatial directions

Affine transform

11 1211 21 31

21 2212 22 33

31 32

0[ 1] [ 1] [ 1] 0

1

t tx t v t w t

x y v w T v w t ty t v t w t

t t

(AT)

This transform can scale rotate translate or shear a set of coordinate points depending

on the elements of the matrix T If we want to resize an image rotate it and move the

result to some location we simply form a 3x3 matrix equal to the matrix product of the

scaling rotation and translation matrices from Table 1

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

Digital Image Processing

Week 1

inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image (v w) 1( ) ( )v w T x y

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 108: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image Processing

Week 1

Affine transformations

Digital Image Processing

Week 1

The preceding transformations relocate pixels on an image to new locations To complete

the process we have to assign intensity values to those locations This task is done by

using intensity interpolation (like nearest neighbor bilinear bi-cubic interpolation)

In practice we can use equation (AT) in two basic ways

forward mapping scan the pixels of the input image (v w) compute the new spatial

location (x y) of the corresponding pixel in the new image using (AT) directly

Problems

- intensity assignment when 2 or more pixels in the original image are transformed to

the same location in the output image

- some output locations have no correspondent in the original image (no intensity

assignment)

Digital Image Processing

Week 1

inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image (v w) 1( ) ( )v w T x y

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression


Geometric (affine) transformations, given by equation (AT), relocate pixels of an image to new locations. To complete the process, we have to assign intensity values to those locations; this task is done by using intensity interpolation (such as nearest-neighbor, bilinear, or bicubic interpolation).

In practice, we can use equation (AT) in two basic ways:

forward mapping: scan the pixels of the input image and, for each (v, w), compute the new spatial location (x, y) of the corresponding pixel in the new image using (AT) directly.

Problems:

- intensity assignment when 2 or more pixels in the original image are transformed to the same location in the output image;

- some output locations have no correspondent in the original image (no intensity assignment).


inverse mapping: scan the output pixel locations and, at each location (x, y), compute the corresponding location in the input image as (v, w) = T⁻¹(x, y). Then interpolate among the nearest input pixels to determine the intensity of the output pixel value.

Inverse mappings are more efficient to implement than forward mappings and are used in numerous commercial implementations of spatial transformations (MATLAB, for example).
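A minimal sketch of inverse mapping with nearest-neighbor interpolation; representing the inverse transformation as a 3×3 homogeneous matrix is an assumption made here for illustration:

```python
import numpy as np

def warp_inverse(src, T_inv, out_shape):
    """Inverse mapping: for every output pixel (x, y), find its pre-image
    (v, w) = T_inv(x, y) in the input and copy the nearest input pixel.

    T_inv is assumed to be a 3x3 homogeneous (affine) matrix; v indexes
    input columns and w input rows. Output pixels whose pre-image falls
    outside the input image are left at 0."""
    H, W = out_shape
    out = np.zeros(out_shape, dtype=src.dtype)
    ys, xs = np.mgrid[0:H, 0:W]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)])
    v, w, _ = T_inv @ coords           # pre-image of every output location
    v = np.rint(v).astype(int)         # nearest-neighbor interpolation
    w = np.rint(w).astype(int)
    inside = (v >= 0) & (v < src.shape[1]) & (w >= 0) & (w < src.shape[0])
    out.ravel()[inside] = src[w[inside], v[inside]]
    return out
```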


Image registration – align two or more images of the same scene.

In image registration, we have available the input and output images, but the specific transformation that produced the output image from the input is generally unknown. The problem is to estimate the transformation function and then use it to register the two images.

- it may be of interest to align (register) two or more images taken at approximately the same time but using different imaging systems (an MRI scanner and a PET scanner, for example);

- or to align images of a given location taken by the same instrument at different times (satellite images).

The problem can be solved using tie points (also called control points), which are corresponding points whose locations are known precisely in the input and reference images.


How to select tie points:

- select them interactively;

- use algorithms that try to detect these points automatically;

- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors; these objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points.

The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points both on the input image and the reference image. A simple model based on a bilinear approximation is given by

$$
\begin{aligned}
x &= c_1 v + c_2 w + c_3 v w + c_4 \\
y &= c_5 v + c_6 w + c_7 v w + c_8
\end{aligned}
$$

where (v, w) and (x, y) are the coordinates of corresponding tie points (the 4 pairs give an 8×8 linear system for the coefficients cᵢ).
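A minimal sketch of fitting this bilinear model from 4 tie-point pairs; the two 4×4 solves below are together equivalent to the 8×8 system for c1 … c8 (the function names are illustrative):

```python
import numpy as np

def bilinear_coeffs(vw, xy):
    """Estimate c1..c8 from 4 tie-point pairs.

    vw, xy: NumPy arrays of shape (4, 2) holding (v, w) in the input image
    and the corresponding (x, y) in the reference image."""
    v, w = vw[:, 0], vw[:, 1]
    A = np.column_stack([v, w, v * w, np.ones(4)])   # rows: [v, w, v*w, 1]
    cx = np.linalg.solve(A, xy[:, 0])                # c1..c4 (model for x)
    cy = np.linalg.solve(A, xy[:, 1])                # c5..c8 (model for y)
    return np.concatenate([cx, cy])

def bilinear_map(c, v, w):
    """Apply the fitted model: x = c1*v + c2*w + c3*v*w + c4, same for y."""
    x = c[0] * v + c[1] * w + c[2] * v * w + c[3]
    y = c[4] * v + c[5] * w + c[6] * v * w + c[7]
    return x, y
```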


When 4 tie points are insufficient to obtain a satisfactory registration, an approach used frequently is to select a larger number of tie points and, using this new set, subdivide the image into rectangular subregions marked by groups of 4 tie points; on each such subregion, the transformation model described above is applied. The number of tie points and the sophistication of the model required to solve the registration problem depend on the severity of the geometric distortion.


(a) – reference image; (b) – geometrically distorted image; (c) – registered image; (d) – difference between (a) and (c)


Probabilistic Methods

zᵢ = the values of all possible intensities in an M×N digital image, i = 0, 1, …, L−1

p(zₖ) = the probability that the intensity level zₖ occurs in the given image:

$$ p(z_k) = \frac{n_k}{MN} $$

nₖ = the number of times that intensity zₖ occurs in the image (MN is the total number of pixels in the image). The probabilities sum to one:

$$ \sum_{k=0}^{L-1} p(z_k) = 1 $$

The mean (average) intensity of an image is given by

$$ m = \sum_{k=0}^{L-1} z_k \, p(z_k) $$


The variance of the intensities is

$$ \sigma^2 = \sum_{k=0}^{L-1} (z_k - m)^2 \, p(z_k) $$

The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, for measuring image contrast, the standard deviation (σ) is used.

The n-th moment of a random variable z about the mean is defined as

$$ \mu_n(z) = \sum_{k=0}^{L-1} (z_k - m)^n \, p(z_k) $$

(so that μ₀(z) = 1, μ₁(z) = 0, and μ₂(z) = σ²).

μ₃(z) > 0: the intensities are biased to values higher than the mean;

μ₃(z) < 0: the intensities are biased to values lower than the mean;

μ₃(z) = 0: the intensities are distributed approximately equally on both sides of the mean.

Fig. 1 – (a) low contrast; (b) medium contrast; (c) high contrast

Figure 1(a) – standard deviation 14.3 (variance = 204.5)

Figure 1(b) – standard deviation 31.6 (variance = 998.6)

Figure 1(c) – standard deviation 49.2 (variance = 2420.6)
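A minimal sketch of these histogram-based statistics for an 8-bit image (the function name is illustrative; np.bincount assumes non-negative integer intensities):

```python
import numpy as np

def intensity_stats(img, L=256):
    """Mean, variance and third central moment from the normalized histogram."""
    z = np.arange(L)
    n = np.bincount(img.ravel(), minlength=L)
    p = n / img.size                  # p(z_k) = n_k / (M*N), sums to 1
    m = np.sum(z * p)                 # mean intensity
    var = np.sum((z - m) ** 2 * p)    # variance (a contrast measure)
    mu3 = np.sum((z - m) ** 3 * p)    # third moment (bias about the mean)
    return m, var, mu3
```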


Intensity Transformations and Spatial Filtering

$$ g(x, y) = T[f(x, y)] $$

f(x, y) – input image; g(x, y) – output image; T – an operator on f, defined over a neighborhood of (x, y).

- the neighborhood of the point (x, y), denoted Sₓᵧ, usually is rectangular, centered on (x, y), and much smaller in size than the image


- spatial filtering: the operator T (the neighborhood and the operation applied on it) is called a spatial filter (spatial mask, kernel, template, or window)

When Sₓᵧ has size 1×1 (a single pixel), T becomes an intensity (gray-level or mapping) transformation function

$$ s = T(r) $$

where s and r denote, respectively, the intensity of g and f at (x, y).
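Since such a T operates on single pixel values, it can be applied efficiently through a lookup table over the L gray levels; a minimal sketch (the helper name is illustrative):

```python
import numpy as np

def apply_intensity_transform(img, T, L=256):
    """Apply s = T(r) pointwise via a lookup table over the L gray levels.

    img holds integer levels in [0, L-1]; T should return values that are
    clipped into the same range."""
    lut = np.clip([T(r) for r in range(L)], 0, L - 1).astype(np.uint8)
    return lut[img]
```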

Figure 2 (left) – T produces an output image of higher contrast than the original by darkening the intensity levels below k and brightening the levels above k; this technique is called contrast stretching.

Fig. 2 – Intensity transformation functions: left – contrast stretching; right – thresholding function


Figure 2 (right) – T produces a binary output image; a mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L−1] is obtained using the function

$$ s = T(r) = L - 1 - r $$

- the equivalent of a photographic negative

- a technique suited for enhancing white or gray detail embedded in dark regions of an image


(Left) original image; (right) negative image
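A one-line sketch of the negative transformation for an 8-bit image:

```python
import numpy as np

def negative(img, L=256):
    """Image negative: s = L - 1 - r (for uint8 images, simply 255 - img)."""
    return (L - 1) - img

# Equivalently, via the lookup-table helper above:
# neg = apply_intensity_transform(img, lambda r: 255 - r)
```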


Log Transformations

$$ s = T(r) = c \log(1 + r), \qquad c = \text{constant}, \; r \ge 0 $$

Some basic intensity transformation functions


This transformation maps a narrow range of low intensity values in the input into a wider range of output values. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values; the opposite is true of the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a) – intensity values in the range 0 to 1.5 × 10⁶

Figure 4(b) – log transformation of Figure 4(a) with c = 1; range 0 to 6.2

Fig. 4 – (a) Fourier spectrum; (b) log transformation applied to (a), c = 1
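A minimal sketch of the log transformation; choosing c so that the output spans [0, L−1] is an assumption made here, in the spirit of the Fourier-spectrum example (it maps [0, max] onto the full display range):

```python
import numpy as np

def log_transform(img, L=256):
    """s = c*log(1 + r), with c scaled so the output spans [0, L-1].

    Assumes a non-negative, non-constant input array (float or int)."""
    r = img.astype(np.float64)
    c = (L - 1) / np.log1p(r.max())
    return (c * np.log1p(r)).astype(np.uint8)
```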


Power-Law (Gamma) Transformations

$$ s = T(r) = c\, r^{\gamma}, \qquad c, \gamma \text{ positive constants (sometimes written } s = c\,(r + \varepsilon)^{\gamma}\text{)} $$

Plots of the gamma transformation for different values of γ (c = 1)


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. The curves with γ > 1 have the opposite effect of those generated with γ < 1. For c = γ = 1, the transformation reduces to the identity.

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
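A minimal sketch of the power-law transformation on intensities normalized to [0, 1] (normalizing first is an assumption made here; with c = 1 it matches the plots above):

```python
import numpy as np

def gamma_transform(img, gamma, c=1.0, L=256):
    """Power-law transformation s = c * r**gamma on levels scaled to [0, 1]."""
    r = img.astype(np.float64) / (L - 1)          # normalize to [0, 1]
    s = c * np.power(r, gamma)
    return np.clip(s * (L - 1), 0, L - 1).astype(np.uint8)

# Gamma correction of a device whose response is s = r**2.5:
# corrected = gamma_transform(img, 1 / 2.5)
```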

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 110: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image Processing

Week 1

inverse mapping scans the output pixel locations and at each location (x y)

computes the corresponding location in the input image (v w) 1( ) ( )v w T x y

It then interpolates among the nearest input pixels to determine the intensity of the output

pixel value

Inverse mappings are more efficient to implement than forward mappings and are used in

numerous commercial implementations of spatial transformations (MATLAB for ex)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 111: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image Processing

Week 1

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 112: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image Processing

Week 1

Image registration ndash align two or more images of the same scene

In image registration we have available the input and output images but the specific

transformation that produced the output image from the input is generally unknown

The problem is to estimate the transformation function and then use it to register the two

images

- it may be of interest to align (register) two or more image taken at approximately the

same time but using different imaging systems (MRI scanner and a PET scanner)

- align images of a given location taken by the same instrument at different moments

of time (satellite images)

Solving the problem using tie points (also called control points) which are

corresponding points whose locations are known precisely in the input and reference

image

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 113: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image Processing

Week 1

How to select tie points

- interactively selecting them

- use of algorithms that try to detect these points

- some imaging systems have physical artifacts (small metallic objects) embedded in

the imaging sensors These objects produce a set of known points (called reseau

marks) directly on all images captured by the system which can be used as guides

for establishing tie points

The problem of estimating the transformation is one of modeling Suppose we have a set

of 4 tie points both on the input image and the reference image A simple model based on

a bilinear approximation is given by

1 2 3 4

5 6 7 8

x c v c w c v w cy c v c w c v w c

(v w) and (x y) are the coordinates of the tie points (we get a 8x8 linear system for ci )

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 114: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image Processing

Week 1

When 4 tie points are insufficient to obtain satisfactory registration an approach used

frequently is to select a larger number of tie points and using this new set of tie points

subdivide the image in rectangular regions marked by groups of 4 tie points On the

subregions marked by 4 tie points we applied the transformation model described above

The number of tie points and the sophistication of the model required to solve the register

problem depend on the severity of the geometrical distortion

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

Spatial filtering: the operator T (the neighborhood and the operation applied on it) is called a spatial filter (also: spatial mask, kernel, template, or window).

When S_xy has size 1×1 (a single pixel), T becomes an intensity (gray-level, or mapping) transformation function

s = T(r)

where s and r denote, respectively, the intensity of g and f at (x, y).

Figure 2, left – T produces an output image of higher contrast than the original by darkening the intensity levels below k and brightening the levels above k; this technique is called contrast stretching.

Fig. 2 – Intensity transformation functions: left – contrast stretching; right – thresholding function


Figure 2, right – T produces a binary output image. A mapping of this form is called a thresholding function.

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0, L−1] is obtained using the function

s = T(r) = (L − 1) − r

- equivalent of a photographic negative
- technique suited for enhancing white or gray detail embedded in dark regions of an image


Original image (left); negative image (right)
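A one-line Python/NumPy sketch of the negative transformation (illustrative):

```python
import numpy as np

def negative(img: np.ndarray, L: int = 256) -> np.ndarray:
    """s = T(r) = (L - 1) - r for an image with levels in [0, L-1]."""
    return (L - 1) - img

img = np.array([[0, 64], [128, 255]], dtype=np.uint8)
print(negative(img))   # [[255 191] [127   0]]
```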


Log Transformations

s = T(r) = c · log(1 + r), where c is a positive constant and r ≥ 0

Some basic intensity transformation functions


This transformation maps a narrow range of low intensity values in the input into a wider range of output values. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values. The opposite is true for the inverse log transformation. The log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a) – intensity values in the range 0 to 1.5 × 10^6
Figure 4(b) – log transformation of Figure 4(a) with c = 1; range 0 to 6.2

Fig. 4 – (a) Fourier spectrum; (b) log transformation applied to (a), c = 1
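A Python/NumPy sketch of the log transformation, together with the linear rescaling to [0, 255] that is usually needed before display (illustrative, not part of the notes):

```python
import numpy as np

def log_transform(img: np.ndarray, c: float = 1.0) -> np.ndarray:
    """s = c * log(1 + r); compresses large dynamic ranges such as Fourier spectra."""
    return c * np.log1p(img.astype(np.float64))

def rescale_to_8bit(img: np.ndarray) -> np.ndarray:
    """Map an arbitrary-range result linearly onto [0, 255] for display."""
    img = img - img.min()
    return (255.0 * img / img.max()).astype(np.uint8)

spectrum = np.array([0.0, 10.0, 1.5e6])   # toy "spectrum" values
print(log_transform(spectrum))            # approx. [0.0, 2.4, 14.2]
```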


Power-Law (Gamma) Transformations

s = T(r) = c · r^γ, where c and γ are positive constants (sometimes written as s = c · (r + ε)^γ to account for an offset in the input)

Plots of gamma transformation for different values of γ (c=1)


Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. The curves with γ > 1 have the opposite effect of those generated with γ < 1.

c = γ = 1 – identity transformation

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.


(a) – aerial image; (b)–(d) – results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively
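A Python/NumPy sketch (illustrative; intensities are normalized to [0, 1] before applying the power law, a common convention not stated in the notes):

```python
import numpy as np

def gamma_transform(img: np.ndarray, c: float = 1.0, gamma: float = 1.0) -> np.ndarray:
    """s = c * r**gamma, computed on intensities normalized to [0, 1]."""
    r = img.astype(np.float64) / 255.0
    s = c * np.power(r, gamma)
    return np.clip(255.0 * s, 0, 255).astype(np.uint8)

ramp = np.arange(256, dtype=np.uint8).reshape(16, 16)
darker = gamma_transform(ramp, gamma=3.0)     # gamma > 1: darkens, as in (b)-(d)
brighter = gamma_transform(ramp, gamma=0.4)   # gamma < 1: brightens dark values
```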


Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so that it spans the full intensity range of the recording or display device

Fig. 5 – (a) piecewise-linear transformation function; (b) low-contrast 8-bit image; (c) result of contrast stretching; (d) result of thresholding


The transformation function is

T(r) = (s1 / r1) · r,                                        r ∈ [0, r1]
T(r) = s1 + ((s2 − s1) / (r2 − r1)) · (r − r1),              r ∈ (r1, r2]
T(r) = s2 + ((L − 1 − s2) / (L − 1 − r2)) · (r − r2),        r ∈ (r2, L − 1]


r1 = s1 and r2 = s2 – identity transformation (no change)
r1 = r2, s1 = 0, s2 = L − 1 – thresholding function

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c) – contrast stretching, obtained by setting the parameters r1 = r_min, s1 = 0, r2 = r_max, s2 = L − 1, where r_min and r_max denote the minimum and maximum gray levels in the image, respectively. Thus, the transformation function stretched the levels linearly from their original range to the full range [0, L − 1].

Figure 5(d) – the thresholding function was used, with r1 = r2 = m, s1 = 0, s2 = L − 1, where m is the mean gray level in the image.

The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.
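A Python/NumPy sketch of the piecewise-linear stretch (illustrative; it assumes 0 < r1 < r2 < L − 1, as in the Figure 5(c) setting):

```python
import numpy as np

def contrast_stretch(img, r1, s1, r2, s2, L=256):
    """Piecewise-linear transformation through (r1, s1) and (r2, s2);
    assumes 0 < r1 < r2 < L - 1."""
    r = img.astype(np.float64)
    out = np.empty_like(r)
    low, mid, high = r <= r1, (r > r1) & (r <= r2), r > r2
    out[low] = (s1 / r1) * r[low]
    out[mid] = s1 + (s2 - s1) / (r2 - r1) * (r[mid] - r1)
    out[high] = s2 + (L - 1 - s2) / (L - 1 - r2) * (r[high] - r2)
    return out.astype(np.uint8)

img = np.random.randint(90, 160, (32, 32), dtype=np.uint8)   # low-contrast toy image
stretched = contrast_stretch(img, img.min(), 0, img.max(), 255)
```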


Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing:

1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities – a transformation that highlights intensity range [A, B] and reduces all other intensities to a lower level;
2. brighten (or darken) the desired range of intensities but leave unchanged all other intensities in the image – a transformation that highlights range [A, B] and preserves all other intensities.


Figure 6 (left) – aortic angiogram near the kidney. The purpose of intensity slicing here is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the scale of intensities. This type of enhancement produces a binary image, which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example).

In Figure 6 (right), the second technique was used: a band of intensities in the mid-gray range around the mean intensity was set to black, while the other intensities remain unchanged.

Fig. 6 – Aortic angiogram and intensity-sliced versions
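Both slicing approaches in a short Python/NumPy sketch (illustrative; band limits and values are toy choices):

```python
import numpy as np

def slice_binary(img, A, B, hi=255, lo=0):
    """Approach 1: one value inside [A, B], another everywhere else (binary output)."""
    return np.where((img >= A) & (img <= B), hi, lo).astype(np.uint8)

def slice_preserve(img, A, B, value=0):
    """Approach 2: set the band [A, B] to `value`, leave other intensities unchanged."""
    out = img.copy()
    out[(img >= A) & (img <= B)] = value
    return out

img = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
binary = slice_binary(img, 150, 255)      # band near the top of the scale, as in Fig. 6 (middle)
darkened = slice_preserve(img, 100, 140)  # mid-gray band set to black, as in Fig. 6 (right)
```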

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 115: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image Processing

Week 1

a b c d (a) ndash reference image (b) ndash geometrically distorted image (c) - registered image (d) ndash difference between (a) and (c)

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 116: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image Processing

Week 1

Probabilistic Methods

zi = the values of all possible intensities in an MtimesN digital image i=01hellipL-1

p(zk) = the probability that the intensity level zk occurs in the given image

( ) kk

np zM N

nk = the number of times that intensity zk occurs in the image (MN is the total number of

pixels in the image) 1

0( ) 1

L

kk

p z

The mean (average) intensity of an image is given by 1

0( )

L

k kk

m z p z

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 117: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image Processing

Week 1

The variance of the intensities is 1

2 2

0( ) ( )

L

k kk

z m p z

The variance is a measure of the spread of the values of z about the mean so it is a

measure of image contrast Usually for measuring image contrast the standard deviation

( ) is used

The n-th moment of a random variable z about the mean is defined as 1

0( ) ( ) ( )

Ln

n k kk

z z m p z

( 20 1 2( ) 1 ( ) 0 ( )z z z )

3( ) 0z the intensities are biased to values higher than the mean

( 3( ) 0z the intensities are biased to values lower than the mean

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 118: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image Processing

Week 1

3( ) 0z the intensities are distributed approximately equally on both side of the

mean

Fig1 (a) Low contrast (b) medium contrast (c) high contrast

Figure 1(a) ndash standard deviation 143 (variance = 2045)

Figure 1(b) ndash standard deviation 316 (variance = 9986)

Figure 1(c) ndash standard deviation 492 (variance = 24206)

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 119: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image Processing

Week 1

Intensity Transformations and Spatial Filtering

( ) ( )g x y T f x y

f(x y) ndash input image g(x y) ndash output image T ndash an operator on f defined over a

neighborhood of (x y)

- the neighborhood of the point (x y) Sxy usually is rectangular centered on (x y)

and much smaller in size than the image

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 120: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image Processing

Week 1

- spatial filtering the operator T (the neighborhood and the operation applied on it) is

called spatial filter (spatial mask kernel template or window)

( )xyS x y T becomes an intensity (gray-level or mapping) transformation function

( )s T r

s and r are denoting respectively the intensity of g and f at (x y)

Figure 2 left - T produces an output image of higher contrast than the original by

darkening the intensity levels below k and brightening the levels above k ndash this technique

is called contrast stretching

Fig 2 Intensity transformation functions left - contrast stretching right - thresholding function

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 121: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image Processing

Week 1

Figure 2 right - T produces a binary output image A mapping of this form is called

thresholding function

Some Basic Intensity Transformation Functions

Image Negatives

The negative of an image with intensity levels in [0 L-1] is obtain using the function ( ) 1s T r L r

- equivalent of a photographic negative

- technique suited for enhancing white or gray detail embedded in dark regions of an

image

Digital Image Processing

Week 1

Original Negative image

Digital Image Processing

Week 1

Log Transformations - constant ( ) log(1 ) 0s T r c r c r

Some basic intensity transformation functions

Digital Image Processing

Week 1

This transformation maps a narrow range of low intensity values in the input into a wider

range An operator of this type is used to expand the values of dark pixels in an image

while compressing the higher-level values The opposite is true for the inverse log

transformation The log functions compress the dynamic range of images with large

variations in pixel values

Figure 4(a) ndash intensity values in the range 0 to 15 x 106

Figure 4(b) = log transformation of Figure 4(a) with c=1 ndash range 0 to 62

a b (a) ndash Fourier spectrum (b) ndash log transformation applied to (a) c=1 Fig 4

Digital Image Processing

Week 1

Power-Law (Gamma) Transformations

- positive constants( ) ( ( ) )s T r c r c s c r

Plots of gamma transformation for different values of γ (c=1)

Digital Image Processing

Week 1

Power-law curves with 1 map a narrow range of dark input values into a wider range

of output values with the opposite being true for higher values of input values The

curves with 1 have the opposite effect of those generated with values of 1

1c - identity transformation

A variety of devices used for image capture printing and display respond according to a

power law The process used to correct these power-law response phenomena is called

gamma correction

Digital Image Processing

Week 1

a b c d (a) ndash aerial image (b) ndash (d) ndash results of applying gamma transformation with c=1 and γ=30 40 and 50 respectively

Digital Image Processing

Week 1

Piecewise-Linear Transformations Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so it spans the full

intensity range of the recording tool or display device

a b c d Fig5

Digital Image Processing

Week 1

11

1

2 1 1 21 2

2 1 2 1

22

2

[0 ]

( ) ( )( ) [ ]( ) ( )

( 1 ) [ 1]( 1 )

s r r rrs r r s r rT r r r r

r r r rs L r r r L

L r

Digital Image Processing

Week 1

1 1 2 2r s r s identity transformation (no change)

1 2 1 2 0 1r r s s L thresholding function

Figure 5(b) shows an 8-bit image with low contrast

Figure 5(c) - contrast stretching obtained by setting the parameters 1 1 min 0r s r

2 2 max 1r s r L where rmin and rmax denote the minimum and maximum gray levels

in the image respectively Thus the transformation function stretched the levels linearly

from their original range to the full range [0 L-1]

Figure 5(d) - the thresholding function was used with 1 1 0r s m

2 2 1r s m L where m is the mean gray level in the image

The original image on which these results are based is a scanning electron microscope

image of pollen magnified approximately 700 times

Digital Image Processing

Week 1

Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches for intensity-level slicing

1 display in one value (white for example) all the values in the range of interest and in

another (say black) all other intensities (Figure 311 (a))

2 brighten (or darken) the desired range of intensities but leaves unchanged all other

intensities in the image (Figure 311 (b))

Digital Image Processing

Week 1

Figure 6 (left) ndash aortic angiogram near the kidney The purpose of intensity slicing is to

highlight the major blood vessels that appear brighter as a result of injecting a contrast

medium Figure 6(middle) shows the result of applying the first technique for a band near

the top of the scale of intensities This type of enhancement produces a binary image

Highlights intensity range [A B] and reduces all other intensities to a lower level

Highlights range [A B] and preserves all other intensities

Digital Image Processing

Week 1

which is useful for studying the shape of the flow of the contrast substance (to detect

blockageshellip)

In Figure 312(right) the second technique was used a band of intensities in the mid-gray

image around the mean intensity was set to black the other intensities remain unchanged

Fig 6 - Aortic angiogram and intensity sliced versions

Digital Image Processing

Week 1

Bit-plane slicing

For a 8-bit image f(x y) is a number in [0255] with 8-bit representation in base 2

This technique highlights the contribution made to the whole image appearances by each

of the bits An 8-bit image may be considered as being composed of eight 1-bit planes

(plane 1 ndash the lowest order bit plane 8 ndash the highest order bit)

Digital Image Processing

Week 1

Digital Image Processing

Week 1

The binary image for the 8-th bit plane of an 8-bit image can be obtained by processing the input image with a threshold intensity transformation function that maps all the intensities between 0 and 127 to 0 and maps all levels between 128 and 255 to 1 The bit-slicing technique is useful for analyzing the relative importance of each bit in the image ndash helps in determining the proper number of bits to use when quantizing the image The technique is also useful for image compression

  • DIP 1 2017
  • DIP 02 (2017)
Page 122: Digital Image Processing - profs.info.uaic.roancai/DIP/curs/DIP w1 2017.pdf · •R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, ... DIP –used in

Digital Image Processing

Week 1

Some basic intensity transformation functions

[Figure: original image (left) and its negative image (right).]

Log Transformations

s = T(r) = c log(1 + r), where c is a constant and r ≥ 0

This transformation maps a narrow range of low intensity values in the input into a wider range of output values; an operator of this type is used to expand the values of dark pixels in an image while compressing the higher intensity values. The opposite is true of the inverse log transformation. Log functions compress the dynamic range of images with large variations in pixel values.

Figure 4(a) – Fourier spectrum with intensity values in the range 0 to 1.5 × 10^6.
Figure 4(b) – log transformation of Figure 4(a) with c = 1; the transformed values lie in the range 0 to 6.2.

Fig. 4 – (a) Fourier spectrum; (b) log transformation applied to (a), c = 1.
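As an illustration (not from the course materials), a minimal NumPy sketch of the log transformation, with c chosen so the output spans the full 8-bit display range:

```python
import numpy as np

def log_transform(image: np.ndarray) -> np.ndarray:
    """s = c * log(1 + r), with c chosen so the output spans [0, 255]."""
    r = image.astype(np.float64)
    c = 255.0 / np.log(1.0 + r.max())  # scale factor for an 8-bit display
    s = c * np.log(1.0 + r)
    return s.astype(np.uint8)

# A synthetic stand-in for a Fourier spectrum with a huge dynamic range:
spectrum = np.abs(np.fft.fft2(np.random.rand(256, 256)))
display = log_transform(spectrum)  # dark detail becomes visible
```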


Power-Law (Gamma) Transformations

s = T(r) = c r^γ, where c and γ are positive constants (sometimes written s = c (r + ε)^γ to provide a measurable output when the input is zero)

[Figure: plots of the gamma transformation s = c r^γ for different values of γ (c = 1).]

Power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. Curves with γ > 1 have the opposite effect of those generated with γ < 1.

c = γ = 1 – identity transformation

A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction; for example, CRT displays have an intensity-to-voltage response that is approximately a power function with γ between about 1.8 and 2.5, so images are pre-corrected with the inverse transformation s = r^(1/γ) before display.

a b c d – (a) aerial image; (b)–(d) results of applying the gamma transformation with c = 1 and γ = 3.0, 4.0, and 5.0, respectively.
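A small illustrative sketch of the power-law transformation (an assumption-laden example, not course code); intensities are normalized to [0, 1] so that c = 1 keeps the output in range:

```python
import numpy as np

def gamma_transform(image: np.ndarray, gamma: float, c: float = 1.0) -> np.ndarray:
    """s = c * r**gamma on intensities normalized to [0, 1], rescaled to 8 bits."""
    r = image.astype(np.float64) / 255.0
    s = c * np.power(r, gamma)
    return np.clip(s * 255.0, 0, 255).astype(np.uint8)

img = (np.random.rand(128, 128) * 255).astype(np.uint8)  # stand-in for a real image
brighter = gamma_transform(img, 0.4)  # gamma < 1 expands dark values
darker = gamma_transform(img, 3.0)    # gamma > 1 compresses dark values
```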


Piecewise-Linear Transformation Functions

Contrast stretching

- a process that expands the range of intensity levels in an image so that it spans the full intensity range of the recording medium or display device.

[Fig. 5 – panels (a)–(d); (b)–(d) are discussed below.]

The transformation used for contrast stretching is the piecewise-linear function

$$
s = T(r) =
\begin{cases}
\dfrac{s_1}{r_1}\, r, & 0 \le r < r_1 \\[6pt]
\dfrac{s_2 - s_1}{r_2 - r_1}\,(r - r_1) + s_1, & r_1 \le r < r_2 \\[6pt]
\dfrac{(L-1) - s_2}{(L-1) - r_2}\,(r - r_2) + s_2, & r_2 \le r \le L-1
\end{cases}
$$

for r in [0, L−1], where the locations of the points (r1, s1) and (r2, s2) control the shape of the transformation function.

r1 = s1 and r2 = s2 – identity transformation (no change)

r1 = r2, s1 = 0 and s2 = L − 1 – thresholding function

Figure 5(b) shows an 8-bit image with low contrast.

Figure 5(c) – contrast stretching, obtained by setting the parameters (r1, s1) = (rmin, 0) and (r2, s2) = (rmax, L − 1), where rmin and rmax denote the minimum and maximum gray levels in the image, respectively. Thus the transformation function stretched the levels linearly from their original range to the full range [0, L − 1].

Figure 5(d) – the thresholding function, used with (r1, s1) = (m, 0) and (r2, s2) = (m, L − 1), where m is the mean gray level of the image.

The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.
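A minimal NumPy sketch of this piecewise-linear stretch, assuming an 8-bit image (L = 256); np.interp realizes the three linear segments through (0, 0), (r1, s1), (r2, s2), (L−1, L−1):

```python
import numpy as np

def contrast_stretch(image: np.ndarray, r1, s1, r2, s2, L=256) -> np.ndarray:
    """Piecewise-linear map through (0, 0), (r1, s1), (r2, s2), (L-1, L-1).
    Assumes 0 < r1 < r2 < L-1 so the breakpoints are strictly increasing."""
    s = np.interp(image.astype(np.float64), [0, r1, r2, L - 1], [0, s1, s2, L - 1])
    return s.astype(np.uint8)

img = (np.random.rand(128, 128) * 100 + 80).astype(np.uint8)  # low-contrast stand-in
stretched = contrast_stretch(img, img.min(), 0, img.max(), 255)  # Fig. 5(c) settings

# The limiting case r1 = r2 = m (Fig. 5(d)) degenerates to a hard threshold:
thresholded = np.where(img > img.mean(), 255, 0).astype(np.uint8)
```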


Intensity-level slicing

- highlighting a specific range of intensities in an image

There are two approaches to intensity-level slicing (see the sketch after this list):

1. display in one value (white, for example) all the intensities in the range of interest and in another (say, black) all other intensities (Figure 3.11(a));

2. brighten (or darken) the desired range of intensities but leave all other intensities in the image unchanged (Figure 3.11(b)).
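A small NumPy sketch of both approaches (illustrative; the band [A, B] and the output levels are arbitrary choices):

```python
import numpy as np

def slice_binary(image: np.ndarray, A: int, B: int) -> np.ndarray:
    """Approach 1: white inside [A, B], black elsewhere (binary output)."""
    return np.where((image >= A) & (image <= B), 255, 0).astype(np.uint8)

def slice_preserve(image: np.ndarray, A: int, B: int, value=255) -> np.ndarray:
    """Approach 2: set [A, B] to a chosen value, leave the rest unchanged."""
    out = image.copy()
    out[(image >= A) & (image <= B)] = value
    return out

img = (np.random.rand(128, 128) * 255).astype(np.uint8)  # stand-in image
binary = slice_binary(img, 150, 255)        # highlight a bright band
brightened = slice_preserve(img, 150, 255)  # same band, background preserved
```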


Figure 6 (left) – aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique for a band near the top of the scale of intensities; this type of enhancement produces a binary image, which is useful for studying the shape of the flow of the contrast substance (to detect blockages, for example). In Figure 6 (right) the second technique was used: a band of intensities in the mid-gray region around the mean intensity was set to black, while the other intensities remain unchanged.

[Transformation functions for the two approaches: one highlights the intensity range [A, B] and reduces all other intensities to a lower level; the other highlights the range [A, B] and preserves all other intensities.]

Fig. 6 – Aortic angiogram and intensity-sliced versions.

Bit-plane slicing

For an 8-bit image, f(x, y) is a number in [0, 255] with an 8-bit representation in base 2. This technique highlights the contribution made to the total image appearance by each of these bits: an 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 contains the lowest-order bit, plane 8 the highest-order bit).


The binary image for the 8th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation that maps all intensities between 0 and 127 to 0 and all intensities between 128 and 255 to 1. Bit-plane slicing is useful for analyzing the relative importance of each bit in the image, which helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
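A minimal NumPy sketch of bit-plane extraction (illustrative): plane k (k = 1..8) is obtained by shifting and masking, and the 8th plane reproduces exactly the 0–127 / 128–255 threshold described above.

```python
import numpy as np

def bit_plane(image: np.ndarray, k: int) -> np.ndarray:
    """Extract bit plane k (1 = lowest-order bit, 8 = highest) as a 0/1 image."""
    return (image >> (k - 1)) & 1

img = (np.random.rand(128, 128) * 255).astype(np.uint8)  # stand-in image
plane8 = bit_plane(img, 8)  # same as thresholding at 128
assert np.array_equal(plane8, (img >= 128).astype(np.uint8))
```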
