Edge detection

Edges are places in the image with strong intensity contrast. Since edges often occur at image locations representing object boundaries, edge detection is extensively used in image segmentation when we want to divide the image into areas corresponding to different objects. Representing an image by its edges has the further advantage that the amount of data is reduced significantly while retaining most of the image information.

Since edges consist mainly of high frequencies, we can, in theory, detect edges by applying a highpass frequency filter in the Fourier domain or by convolving the image with an appropriate kernel in the spatial domain. In practice, edge detection is performed in the spatial domain, because it is computationally less expensive and often yields better results.

Typically, there are three steps to perform edge detection:

1.      Noise reduction, where we try to suppress as much noise as possible, without smoothing away the meaningful edges.

2.      Edge enhancement, where we apply some kind of filter that responds strongly at edges and weakly elsewhere, so that the edges may be identified as local maxima in the filter's output. One suggestion is to use some kind of highpass filter.

3.      Edge localization, where we decide which of the local maxima output by the filter are meaningful edges and which are caused by noise.

Derivative filters

An image can be represented as a surface, with height corresponding to the gray level value. The brightness function depends on two variables, the co-ordinates in the image plane: z = f(x, y).

As averaging of the pixels over a region is analogous to integration, differentiation can be expected to have the opposite effect and thus sharpen an image.

Edges are pixels where the brightness function changes abruptly. We can describe changes of continuous functions using derivatives. Since the brightness function depends on two variables, the co-ordinates in the image plane, operators describing edges are expressed using partial derivatives.

Derivative filters – What is the smallest possible window we can choose?

The difference of the gray values of two adjacent pixels is an estimate of the first derivative of the intensity function (image brightness function) with respect to the spatial variable along the direction in which we take the difference. This is because first derivatives are approximated by first differences in the discrete case.

$g_x = f(x+1, y) - f(x, y), \qquad g_y = f(x, y+1) - f(x, y)$  (1)

Calculating g_x at each pixel position is equivalent to convolving the image with a mask (filter) of the form [-1 +1] in the x direction, and calculating g_y is equivalent to convolving the image with the mask [-1 +1]^T in the y direction.
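A minimal NumPy sketch of Eq. (1), assuming NumPy is available (the function name is illustrative, not from the original slides):

```python
import numpy as np

def first_differences(f):
    """Eq. (1): g_x = f(x+1, y) - f(x, y), g_y = f(x, y+1) - f(x, y).
    Rows index y and columns index x; border pixels are left at zero."""
    f = f.astype(float)
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    gx[:, :-1] = f[:, 1:] - f[:, :-1]  # difference along x (columns)
    gy[:-1, :] = f[1:, :] - f[:-1, :]  # difference along y (rows)
    return gx, gy
```

This computes exactly what convolving with the [-1 +1] masks computes, written with array slicing instead of an explicit convolution.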

Identifying the local maxima as candidate edge pixels

We can also express the gradient calculation as a pair of convolutions:

$g_x(x, y) = h_x * f(x, y), \qquad g_y(x, y) = h_y * f(x, y)$  (2)

In the first output, produced by convolution with the mask h_x, any pixel that has an absolute value larger than the values of its left and right neighbors is a candidate to be a vertical edge pixel. In the second output, produced by convolution with the mask h_y, any pixel that has an absolute value larger than the values of its top and bottom neighbors is a candidate to be a horizontal edge pixel.

The process of identifying the local maxima as candidate edge pixels is called non-maxima suppression.
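This non-maxima suppression can be sketched as follows (a hedged illustration assuming NumPy; `candidate_edge_pixels` is a hypothetical helper, not from the slides):

```python
import numpy as np

def candidate_edge_pixels(gx, gy):
    """Keep pixels whose absolute response exceeds both neighbours
    along the differencing direction, as described above."""
    ax, ay = np.abs(gx), np.abs(gy)
    vertical = np.zeros(ax.shape, dtype=bool)
    horizontal = np.zeros(ay.shape, dtype=bool)
    # |gx| larger than left and right neighbours -> vertical edge candidate
    vertical[:, 1:-1] = (ax[:, 1:-1] > ax[:, :-2]) & (ax[:, 1:-1] > ax[:, 2:])
    # |gy| larger than top and bottom neighbours -> horizontal edge candidate
    horizontal[1:-1, :] = (ay[1:-1, :] > ay[:-2, :]) & (ay[1:-1, :] > ay[2:, :])
    return vertical, horizontal
```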

The Prewitt operator

Consider the 3 × 3 arrangement of pixels about the pixel (x, y):

$g_x(x, y) = f(x+1, y) - f(x-1, y), \qquad g_y(x, y) = f(x, y+1) - f(x, y-1)$  (3)

where the kernels are

h_x =
[ -1  0  +1 ]
[ -1  0  +1 ]
[ -1  0  +1 ]

h_y =
[ -1  -1  -1 ]
[  0   0   0 ]
[ +1  +1  +1 ]

Clearly, the kernel h_x is sensitive to changes in the x direction, i.e. edges that run vertically or have a vertical component. Similarly, the kernel h_y is sensitive to changes in the y direction, i.e. edges that run horizontally or have a horizontal component.
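A short sketch of applying the Prewitt masks of Eq. (3), assuming SciPy is available (`correlate` is used rather than `convolve` so the kernels are not mirrored):

```python
import numpy as np
from scipy.ndimage import correlate

h_x = np.array([[-1, 0, 1],
                [-1, 0, 1],
                [-1, 0, 1]], dtype=float)
h_y = h_x.T  # the same kernel rotated by 90 degrees

def prewitt_gradients(f):
    """Estimate g_x and g_y with the Prewitt kernels."""
    f = f.astype(float)
    return correlate(f, h_x), correlate(f, h_y)
```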

The derivatives of the image

Figure 1: 1st and 2nd derivative of an edge illustrated in one dimension.

Calculating the derivatives of the image

We can see that the position of the edge can be estimated with the maximum of the 1st derivative or with the zero-crossing of the 2nd derivative. Therefore we want to find a technique to calculate the derivative of a two-dimensional image. For a discrete one-dimensional function f(i), the first derivative can be approximated by

$f'(i) \approx f(i+1) - f(i)$

Calculating this formula is equivalent to convolving the function with [-1 1]. Similarly, the 2nd derivative can be estimated by convolving f(i) with [1 -2 1].
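A one-dimensional illustration of both approximations (the ramp signal is made up for the example):

```python
import numpy as np

f = np.array([0., 0., 0., 1., 2., 3., 3., 3.])    # a 1-D ramp edge

d1 = np.convolve(f, [1., -1.], mode='same')       # first difference, f(i) - f(i-1)
d2 = np.convolve(f, [1., -2., 1.], mode='same')   # second difference, f(i+1) - 2f(i) + f(i-1)
# d1 peaks on the ramp; d2 changes sign across it, so the zero-crossing
# of the second derivative marks the edge position.
```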

There are two common approaches to estimate the 1st derivative in a two-dimensional image: Prewitt compass edge detection and gradient edge detection.

Gradient edge detection

Gradient edge detection is the second and more widely used technique. Here, the image is convolved with only two kernels, one estimating the gradient in the x-direction, g_x, the other the gradient in the y-direction, g_y.

Changes of the image function are estimated from the gray level gradient at a pixel.

The gradient is the two-dimensional equivalent of the first derivative and is defined as the gradient vector:

$\nabla f(x, y) = \begin{pmatrix} g_x \\ g_y \end{pmatrix}$  (4)

The two gradients (in the x and y directions) computed at each pixel by Equation (1) or Equation (3) can be regarded as the x and y components of the gradient vector (4).

The magnitude of the gray level gradient

There are two important properties associated with the gradient: the gradient magnitude and the direction of the gradient.

The outputs of the two convolutions are squared, added and square-rooted to produce the gradient magnitude:

$|g| = \sqrt{g_x^2 + g_y^2}$  (5)

The magnitude of the gradient vector is the length of the hypotenuse of the right triangle having sides g_x and g_y, and this reflects the strength of the edge, or edge response, at any given pixel. It is common practice to approximate the gradient magnitude by absolute values:

$|g| \approx |g_x| + |g_y| \qquad \text{or} \qquad |g| \approx \max(|g_x|, |g_y|)$  (6)
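In code, Eq. (5) and the approximations of Eq. (6) are one line each (a sketch assuming NumPy):

```python
import numpy as np

def gradient_magnitude(gx, gy):
    exact = np.hypot(gx, gy)                         # Eq. (5): sqrt(gx^2 + gy^2)
    approx_sum = np.abs(gx) + np.abs(gy)             # Eq. (6): |gx| + |gy|
    approx_max = np.maximum(np.abs(gx), np.abs(gy))  # Eq. (6): max(|gx|, |gy|)
    return exact, approx_sum, approx_max
```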

The direction of the gray level gradient

From vector analysis, the direction of the gradient is

$\theta = \tan^{-1}\!\left(\frac{g_y}{g_x}\right)$  (7)

This vector is oriented along the direction of the maximum rate of increase of the gray level function f(x, y).
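In practice, Eq. (7) is usually evaluated with the two-argument arctangent, which handles g_x = 0 and resolves the quadrant (a sketch assuming NumPy):

```python
import numpy as np

def gradient_direction(gx, gy):
    """Eq. (7): direction of the maximum rate of increase of f(x, y),
    in radians, measured from the x axis."""
    return np.arctan2(gy, gx)
```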

In the localization step, we must identify the meaningful edges.

Whichever operator is used to compute the gradient, the resulting vector contains information about how strong the edge is at that pixel and what its direction is. The edge magnitude is a real number. Any pixel having a gradient magnitude that exceeds a specified threshold value is said to be an edge pixel, and the others are not:

$\text{edge point}(x, y) = \begin{cases} 1 & \text{if } |\nabla f(x, y)| \geq T \\ 0 & \text{if } |\nabla f(x, y)| < T \end{cases}$  (8)

An alternative technique is to look for local maxima in the gradient image, thus producing one-pixel-wide edges. A more sophisticated technique is used by the Canny edge detector. It first applies a gradient edge detector to the image and then finds the edge pixels using non-maximal suppression and hysteresis tracking.
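A minimal sketch of the thresholding rule of Eq. (8), assuming NumPy (T is a user-chosen threshold):

```python
import numpy as np

def threshold_edges(grad_mag, T):
    """Eq. (8): a pixel is an edge pixel when its gradient magnitude
    reaches the threshold T."""
    return (grad_mag >= T).astype(np.uint8)
```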

The most common kernels used for the gradient edge detector are the Sobel, Roberts Cross and Prewitt operators.

Sobel Edge Detector

The Sobel operator performs a 2-D spatial gradient measurement on an image and so emphasizes regions of high spatial frequency that correspond to edges. Typically it is used to find the approximate absolute gradient magnitude at each point in an input grayscale image.

In theory at least, the operator consists of a pair of 3×3 convolution kernels as shown below. One kernel is simply the other rotated by 90°.

h_x =
[ -1  0  +1 ]
[ -2  0  +2 ]
[ -1  0  +1 ]

h_y =
[ -1  -2  -1 ]
[  0   0   0 ]
[ +1  +2  +1 ]

This operator places an emphasis on pixels that are closer to the center of the mask. The Sobel operator is one of the most commonly used edge detectors.
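A sketch of the Sobel operator built from these kernels, assuming SciPy (note that `scipy.ndimage` also ships a ready-made `sobel` function):

```python
import numpy as np
from scipy.ndimage import correlate

sobel_hx = np.array([[-1, 0, 1],
                     [-2, 0, 2],
                     [-1, 0, 1]], dtype=float)
sobel_hy = sobel_hx.T  # one kernel is the other rotated by 90 degrees

def sobel_magnitude(f):
    """Gradient magnitude of Eq. (5) using the Sobel kernels."""
    f = f.astype(float)
    gx = correlate(f, sobel_hx)
    gy = correlate(f, sobel_hy)
    return np.hypot(gx, gy)
```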

Sobel Edge Detector

Often, this absolute magnitude is the only output the user sees: the two components of the gradient are conveniently computed and added in a single pass over the input image, using a pseudo-convolution operator equivalent to

$|G| \approx |g_x| + |g_y|$  (9)

Sobel Edge Detector - Guidelines for Use

The Sobel operator is slower to compute than the Roberts Cross operator, but its larger convolution kernel smoothes the input image to a greater extent and so makes the operator less sensitive to noise. The operator also generally produces considerably higher output values for similar edges, compared with the Roberts Cross.

Compare the results of applying the Sobel operator with the equivalent Roberts Cross output.

(Figure: Sobel operator output alongside the Roberts Cross output.)

Applying the Sobel operator - Examples

All edges in the image have been detected and can be nicely separated from the background using a threshold of 150, as can be seen in the figure.

Although the Sobel operator is not as sensitive to noise as the Roberts Cross operator, it still amplifies high frequencies. The following image is the result of adding Gaussian noise with a standard deviation of 15 to the original image.

Applying the Sobel operator yields

and thresholding the result at a value of 150 produces

We can see that the noise has increased during the edge detection.

There are three problems with the thresholding-of-gradient-magnitude approach:

• First, the boundaries between scene elements are not always sharp. (See Figure 7.20, p. 167 in the textbook.)

• Second, the noise in an image can sometimes produce gradients as high as, or even higher than, those resulting from meaningful edges. (See Figure 7.21, p. 168 in the textbook.)

• Third, the local maximum in gray level gradient associated with an edge lies at the summit of a ridge. Thresholding detects a portion of this ridge, rather than the single point of maximum gradient. The ridge can be rather broad in the case of diffuse edges, resulting in a thick band of pixels in the edge map. (See Figure 7.19, p. 166 in the textbook.)

How can we choose the weights of a 3 × 3 mask for edge detection?

[ A11  A12  A13 ]
[ A21  A22  A23 ]
[ A31  A32  A33 ]

If we are going to use one such mask to calculate g_x and another to calculate g_y, such masks must obey the following conditions:

1.      The mask that calculates g_x must be produced from the mask that calculates g_y by rotation through 90°.

2.      We do not want to give any extra weight to the left or the right neighbors of the central pixel, so we must have identical weights in the left and right columns.

[ A11  A12  A11 ]
[ A21  A22  A21 ]
[ A31  A32  A31 ]

How can we choose the weights of a 3 × 3 mask for edge detection?

Let us say that we want to subtract the signal “in front” of the central pixel from the signal “behind” it, in order to find local differences, and we want these two subtracted signals to have equal weights:

[ -A11  -A12  -A11 ]
[  A21   A22   A21 ]
[  A11   A12   A11 ]

If the image is absolutely smooth, we want to have zero response. So the sum of all the weights must be zero; therefore A22 = -2A21:

[ -A11  -A12  -A11 ]
[  A21  -2A21  A21 ]
[  A11   A12   A11 ]

How can we choose the weights of a 3 × 3 mask for edge detection?

In the case of a smooth signal, as we differentiate in the direction of the columns, we expect each column to produce zero output; therefore A21 = 0:

[ -A11  -A12  -A11 ]
[   0     0     0  ]
[  A11   A12   A11 ]

We can divide these weights throughout by A11, so that finally the mask depends on only one parameter, K = A12/A11:

[ -1  -K  -1 ]
[  0   0   0 ]
[  1   K   1 ]

If we choose K = 2, we have the Sobel masks for differentiating an image along the two directions; K = 1 gives the Prewitt masks.
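The whole derivation fits in one parametrized constructor (a sketch assuming NumPy; `derivative_mask` is a hypothetical name):

```python
import numpy as np

def derivative_mask(K):
    """The one-parameter mask family derived above; K = 1 gives the
    Prewitt h_y mask and K = 2 gives the Sobel h_y mask."""
    return np.array([[-1, -K, -1],
                     [ 0,  0,  0],
                     [ 1,  K,  1]], dtype=float)

prewitt_hy = derivative_mask(1)
sobel_hy = derivative_mask(2)  # transpose either mask to obtain h_x
```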

Prewitt compass edge detection (not to be confused with the Prewitt operator for differential gradient edge detection)

Compass Edge Detection is an alternative approach to the differential gradient edge detection (see the Roberts Cross and Sobel operators). The operation usually outputs two images, one estimating the local edge gradient magnitude and one estimating the edge orientation of the input image. When using compass edge detection the image is convolved with a set of (in general 8) convolution kernels, each of which is sensitive to edges in a different orientation. For each pixel the local edge gradient magnitude is estimated with the maximum response of all 8 kernels at this pixel location:

$|G| = \max_{i=1,\dots,n} |G_i|$  (10)

where G_i is the response of kernel i at the particular pixel position and n is the number of convolution kernels. The local edge orientation is estimated with the orientation of the kernel that yields the maximum response.
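A sketch of compass edge detection, assuming SciPy. The 0° template below is one common form of the Prewitt compass kernel; since the slides' figure is not reproduced, this particular kernel is an assumption:

```python
import numpy as np
from scipy.ndimage import correlate

# Assumed 0-degree Prewitt compass template (centre weight -2).
k0 = np.array([[-1,  1,  1],
               [-1, -2,  1],
               [-1,  1,  1]], dtype=float)

def rotate_ring(k):
    """Rotate the 8 border coefficients of a 3x3 kernel circularly by
    one step (45 degrees); the centre coefficient stays fixed."""
    r = [0, 0, 0, 1, 2, 2, 2, 1]
    c = [0, 1, 2, 2, 2, 1, 0, 0]
    out = k.copy()
    out[r, c] = np.roll(k[r, c], 1)
    return out

def compass_edges(f, n=8):
    """Eq. (10): magnitude = max_i |G_i|; the arg-max kernel index
    (1..n) is the orientation label."""
    f = f.astype(float)
    k, responses = k0, []
    for _ in range(n):
        responses.append(correlate(f, k))
        k = rotate_ring(k)
    stack = np.abs(np.stack(responses))
    return stack.max(axis=0), stack.argmax(axis=0) + 1
```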

Prewitt compass edge detection

Various kernels can be used for this operation; for the following discussion we will use the Prewitt kernel. Two templates out of the set of 8 are shown in the figure:

Figure : Prewitt compass edge detecting templates sensitive to edges at 0° and 45°.

Prewitt compass edge detection

The whole set of 8 kernels is produced by taking one of the kernels and rotating its coefficients circularly. Each of the resulting kernels is sensitive to an edge orientation ranging from 0° to 315° in steps of 45°, where 0° corresponds to a vertical edge.

The maximum response |G| for each pixel is the value of the corresponding pixel in the output magnitude image. The values for the output orientation image lie between 1 and 8, depending on which of the 8 kernels produced the maximum response.

This edge detection method is also called edge template matching, because a set of edge templates is matched to the image, each representing an edge in a certain orientation. The edge magnitude and orientation of a pixel is then determined by the template that matches the local area of the pixel the best.

The compass edge detector is an appropriate way to estimate the magnitude and orientation of an edge. Although differential gradient edge detection needs a rather time-consuming calculation to estimate the orientation from the magnitudes in the x- and y-directions, the compass edge detection obtains the orientation directly from the kernel with the maximum response.

The compass operator is limited to (here) 8 possible orientations; however experience shows that most direct orientation estimates are not much more accurate.

On the other hand, the compass operator needs (here) 8 convolutions for each pixel, whereas the gradient operator needs only 2, one kernel being sensitive to edges in the vertical direction and one to the horizontal direction.


The resulting edge magnitude image is very similar for both methods of estimating the 1st derivative of a two-dimensional image. If we apply the Prewitt compass operator to the input image, we get two output images.

The first image shows the local edge magnitude for each pixel. We can't see much in this image, because the response of the Prewitt kernel is too small.

Applying histogram equalization to this image yields a result similar to the image that was processed with the Sobel differential gradient edge detector and then histogram equalized.

If we apply the Prewitt compass operator to the input image, we get two output images.

The first image is the graylevel orientation image, contrast-stretched for better display. That means the image contains 8 graylevel values between 0 and 255, each corresponding to an edge orientation.

The orientation image as a color labeled image (containing 8 colors, each corresponding to one edge orientation) is shown in the second image

Compass edge detection - Examples

Another image suitable for edge detection is

The corresponding outputs of the compass edge detector, for the magnitude and orientation respectively, are:

This image contains little noise and most of the resulting edges correspond to boundaries of objects. Again, we can see that most of the roughly vertical books were assigned the same orientation label, although the orientation varies by some amount.

The Laplacian Operator

The Laplacian is a 2-D isotropic measure of the 2nd spatial derivative of an image. The Laplacian of an image highlights regions of rapid intensity change and is therefore often used for edge detection. These second-order derivatives can be used for edge localization.

The Laplacian is often applied to an image that has first been smoothed with something approximating a Gaussian smoothing filter in order to reduce its sensitivity to noise, and hence the two variants will be described together here. The operator normally takes a single graylevel image as input and produces another graylevel image as output.

The Laplacian operator combines the second order derivatives as follows:

$\nabla^2 f(x, y) = \frac{\partial^2 f(x, y)}{\partial x^2} + \frac{\partial^2 f(x, y)}{\partial y^2}$  (11)

The Laplacian Operator

Since the input image is represented as a set of discrete pixels, we have to find a discrete convolution kernel that can approximate the second derivatives in the definition of the Laplacian. Three commonly used small kernels are shown in the figure.

Because these kernels are approximating a second derivative measurement on the image, they are very sensitive to noise. To counter this, the image is often Gaussian smoothed before applying the Laplacian filter. This pre-processing step reduces the high frequency noise components prior to the differentiation step.
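A sketch using the standard 4-neighbour Laplacian kernel (one of the common kernels such a figure would show; SciPy assumed):

```python
import numpy as np
from scipy.ndimage import correlate, gaussian_filter

laplacian_kernel = np.array([[0,  1, 0],
                             [1, -4, 1],
                             [0,  1, 0]], dtype=float)

def laplacian(f, presmooth_sigma=None):
    """Eq. (11) on a discrete image; optional Gaussian pre-smoothing
    suppresses the high-frequency noise the kernel amplifies."""
    f = f.astype(float)
    if presmooth_sigma is not None:
        f = gaussian_filter(f, presmooth_sigma)
    return correlate(f, laplacian_kernel)
```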

Laplacian of Gaussian

1.      The smoothing filter is a Gaussian.

2.      The enhancement step is the second derivative (the Laplacian in two dimensions).

3.      The detection criterion is the presence of a zero crossing in the second derivative with a corresponding large peak in the first derivative.

In this approach, the image is first convolved with a Gaussian filter:

$g(x, y) = \nabla^2 \left[ G(x, y, \sigma) * f(x, y) \right]$  (12)

The order of performing differentiation and convolution can be interchanged because of the linearity of the operators involved:

$g(x, y) = \left[ \nabla^2 G(x, y, \sigma) \right] * f(x, y)$  (13)
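Both orderings of Eqs. (12) and (13) are one-liners in SciPy (assumed available); `gaussian_laplace` applies the combined LoG kernel directly:

```python
from scipy.ndimage import gaussian_filter, laplace, gaussian_laplace

def log_smooth_then_diff(f, sigma):
    """Eq. (12): Gaussian smoothing followed by the Laplacian."""
    return laplace(gaussian_filter(f.astype(float), sigma))

def log_combined(f, sigma):
    """Eq. (13): convolution with the Laplacian of the Gaussian."""
    return gaussian_laplace(f.astype(float), sigma)
```

By linearity, the two give (up to discretization effects) the same output.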

Laplacian of Gaussian

The 2-D LoG function centered on zero and with Gaussian standard deviation σ has the form:

$\mathrm{LoG}(x, y) = -\frac{1}{\pi\sigma^4}\left[1 - \frac{x^2 + y^2}{2\sigma^2}\right] e^{-\frac{x^2 + y^2}{2\sigma^2}}$  (14)

Figure 2: The 2-D Laplacian of Gaussian (LoG) function. The x and y axes are marked in standard deviations σ.

Laplacian of Gaussian - Kernel

A discrete kernel that approximates this function (for a Gaussian σ = 1.4) is shown in Figure 3.

Note that as the Gaussian is made increasingly narrow, the LoG kernel becomes the same as the simple Laplacian kernels shown earlier. This is because smoothing with a very narrow Gaussian (σ < 0.5 pixels) on a discrete grid has no effect. Hence on a discrete grid, the simple Laplacian can be seen as a limiting case of the LoG for narrow Gaussians.

Laplacian of Gaussian - Guidelines for Use

The LoG operator calculates the second spatial derivative of an image. This means that in areas where the image has a constant intensity (i.e. where the intensity gradient is zero), the LoG response will be zero. In the vicinity of a change in intensity, however, the LoG response will be positive on the darker side, and negative on the lighter side. This means that at a reasonably sharp edge between two regions of uniform but different intensities, the LoG response will be:

• zero at a long distance from the edge,

• positive just to one side of the edge,

• negative just to the other side of the edge,

• zero at some point in between, on the edge itself.

Laplacian of Gaussian - Guidelines for Use

Figure 4: Response of a 1-D LoG filter to a step edge. The left-hand graph shows a 1-D image, 200 pixels long, containing a step edge. The right-hand graph shows the response of a 1-D LoG filter with Gaussian σ = 3 pixels.

Laplacian of Gaussian - Example

The image is the effect of applying an LoG filter with Gaussian σ = 1.0, again using a 7×7 kernel.

Laplacian of Gaussian (zero crossing detector)

The starting point for the zero crossing detector is an image which has been filtered using the Laplacian of Gaussian filter.

However, zero crossings also occur at any place where the image intensity gradient starts increasing or starts decreasing, and this may happen at places that are not obviously edges. Often zero crossings are found in regions of very low gradient where the intensity gradient wobbles up and down around zero. Once the image has been LoG filtered, it only remains to detect the zero crossings. This can be done in several ways.

The simplest is to threshold the LoG output at zero, to produce a binary image where the boundaries between foreground and background regions represent the locations of zero crossing points. These boundaries can then be easily detected and marked in a single pass, e.g. using some morphological operator. For instance, to locate all boundary points, we simply have to mark each foreground point that has at least one background neighbor.
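The foreground/background marking described above can be done with one morphological erosion (a sketch assuming SciPy):

```python
import numpy as np
from scipy.ndimage import binary_erosion

def zero_crossings(log_image):
    """Mark foreground points (LoG >= 0) that have at least one
    background neighbour in the 4-neighbourhood."""
    fg = log_image >= 0
    # Erosion removes the boundary layer of the foreground; the
    # difference is exactly the set of boundary points.
    return fg & ~binary_erosion(fg)
```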

Laplacian of Gaussian (zero crossing detector)

The problem with this technique is that it will tend to bias the location of the zero crossing edge to either the light side of the edge or the dark side of the edge, depending on whether it is decided to look for the edges of foreground regions or for the edges of background regions.

A better technique is to consider points on both sides of the threshold boundary, and choose the one with the lowest absolute magnitude of the Laplacian, which will hopefully be closest to the zero crossing.

Since the zero crossings generally fall in between two pixels in the LoG filtered image, an alternative output representation is an image grid which is spatially shifted half a pixel across and half a pixel down, relative to the original image. Such a representation is known as a dual lattice. This does not actually localize the zero crossing any more accurately, of course.

A more accurate approach is to perform some kind of interpolation to estimate the position of the zero crossing to sub-pixel precision.

Laplacian of Gaussian - Applications

The behavior of the LoG zero crossing edge detector is largely governed by the standard deviation of the Gaussian used in the LoG filter. The higher this value is set, the more small features will be smoothed out of existence, and hence fewer zero crossings will be produced. Hence, this parameter can be set to remove unwanted detail or noise as desired. The idea that at different smoothing levels different-sized features become prominent is referred to as 'scale'.

This image contains detail at a number of different scales.

The image is the result of applying a LoG filter with Gaussian standard deviation 1.0.

Laplacian of Gaussian - Applications

Note that in this and in the following LoG output images, the true output contains negative pixel values. For display purposes the graylevels have been offset so that displayed graylevel 128 corresponds to an actual value of zero, and rescaled to make the image variation clearer. Since we are only interested in zero crossings this rescaling is unimportant.

The image shows the zero crossings from this image. Note the large number of minor features detected, which are mostly due to noise or very faint detail. This smoothing corresponds to a fine 'scale'.

Marr (1982) has suggested that human visual systems use zero crossing detectors based on LoG filters at several different scales (Gaussian widths).

Difference of Gaussians (DoG) Operator

It is possible to approximate the LoG filter with a filter that is just the difference of two differently sized Gaussians. Such a filter is known as a DoG filter (short for 'Difference of Gaussians').

For retinal ganglion cells and LGN neurons, the receptive field (RF) has a roughly circular, center-surround organisation. Two configurations are observed: one in which the RF center is responsive to bright stimuli (ON-center) and the surround is responsive to dark stimuli, and the other (OFF-center) in which the respective polarities are reversed.

Difference of Gaussians (DoG) Operator

The common feature of the two types of receptive fields is that the centre and surround regions are antagonistic. The DoG operator uses simple linear differences to model this centre-surround mechanism and the response R(x,t) is analogous to a measure of contrast of events happening between centre and surround regions of the operator. (see Figure 5)

Difference of Gaussians (DoG) Operator

The difference of Gaussians (DoG) model is composed of the difference of two response functions that model the centre and surround mechanisms of retinal cells. Mathematically, the DoG operator is defined as

$\mathrm{DoG}(\mathbf{x}) = \alpha_c\, G(\mathbf{x}; \sigma_c) - \alpha_s\, G(\mathbf{x}; \sigma_s)$  (15)

where G is a two-dimensional Gaussian operator at x:

$G(\mathbf{x}; \sigma) = \frac{1}{2\pi\sigma^2}\, e^{-|\mathbf{x}|^2 / 2\sigma^2}$

Parameters σ_c and σ_s are the standard deviations of the centre and surround Gaussian functions respectively. These Gaussian functions are weighted by the integrated sensitivities α_c (for the centre) and α_s (for the surround). The response R(x, t) of the DoG filter to an input signal s(x, t) at x during time t is given by:

$R(\mathbf{x}, t) = \iint \mathrm{DoG}(\mathbf{x} - \mathbf{w})\, s(\mathbf{w}, t)\, dw_x\, dw_y$  (16)
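Because convolution is linear, the response of Eq. (16) for a single image can be computed as a weighted difference of two Gaussian-blurred copies (a sketch assuming SciPy; `alpha_c` and `alpha_s` stand in for the centre/surround sensitivities):

```python
from scipy.ndimage import gaussian_filter

def dog_response(f, sigma_c, sigma_s, alpha_c=1.0, alpha_s=1.0):
    """Eq. (15) applied to an image: centre minus surround."""
    f = f.astype(float)
    return (alpha_c * gaussian_filter(f, sigma_c)
            - alpha_s * gaussian_filter(f, sigma_s))
```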

Extending the DoG operator to detect changes in a sequence of images

(Figure: block diagram. “On” and “off” events are extracted in parallel in several multiscale maps within a temporal window of size T = 2 (for “on”-center events, Nc(x, t) - Nc(x, t-1) >= 1). After normalization there is competition among the “on” maps and among the “off” maps; a multi-agent reinforcement system driven by evaluative feedback selects the location and scale of the most significant “on” and “off” events. A scale S represents a pair of values for σ_c and σ_s.)

Parametric edge models

Parametric models are based on the idea that the discrete image intensity function can be considered a sampled and noisy approximation of an underlying continuous or piecewise continuous image intensity function. Therefore we try to model the image as a simple piecewise analytical function, and the task becomes the reconstruction of the individual piecewise functions. We try to find simple functions which best approximate the intensity values only in the local neighborhood of each pixel. This approximation is called the facet model (Haralick and Shapiro, 1992). These functions, and not the pixel values, are used to locate edges in the image.

To provide an edge detection example, consider a bi-cubic polynomial facet model:

$f(x, y) = k_1 + k_2 x + k_3 y + k_4 x^2 + k_5 xy + k_6 y^2 + k_7 x^3 + k_8 x^2 y + k_9 x y^2 + k_{10} y^3$
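The coefficients k1..k10 can be estimated for each pixel by a least-squares fit over a small neighbourhood (a sketch assuming NumPy; the window size is illustrative):

```python
import numpy as np

def fit_facet(patch):
    """Least-squares fit of the bi-cubic facet model to an odd-sized
    window centred on the pixel of interest; returns k1..k10."""
    n = patch.shape[0] // 2
    y, x = np.mgrid[-n:n + 1, -n:n + 1]
    x = x.ravel().astype(float)
    y = y.ravel().astype(float)
    # Design matrix: the ten basis functions of the bi-cubic model.
    A = np.stack([np.ones_like(x), x, y, x**2, x*y, y**2,
                  x**3, x**2 * y, x * y**2, y**3], axis=1)
    k, *_ = np.linalg.lstsq(A, patch.ravel().astype(float), rcond=None)
    return k  # k[0] = k1, ..., k[9] = k10
```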

Parametric edge models

The first derivative in the direction θ is given by

$f'_\theta(x, y) = \frac{\partial f}{\partial x}\cos\theta + \frac{\partial f}{\partial y}\sin\theta$

The second directional derivative in the direction θ is given by

$f''_\theta(x, y) = \frac{\partial^2 f}{\partial x^2}\cos^2\theta + 2\frac{\partial^2 f}{\partial x \partial y}\cos\theta\sin\theta + \frac{\partial^2 f}{\partial y^2}\sin^2\theta$

Taking θ to be the gradient direction at the pixel,

$\cos\theta = \frac{k_2}{\sqrt{k_2^2 + k_3^2}}, \qquad \sin\theta = \frac{k_3}{\sqrt{k_2^2 + k_3^2}}$

We are considering points only on the line in the direction θ, with x_0 = ρ cos θ and y_0 = ρ sin θ. Substituting into the bi-cubic model gives

$f''_\theta = 6\left(k_7\cos^3\theta + k_8\cos^2\theta\sin\theta + k_9\cos\theta\sin^2\theta + k_{10}\sin^3\theta\right)\rho + 2\left(k_4\cos^2\theta + k_5\cos\theta\sin\theta + k_6\sin^2\theta\right) = 6A\rho + 2B$

Parametric edge models

There are many possibilities, where C = k_2 cos θ + k_3 sin θ is the first-derivative term along the line:

A = 0: f' = 2Bρ + C, f'' = 2B, ρ_l = -C/B, ρ_s = 0

1. if B > 0 and f'' > 0: valley

2. if B < 0 and f'' < 0: ridge

3. if B = 0 and f'' = 0: plane