CHAPTER 6
DETECTION OF DIABETIC RETINOPATHY USING ANFIS
6.1 INTRODUCTION
Diabetic retinopathy (DR), a complication of diabetes, is one of the
most significant causes of blindness, so early diagnosis and timely
treatment are particularly important to prevent visual loss. If the
symptoms are identified early and proper treatment is provided through
regular screenings, blindness can be avoided. To reduce the cost of
these screenings, modern image processing techniques are used to automatically
detect the presence of abnormalities in retinal images. The earliest signs of
diabetic retinopathy are damage to the blood vessels in the eye, followed by the
formation of lesions in the retina. Automatic detection of lesions in retinal
images can therefore assist in the early diagnosis and screening of diabetic
retinopathy. Two approaches, a pixel based Color Histogram (CH) technique and
image based anatomical and textural feature extraction, are proposed to detect
the presence of exudates, an early occurring lesion, in color fundus images.
The extracted features are then fed to an Adaptive Neuro-Fuzzy Inference System
(ANFIS) to classify the images into stages of DR.
This chapter is organized as follows: Section 6.2 describes the
identification of two anatomical structures namely blood vessels and macula.
Section 6.3 explains the proposed method to detect exudates using pixel based
approach and image based approach. Quantitative analysis of pixel based
approach using color histogram technique is presented in section 6.4 and
performance analysis of the technique is presented in section 6.5. Section 6.6
describes the feature set used for classification of true bright lesions from
bright non-lesions and section 6.7 describes the performance results of the
classifier. GUI developed for the detection of exudates is explained in section
6.8 and the conclusions are provided in section 6.9.
6.2 DETECTION OF ANATOMICAL STRUCTURES
Detection of the anatomical structures is fundamental to the
subsequent characterization of the normal or diseased state of the
retina. A fundus image analysis system is developed to extract landmark
features such as the retinal blood vessels, macula and optic disc (OD) before
identifying pathological entities such as hard exudates.
6.2.1 Blood Vessel Extraction
The blood vessels in retinal images are a key indicator for the diagnosis of
diseases such as diabetic retinopathy, hypertension and various vascular
disorders. Reliable methods for segmentation of blood vessels in fundus
images are needed, since pathologies may otherwise be misinterpreted as
vessels. To address this, the multiscale techniques described by Palomera et al
(2010) are used to isolate information about objects by considering
geometrical features at different scales. Blood vessels are extracted using the
eigenvalues of the image in a multiscale framework. The Hessian
matrix describes the second order local image intensity variations around the
selected volumetric pixel (voxel); its partial derivatives are calculated as
voxel intensity differences in the neighborhood of the voxel. The Hessian matrix
is computed for each pixel in the image by convolving the original fundus image
R(x, y) with the second derivative of a Gaussian kernel G(x, y; s) at scale s, as
shown in Equation (6.1). From the resulting Hessian matrix, the eigenvalues
and eigenvectors are calculated.
L(x, y) = R(x, y) ∗ G(x, y; s)                                  (6.1)

where G(x, y; s) = (1/(2πs²)) exp(−(x² + y²)/(2s²)), L(x, y) is the
intensity image, ∗ denotes convolution and s² is the variance of the
Gaussian kernel. The gradient magnitude of the image intensity is found using
Equation (6.2) and is calculated at different scales, since blood vessels
appear in different sizes.

|∇L| = √(Lx² + Ly²)                                             (6.2)

where Lx = I(x, y) ∗ sGx and Ly = I(x, y) ∗ sGy. Gx and Gy are the
Gaussian derivatives in the x and y directions, and Lx, Ly are the first
derivatives of the intensity in the x and y directions. The local maximum of
the gradient magnitude over scales is calculated using Equation (6.3).

|∇L|max = max over s of |∇L(x, y; s)|                           (6.3)

The large eigenvalue λ+ and the small eigenvalue λ− of the Hessian of the
intensity image are calculated using Equation (6.4).

λ+ = (Lxx + Lyy + α)/2   and   λ− = (Lxx + Lyy − α)/2           (6.4)

where α = √((Lxx − Lyy)² + 4Lxy²). Lxx and Lyy are the second derivatives of
the intensity image in the x and in the y direction, and Lxy is the mixed
second derivative. The local maximum of λ over scales is calculated using
Equation (6.5).

λmax = max over s of λ(s)                                       (6.5)
Table 6.1 summarizes the relations between the eigenvalues and the
orientation of a structure in the image.
Table 6.1 Eigen values of the Hessian matrix and image structure
orientation

λ1   λ2   λ3   Structure Orientation
L    L    L    Noise (no preferred structure)
L    L    H−   Bright sheet-like structure
L    L    H+   Dark sheet-like structure
L    H−   H−   Bright tubular structure
L    H+   H+   Dark tubular structure
H−   H−   H−   Bright blob-like structure
H+   H+   H+   Dark blob-like structure

(H = high, L = low, N = noisy; +/− indicate the sign of the eigen value)
The eigenvalues play an important role in discriminating the local
orientation pattern. Eigenvector decomposition extracts an orthonormal
coordinate system aligned with the second order structure of the image.
Vessel structures are modeled as tubular structures. Given this
theoretical behavior of the eigenvalues and the model of the structure to be
detected, it can be decided whether the analyzed voxel belongs to the
structure being searched for. By thresholding the image formed by the
smallest eigenvalue, a complete vessel structure is obtained.
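As a sketch of how Equation (6.4) is applied in practice, the following Python/NumPy fragment computes the per-pixel Hessian eigenvalues at a single Gaussian scale and thresholds the smallest eigenvalue; the scale and threshold values here are illustrative assumptions, not the thesis's tuned parameters:

```python
import numpy as np

def hessian_eigenvalues(image, scale=2.0):
    """Closed-form eigenvalues of the 2-D Hessian at every pixel (Eq. 6.4)."""
    # Gaussian smoothing at the chosen scale (separable 1-D kernel).
    r = int(3 * scale)
    x = np.arange(-r, r + 1)
    g = np.exp(-x**2 / (2 * scale**2))
    g /= g.sum()
    sm = np.apply_along_axis(lambda m: np.convolve(m, g, mode='same'), 0, image)
    sm = np.apply_along_axis(lambda m: np.convolve(m, g, mode='same'), 1, sm)
    # Second-order partial derivatives via repeated central differences.
    Ly, Lx = np.gradient(sm)
    Lxy, Lxx = np.gradient(Lx)
    Lyy, _ = np.gradient(Ly)
    # Eq. (6.4): lambda = (Lxx + Lyy +/- alpha)/2, alpha = sqrt((Lxx-Lyy)^2 + 4*Lxy^2)
    alpha = np.sqrt((Lxx - Lyy)**2 + 4 * Lxy**2)
    return (Lxx + Lyy + alpha) / 2, (Lxx + Lyy - alpha) / 2

def vessel_mask(image, scale=2.0, thresh=-1e-3):
    """Bright tubular structures give a strongly negative smallest eigenvalue."""
    _, lam_small = hessian_eigenvalues(image, scale)
    return lam_small < thresh
```

A full implementation would repeat this over several scales and keep the strongest response per pixel, as Equations (6.3) and (6.5) indicate.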
6.2.2 Detection of Macula
Macula is the area of acute vision within the retina. Region of
interest (ROI) is taken over the optic disc boundary. The optic disc boundary
is traced and the diameter of the OD is calculated. Optic disc diameter is
calculated to find macular region since macula lies at a distance of twice the
optic disc diameter. First the macula region roughly estimated using the OD
diameter is shown in Figure 6.1.
Figure 6.1 Distance between macula and optic disc
To extract the desired feature in the image and to remove the uneven
background illumination, a top-hat filtering technique is used with a disc
shaped structuring element of radius 13. Top-hat filtering computes the
morphological opening of the image, shown in Figure 6.2, and the opened
image is then subtracted from the original image, as in Figure 6.3.
Figure 6.2 Filtered image
Figure 6.3 Subtracted Image Figure 6.4 Detected macula
The macula region appears black due to its lower intensity, and inverse
binarization is performed on the detected image to identify the macula, as
shown in Figure 6.4.
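A minimal sketch of this step, assuming SciPy's grey-scale morphology and treating the strongest response of the flattened, inverted image as the macula (the radius 13 disc follows the text; the peak-picking rule is a simplification of the inverse-binarization step):

```python
import numpy as np
from scipy import ndimage

def disc_element(radius):
    """Boolean disc-shaped structuring element."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return x**2 + y**2 <= radius**2

def find_macula(green_channel, radius=13):
    """Invert so the dark macula becomes bright, then white top-hat
    (image minus its morphological opening) to flatten the background."""
    inverted = green_channel.max() - green_channel
    opened = ndimage.grey_opening(inverted, footprint=disc_element(radius))
    top_hat = inverted - opened
    # Location of the strongest response, taken as the macula centre.
    return np.unravel_index(np.argmax(top_hat), top_hat.shape)
```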
6.3 DETECTION OF DIABETIC RETINOPATHY
Figure 6.5 Flow diagram to detect diabetic retinopathy

[Flow diagram: color fundus image → color space conversion (RGB to
L*a*b*) → fundus region detection and mask creation → local contrast
enhancement → detection of candidate pixels → optic disc elimination.
The pixel based branch applies color histogram thresholding followed by
performance evaluation; the image based branch extracts features
(including blood vessel area) and classifies them using ANFIS. If
exudates are present, the image is labeled Diabetic Retinopathy;
otherwise it is labeled Normal.]
The flow diagram of the proposed fundus image analysis system is
shown in Figure 6.5. The fundus retinal image in RGB color space, shown in
Figure 6.6(a), is transformed into the L*a*b* color space, shown in Figure 6.6(b).

(a) Color fundus image (b) L*a*b* color space

Figure 6.6 Input DR image and color space conversion
6.3.1 Fundus Region Detection
A retinal color fundus image comprises a circular fundus and a
dark background surrounding it. It is important to separate the fundus
from its background so that further processing is carried out only on the
fundus and is not hindered by pixels belonging to the background. In this sub-
section, a method for creating a binary fundus mask prior to lesion detection
is described. For fundus region detection, a binarization process is first
applied to the fundus color image to convert the L*a*b* color space
image into a binary image. The input image is converted to grayscale and then
to binary by thresholding: all pixels in the input image with luminance
greater than the threshold are replaced with the value 1 (white) and all other
pixels with the value 0 (black). Following that, the binary image is
morphologically closed. Morphological closing of a binary image is defined as
the dilation of the image followed by the erosion of the dilated image. The
closing operation smoothens the boundaries, reduces small inward bumps, joins
narrow breaks and fills small holes caused by noise. In the fundus mask shown
in Figure 6.7, pixels belonging to the fundus are marked with 1's and the
background of the fundus with 0's.
Figure 6.7 Fundus mask
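The thresholding and closing steps above can be sketched as follows, using SciPy's binary morphology; the threshold and iteration count are illustrative choices:

```python
import numpy as np
from scipy import ndimage

def fundus_mask(gray, thresh=0.1, close_iters=3):
    """Binary fundus mask: threshold, then morphological closing
    (dilation followed by erosion) to smooth the boundary and fill
    small holes caused by noise."""
    binary = gray > thresh  # fundus pixels -> 1, background -> 0
    return ndimage.binary_closing(binary, iterations=close_iters)
```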
With the help of the fundus mask an exudates detection algorithm
can process only the pixels of the fundus and omit the background pixels as
shown in Figure 6.8.
Figure 6.8 Fundus Mask area
Before searching for abnormal lesions in an acquired fundus image, the
image has to be pre-processed to ensure an adequate level of success in
abnormality detection. There is a wide variation in the color of the fundus
image due to race and iris color, and the contrast of retinal images is often
insufficient due to the attributes of lesions and decreasing color saturation.
Let the intensity of a digital image be indexed by (i, j), and let a small
running window W of size U × U be centered on (i, j). The objective of this
technique is to define a point transformation, dependent on the window, such
that the distribution is localized around the mean of the intensity and covers
the entire intensity range. The Adaptive Histogram Equalization (AHE)
technique, a local contrast enhancement method developed by Sinthanayothin
(1999), is applied to the intensity image as in Equation (6.6) to improve both
the contrast of bright lesions and the overall color saturation of the retinal
image.
f(i, j) = 255 · [ψW(f) − ψW(fmin)] / [ψW(fmax) − ψW(fmin)]      (6.6)

where ψW(f) = [1 + exp((μW − f)/σW)]⁻¹. fmax and fmin are the maximum
and minimum intensity values within the whole eye image, μW indicates
the local window mean and σW indicates the standard deviation of the
intensity within W.
As a result of this adaptive histogram equalization, the poorly
illuminated dark areas of the eye image become brighter in the output
image, while the highly illuminated areas remain unchanged or are reduced, so
that the illumination of the whole eye image becomes uniform. The enhanced
image suppresses the background features and improves vessel visibility.
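Equation (6.6) can be sketched as below, with the local window mean and standard deviation computed by a uniform filter; the window size and the exponent clipping are illustrative choices:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_enhance(image, win=15):
    """Local contrast enhancement of Eq. (6.6)."""
    mu = uniform_filter(image, size=win)                             # local mean
    var = np.maximum(uniform_filter(image**2, size=win) - mu**2, 1e-6)
    sigma = np.sqrt(var)                                             # local std-dev

    def psi(v):
        # Sigmoid centred on the local mean; exponent clipped for stability.
        return 1.0 / (1.0 + np.exp(np.clip((mu - v) / sigma, -50, 50)))

    f_min, f_max = image.min(), image.max()
    return 255.0 * (psi(image) - psi(f_min)) / (psi(f_max) - psi(f_min))
```

Because ψ is monotonic and f lies between fmin and fmax, the output stays in the full 0-255 range.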
6.3.2 Nonlinear Diffusion Segmentation
In this step, the segmentation of lesions is modeled in a framework
that encapsulates the variation in exudates and in lesion boundary criteria.
The goal is to localize the lesion boundaries, for which nonlinear diffusion
segmentation is used. The basis of nonlinear diffusion segmentation is the
identification of similar pixels within a region to determine the location of
a boundary. The nonlinear diffusion method proposed by Perona and Malik (1990)
is employed to avert the blurring and localization issues of linear
diffusion filtering. The nonlinear diffusion method applies a non-uniform
process that lowers the diffusivity at locations with a larger likelihood
of being edges. This likelihood can be calculated from |∇u| as in Equation (6.7).
∂u/∂t = div(g(|∇u|²) ∇u)                                        (6.7)

where u refers to the image, div is the divergence operator and ∇ is the
gradient operator. The amount of smoothing at each location is modulated
by the diffusivity g, evaluated at the present magnitude of the gradient,
using Equation (6.8); s = |∇u| refers to the size of the image gradient.

g(s²) = 1 / (1 + s²/e²),   e > 0                                (6.8)

The diffusivity function g: s² → [0, 1] is a decreasing function of
either the size of the image gradient or the smoothed gradient. The function
g(s²) detects the presence of an edge at a particular position. If s² is small,
there is a minor probability of an edge at that position, and g is close to 1;
if s² is large, the location is likely to belong to an edge, and the value of g
will be close to zero. e is an edge threshold parameter. The segmented image is
shown in Figure 6.9(a) and the region with similar intensity in Figure 6.9(b).
(a) Segmented image (b) Similar intensity region
Figure 6.9 Region marked with similar intensity
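Equations (6.7) and (6.8) lead to the following explicit iteration, a standard four-neighbour Perona-Malik scheme; the time step, iteration count and edge threshold here are illustrative choices:

```python
import numpy as np

def perona_malik(u, iterations=20, dt=0.2, e=0.1):
    """Explicit scheme for du/dt = div(g(|grad u|^2) grad u) with
    g(s^2) = 1 / (1 + s^2 / e^2)  (Eqs. 6.7-6.8)."""
    u = u.astype(float).copy()
    g = lambda d: 1.0 / (1.0 + (d / e) ** 2)  # diffusivity of Eq. (6.8)
    for _ in range(iterations):
        # One-sided differences to the four neighbours (periodic borders).
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        # Each flux is weighted by the diffusivity, so strong edges diffuse little.
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

The flux form makes the scheme conservative: the total intensity is preserved while isolated noise is smoothed and strong edges are kept.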
6.3.3 Localization of Optic Disc
Optic disc has to be identified prior to the process of exudates
detection since it appears with similar intensity, color and contrast to other
features on the retinal image. Localization of an optic disc is a vital step in the
automated retinal image screening system. The optic disc is exemplified by
the largest high contrast among circular shape areas. It is noticed to be in oval
shape with an average diameter of 1.5 to 1.7 mm and approximately 3 mm
nasal to the fovea. For this, again the segmented fundus image is converted
into binary image. Regions with high intensity value (exudates and optic disc)
are grouped into white (1 pixel) and the other into regions as black (0 pixel).
Subsequently, to locate the optic disc in the color fundus image, color
histogram equalization technique is then applied independently for each
extracted regions.
Color Histogram (CH) is widely used as an important color feature
indicating the content of the image, due to its robustness to scaling,
orientation, perspective, and occlusion of images. CH is based on the intensity
of the three channels and represents the number of pixels that have colors in
each of a fixed list of color ranges. Given a color space containing B color
bins, the color histogram of a color image with n pixels is represented as
a vector H = [h0, h1, h2, ..., hB−1], in which each entry hi indicates the
fraction of the colors in the color image which belong to the ith bin, as
shown in Equation (6.9).

hi = ni / n,   i = 0, 1, ..., B − 1                             (6.9)

where ni is the number of pixels with colors in the ith color bin. Clearly, the
more bins a color histogram contains, the more discriminative power it has.
However, a histogram with a large number of bins will not only increase the
computational cost, but will also be inappropriate for building efficient
indexes for an image database. The maximum pixel value of the color histogram,
localized as the optic disc, is shown in Figure 6.10. Five color bins are
used in each of the three color channels, resulting in a total of 125 bins. In
this method, to bin the color triplets, each (L*, a*, b*) triplet is truncated
so that each value can only be a multiple of 25 up to a maximum of 255. The
triplets are normalized by the sum of their values, and the color difference
is then calculated as the Euclidean distance between two color triplets.
Figure 6.10 Optic disc localization
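The binned histogram of Equation (6.9) can be sketched as below, with 5 bins per channel giving B = 125 as in the text; a channel range of 0-255 is assumed:

```python
import numpy as np

def color_histogram(image, bins_per_channel=5):
    """Normalised colour histogram h_i = n_i / n over B = bins^3 bins (Eq. 6.9)."""
    n = image.shape[0] * image.shape[1]
    pixels = image.reshape(-1, 3)
    # Map each channel value in [0, 255] to a bin index 0..bins-1.
    idx = np.clip(pixels * bins_per_channel // 256, 0,
                  bins_per_channel - 1).astype(int)
    # Flatten the three bin indices into a single bin number 0..B-1.
    flat = (idx[:, 0] * bins_per_channel**2
            + idx[:, 1] * bins_per_channel + idx[:, 2])
    hist = np.bincount(flat, minlength=bins_per_channel**3).astype(float)
    return hist / n  # entries sum to 1
```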
6.3.4 Detection of Soft and Hard Exudates using Color Histogram
(Pixel based approach)
Input: Color fundus image
Output: Segmentation results by color histogram thresholding.
Once the optic disc is localized from the color fundus image, the
exudates are detected based on the color histogram thresholding. In this
method, spatial information about the colors is incorporated by dividing the
masked image into blocks.
1. The color fundus image is divided into a number of non-overlapping
blocks.

2. A mask image MI is created for the original image. Subsequently
MI is split into blocks of size v × v.

3. The color histogram is calculated for each block of the image.

4. Using a threshold value based on the color histogram, soft
exudates are detected over the color fundus image. The
threshold is chosen in a very tolerant manner to differentiate
between the hard and soft exudate regions in a color fundus
image. Finally, based on the chosen threshold value, the soft
and hard exudates are detected from the color fundus retinal
image. Soft exudate pixels, shown in Figure 6.11, are detected
when the threshold value is greater than 0.8 and less than 0.85.
Figure 6.11 Detection of soft and hard exudates with optic disc masked
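The block-wise steps above can be sketched as follows; the per-block statistic here is a simple normalized-intensity mean, a hypothetical stand-in for the thesis's block colour-histogram measure, with the (0.8, 0.85) soft-exudate band taken from step 4:

```python
import numpy as np

def classify_blocks(gray, block=16, lo=0.8, hi=0.85):
    """Split a normalised image into non-overlapping blocks and label each
    block as soft-exudate (statistic in (lo, hi)) or hard-exudate (>= hi)."""
    h, w = gray.shape
    soft = np.zeros((h // block, w // block), dtype=bool)
    hard = np.zeros_like(soft)
    for bi in range(h // block):
        for bj in range(w // block):
            patch = gray[bi * block:(bi + 1) * block,
                         bj * block:(bj + 1) * block]
            v = patch.mean()  # stand-in for the block colour-histogram value
            soft[bi, bj] = lo < v < hi
            hard[bi, bj] = v >= hi
    return soft, hard
```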
Input images with different DR severity stages are shown in Figure
6.12 and the corresponding detected exudates in Figure 6.13.
Figure 6.12 DR images
Figure 6.13 Detected exudates
The detected pixels are compared with the ground truth, in which each
exudate pixel is marked by an expert; this is shown for two images in
Figure 6.14. The numbers of exudate pixels correctly detected or missed are
identified, from which sensitivity and specificity are calculated. The
method works better than computing the histogram over the entire image.
(a) Input image (b) Segmented blood vessels
(c) Detected exudates (d) Ground truth pixels
Figure 6.14 Comparison of detected and ground truth pixels
6.4 QUANTITATIVE ANALYSIS OF EXUDATES DETECTION
The quantitative results generated by the CH technique can be used to
diagnose or evaluate the progress of the illness. Pixels detected using the
algorithm are compared with the ophthalmologist's hand-drawn ground truth.
To evaluate the performance, sensitivity, specificity and accuracy are
calculated on a per-pixel basis.
Table 6.2 Quantitative analysis of pixel based approach for exudates
detection

Image  Ground truth pixels  Detected exudates  TP    FP    FN  TN     Sensitivity (%)  Specificity (%)  Accuracy (%)
1      8431                 11428              7997  1052  80  34492  99.01            97.88            98.09
2      2022                 2350               1985  472   18  26148  99.1             99.24            99.23
3      840                  1269               1793  457   20  29498  99.01            99.47            99.44
4      325                  964                916   648   9   29477  99.03            99.74            99.71
5      1392                 2003               3499  592   32  29388  98.87            99.35            99.31
6      1533                 2456               4783  423   43  28342  99.11            98.53            98.61
7      2043                 3243               5686  297   48  29372  99.16            98.3             98.44
8      631                  1283               2892  337   28  29498  99.04            99.2             99.19
9      1613                 2870               3953  350   36  28351  99.1             98.78            98.82
10     529                  1473               1519  453   13  29496  99.15            98.87            98.88
11     507                  1369               2458  528   23  29517  99.07            99.23            99.22
12     326                  1366               792   496   5   30513  99.37            99.71            99.7
13     1295                 2066               1254  384   12  28505  99.32            99.25            99.26
14     713                  1599               978   435   8   28436  99.19            99.63            99.62
15     9127                 11957              9012  503   88  28184  99.03            99.28            97.7
16     1209                 1764               1878  586   16  29360  99.16            99.37            99.36
17     1189                 2075               1362  438   11  27265  99.2             99.57            99.55
18     674                  1370               1256  352   10  26184  99.21            99.61            99.59
19     375                  817                372   445   3   29147  99.2             99.88            99.87
20     3017                 5185               2973  534   30  27498  99               99.16            99.14
21     15296                18570              9453  566   94  28437  99.02            97.04            97.53
22     3878                 5108               3654  478   35  26995  99.05            98.62            98.67
23     2571                 4109               2953  522   29  27444  99.03            99.2             99.18
Pixel-based evaluation considers four values: true positive (TP),
the number of exudate pixels correctly detected; false positive (FP),
the number of non-exudate pixels wrongly detected as exudate pixels;
false negative (FN), the number of exudate pixels that were not detected;
and true negative (TN), the number of non-exudate pixels correctly
identified as non-exudate pixels. Table 6.2 shows the quantitative results for
TP, FP, FN, TN, sensitivity, specificity and accuracy for the images of
diseased eyes.
6.5 PERFORMANCE ANALYSIS OF EXUDATES DETECTION
The performance of exudates detection using color histogram
technique was evaluated quantitatively by comparing the detected pixels with
ophthalmologist’s hand-drawn ground truth images pixel by pixel. From these
quantities, the sensitivity, specificity and accuracy were computed using the
Equations (6.10), (6.11) and (6.12) respectively. Calculations for the fifth
image in Table 6.2 are shown below.
Sensitivity = TP / (TP + FN) (6.10)
= 3499/ (3499+32)
= 98.87%
Specificity = TN / (TN + FP) (6.11)
= 29388 / (29388+592)
= 99.35%
Accuracy = (TP + TN) / (TP + FP + FN + TN) (6.12)
= (3499+29388) / (3499+592+32+29388)
= 99.31 %
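Equations (6.10)-(6.12) amount to the following small helper:

```python
def pixel_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and accuracy (Eqs. 6.10-6.12), in percent."""
    sensitivity = 100.0 * tp / (tp + fn)
    specificity = 100.0 * tn / (tn + fp)
    accuracy = 100.0 * (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy
```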
Color histogram technique when evaluated on 200 real time images
provided an average sensitivity of 99.11%, specificity of 98.32% and an
accuracy of 99.1%.
6.6 IMAGE BASED APPROACH USING ANATOMICAL AND TEXTURE FEATURES
The segmentation of bright lesions results in a set of candidate
bright lesion objects. The aim of the candidate bright lesion classification
system is to classify the detected objects as either bright lesions or bright
non-lesions. The bright non-lesion false positives are due to the influence of
cluster overlapping and non-uniformity of gray level, as well as the presence
of regions with high background brightness. In order to remove such
candidates, classifiers trained with features derived from the candidates
are used.
6.6.1 Feature Extraction
The feature extraction stage refers to pixel characterization by means
of a feature vector: a representation of each pixel in terms of
quantifiable measurements which may be used in the classification stage to
decide whether the pixel belongs to a real exudate or not. In order to
classify the segmented regions into exudates and non-exudates, the images must
be represented with relevant and significant features that provide the best
class separability.
Texture analysis is used to extract the features of the retina;
texture is defined by a set of statistics extracted from the segmented region.
A suitable feature set is extracted from the enhanced retinal images and from
the detected anatomical structures, which include the macula, blood vessels
and the optic disc. Direct segmentation methods for DR are more complex
because the texture of unhealthy areas of the retina is quite irregular.
Texture is therefore examined only on the segmented image, without including
the background of the image. Features extracted from the segmented regions for
the categories normal, mild, moderate and severe are shown in Table 6.3.
Table 6.3 Feature extraction from real time images

Image  Blood vessel area (pixels)  Area of candidate exudates (pixels)  Contrast  Correlation  Energy  Homogeneity  Entropy
1 13848 914 1.3876 0.8156 0.8093 0.9739 6.7445
2 15513 626 1.4798 0.8244 0.8165 0.9751 6.7415
3 14198 711 1.3914 0.8136 0.8079 0.9785 7.0132
4 13767 856 1.4321 0.8132 0.8298 0.9718 6.7967
5 17456 813 1.4825 0.8197 0.8053 0.9757 6.7507
6 17866 1057 1.4529 0.8138 0.8067 0.9712 6.7625
7 29259 557 1.2824 0.8367 0.8199 0.9778 6.6593
8 29543 296 1.2537 0.8385 0.8037 0.9759 6.6474
9 30454 456 1.1738 0.8352 0.8174 0.9731 6.5733
10 29623 256 1.2681 0.8319 0.7901 0.9725 6.3801
11 31409 437 1.1036 0.8484 0.8133 0.9714 6.4041
12 33757 355 1.0772 0.8476 0.8141 0.9799 6.6659
13 27212 49 0.6321 0.9324 0.7414 0.9851 5.4236
14 22784 75 0.6021 0.9387 0.7389 0.9804 6.5452
15 24986 58 0.8752 0.9344 0.7456 0.9805 6.4832
16 25437 147 0.6454 0.9351 0.7361 0.9885 6.4734
17 23531 64 0.7252 0.9312 0.7441 0.9807 6.4251
18 24985 72 0.7577 0.9442 0.7412 0.9835 6.4824
19 26660 0 0.3328 0.9634 0.7228 0.9914 5.4236
20 28357 7 0.3334 0.9732 0.7278 0.9927 6.5452
21 27243 4 0.4555 0.9856 0.7264 0.9972 6.4832
22 27321 5 0.4342 0.9838 0.7218 0.9934 6.4734
23 29659 7 0.2422 0.9746 0.7281 0.9949 6.4251
24 28467 2 0.3564 0.9659 0.7315 0.9897 6.4824
The first order features extracted are mean, standard deviation,
entropy, skewness and kurtosis. The co-occurrence matrix captures the
spatial distribution of gray levels and represents the occurrence rate of a
pixel pair with gray levels i and j. Contrast, correlation, energy,
homogeneity and entropy are the set of features extracted using the Gray
Level Co-occurrence Matrix (GLCM). The area occupied by blood vessels and
the area occupied by the candidate exudates are the features extracted from
the anatomical structures. The extracted features are selected using
Sequential Floating Forward Selection (SFFS), and seven significant features,
namely area of blood vessels, area occupied by the candidate exudates,
contrast, correlation, energy, homogeneity and entropy, are fed as input to
ANFIS. The first order features do not show any discriminatory performance.
The data is normalized, and the generated data contains a normalized feature
vector computed around each pixel. The feature vectors so generated are fed to
ANFIS for classification of the images.
Graphical representations of the features for a few images representing
various stages of DR are shown in Figures 6.15 to 6.19.
Figure 6.15 Data distribution of contrast for few images
Figure 6.16 Data distribution of energy for few images
Figure 6.17 Data distribution of homogeneity for few images
Figure 6.18 Data distribution of entropy for few images
Figure 6.19 Data distribution of correlation for few images
Figure 6.20 Data distribution of blood vessels for few images
Contrast, correlation, energy, homogeneity, entropy, the area occupied
by the blood vessels and the area occupied by the candidate exudates were the
features extracted from the fundus images. The contrast feature shown in
Figure 6.15 takes different values for the normal, mild, moderate and severe
stages of diabetic retinopathy and hence gives a clear differentiation of the
classes.

Contrast provides a low value for normal images and a value
greater than 1 for the moderate and severe stages, due to the leakage of blood
vessels in the retina. There is an overlap of the normal and mild stages when
using the energy feature; as shown in Figure 6.16, energy is also less
discriminative in classifying the moderate and severe stages. The
homogeneity values in Figure 6.17 give a clear differentiation between the
moderate and severe stages but cannot effectively differentiate the initial
stage of the disease from the normal stage. The entropy values in Figure 6.18
and the correlation values in Figure 6.19 rank last in identifying the
progress of the disease.
Figure 6.20 describes the blood vessel area occupied by the pixels
in the various stages of the disease. In the mild stage of DR, the blood
vessel area is smaller than in normal images and there are a few
exudates; the distance between the macula and the bright lesions is
greater than one disc diameter.

In the moderate stage there are more exudates and fewer blood
vessels than in the mild stage, and the distance between the macula
and the exudates is less than one disc diameter. In the severe stage of the
disease, the exudates occupy a large area due to leakage of blood from the
vessels, and the distance between the exudates and the macula is smaller
still compared to the moderate stage. To analyze the severity of the disease,
the polar coordinate system shown in Figure 6.21 is overlaid on the input
image to analyze the distribution of exudates around the fovea. If the
exudates are in the fovea region, it is a severe stage of DR leading to
complete blindness.
Figure 6.21 Polar coordinate system
6.7 ADAPTIVE NEURO-FUZZY INFERENCE SYSTEM AS A
CLASSIFIER
Neuro-fuzzy systems have the potential to capture the benefits of
both neural networks and fuzzy logic in a single framework. In ANFIS, the
membership functions are extracted from a data set that describes the system
behavior; ANFIS learns features from the dataset and adjusts the system
parameters according to a given criterion. The set of features extracted from
first and second order statistics was used as input to ANFIS and the
performance was evaluated.

The ANFIS network is formed with 90 nodes, 40 linear and 70 nonlinear
parameters, and five fuzzy rules. The step size for parameter adaptation had an
initial value of 0.01. The system is loaded with the statistical features of the
fundus images along with the desired output from the workspace for training
the network. In this work, the training and testing sets consisted of 165 and
250 images respectively: 80 normal and 85 abnormal images were used for
training, and 120 normal and 130 abnormal images were used for testing. The
abnormal test images include 30 mild, 50 moderate and 50 severe DR images.
The dataset used for the classification is shown in Table 6.4, and a schematic
of the ANFIS structure obtained for the proposed system is shown in Figure 6.22.
Table 6.4 Dataset used for classification of DR images
Images Training Data Testing Data No. of Images/Class
Normal 80 120 200
Abnormal 85 130 215
Total no. of images 165 250 415
Figure 6.22 ANFIS structure for DR detection
ANFIS is initialized with 100 iterations and an error tolerance of 0.01;
the step size for parameter adaptation is 0.1. The leftmost nodes in Figure
6.22 are the input nodes. The training data produced a fuzzy inference system
containing five rules. Each input was given five membership functions and the
output was represented with two linear membership functions. The branches
in Figure 6.22 are color coded to indicate whether the AND or OR operator is
used in the rules; blue colored nodes indicate the AND operation. The if-then
rules generated by ANFIS are shown in Figure 6.23.
Contingency table for detection of exudates is shown in Table 6.5.
Table 6.5 Contingency table for exudates detection
                         Exudates Present      Exudates Absent
Exudates detected        True positive (TP)    False positive (FP)
Exudates not detected    False negative (FN)   True negative (TN)
The datasets used for the neural network based back propagation
classifier and for ANFIS are shown in Table 6.6. Seven features are used as
inputs to the ANFIS classifier for detecting the presence or absence of
exudates in the retinal images. Here, the network is trained to identify two
classes, viz. normal and abnormal. A comparative analysis is performed between
ANFIS and the back propagation neural network classifier. The classifiers are
compared based on correctly classified images, misclassified images and
classification accuracy.
Table 6.6 Classifier results for exudates detection

Stage     Test Images   ANFIS                  Back propagation
                        CCI   MI   CA (%)      CCI   MI   CA (%)
Normal    120           118   2    98.3        115   5    95.8
Abnormal  130           130   -    100         126   4    96.9

(CCI = correctly classified images, MI = misclassified images,
CA = classification accuracy)
The overall accuracy of ANFIS with the image based approach is 99.2%,
with a sensitivity of 100% and a specificity of 98.3%. For back propagation,
the accuracy is 96.4% with a specificity of 95.8% and a sensitivity of 96.9%.
The root mean square error is 0.1195 for ANFIS and 0.496 for back
propagation. Back propagation is computationally heavy and takes a long time
to converge: as the number of epochs increases the convergence time also
increases, and many epochs are required to reach the desired result in neural
networks. In the case of ANFIS, an accurate result is obtained in fewer epochs
with a reduced convergence time. In addition, the algorithm performance is
also measured with the Receiver Operating Characteristic (ROC) curve shown in
Figure 6.24.
The ROC is a graphical plot of the sensitivity, or true positives, against
(1 − specificity), or false positives, obtained by varying the threshold on
the probability map. Equivalently, the ROC plots the fraction of true
positives, known as the True Positive Rate (TPR), against the fraction of
false positives, known as the False Positive Rate (FPR). The area under the
ROC curve is 0.99, which indicates the good performance of the system. As the
true positive rate is close to 1 and the false positive rate close to 0, it
can be seen from Figure 6.24 that the proposed algorithm is effective in
detecting DR.
Figure 6.24 Receiver operating characteristic curve
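The threshold sweep described above can be sketched as follows, with the area under the curve computed by the trapezoidal rule; the threshold grid is an illustrative choice:

```python
import numpy as np

def roc_points(scores, labels, thresholds):
    """(FPR, TPR) pairs obtained by sweeping a threshold over a probability map."""
    pts = []
    labels = labels.astype(bool)
    for t in thresholds:
        pred = scores >= t
        tp = np.sum(pred & labels)
        fp = np.sum(pred & ~labels)
        fn = np.sum(~pred & labels)
        tn = np.sum(~pred & ~labels)
        tpr = tp / (tp + fn) if tp + fn else 0.0
        fpr = fp / (fp + tn) if fp + tn else 0.0
        pts.append((fpr, tpr))
    return pts

def auc(points):
    """Area under the ROC polygon by the trapezoidal rule."""
    pts = sorted(points)
    return sum((x2 - x1) * (y1 + y2) / 2.0
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))
```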
Table 6.7 Comparison of performance measures for the proposed
method and related works

Technique                                   Pixel based           Image based
                                            Sens.(%)  Spec.(%)    Sens.(%)  Spec.(%)
Morphological reconstruction                92.8      92.4        100       94.6
Color features                              -         -           100       70
Recursive region growing                    88.5      99.7        -         -
Anatomical and texture features (proposed)  99.11     98.32       100       98.3
A comparison of the proposed method with a few techniques using
pixel and image based approaches with specialized features is shown in
Table 6.7. The gray level variation and morphological reconstruction
techniques described by Walter et al (2002) detected exudates with 92.8%
sensitivity and 92.4% predictive value in a pixel based approach, and achieved
100% sensitivity and 94.6% specificity with an image based approach on 30
images. This approach cannot be used to detect soft exudates, since the
processing step requires a larger number of parameters, and improper
selection of the threshold leads to a decrease in sensitivity and specificity.
Wang et al (2000) used color features with statistical classification and
achieved 100% sensitivity and 70% specificity for 154 images; a brightness
adjustment procedure is required to distinguish exudates from the background
color near the disc, and the technique is highly sensitive to image contrast.
Sinthanayothin et al (2002) used recursive region growing segmentation to
detect exudates and reported 88.5% sensitivity for 30 images; selecting the
seed points is difficult in this technique. The experimental results show that
careful preprocessing, anatomical and textural features and an appropriate
classifier together provide excellent exudate detection performance even on
low quality images: 99.11% sensitivity and 98.32% specificity are achieved in
the pixel based approach using the color histogram, and 100% sensitivity and
98.3% specificity using anatomical and texture features.
6.8 GRAPHICAL USER INTERFACE
A Graphical User Interface (GUI) is developed to provide
ophthalmologists with information about the blood vessels and exudates. The
technique serves as a novel integrated platform for fundus image analysis
applicable to a clinical setting and can be used to monitor disease
progression. The GUI developed for an abnormal DR image is shown in Figure
6.25, and Figure 6.26 shows the GUI for a mild DR image.
Figure 6.25 GUI for an abnormal DR image
Figure 6.26 GUI for a mild DR image
6.9 SUMMARY
A novel system for the automatic detection of anatomical structures
such as the blood vessels, macula and optic disc in fundus images has been
presented in this chapter. As a large variation in fundus color is seen
among different subjects, the color information used in the preprocessing
stage serves as an important feature to distinguish between exudate and
non-exudate pixels. Careful masking of the optic disc helps in the
identification of even faint exudates near the disc, and the vessel
enhancement step makes it possible to identify exudates even at the vessel
ends. In this analysis, the input features are based on specific
characteristics of the exudates such as color and texture. The texture
features are relatively correlated but exhibit discriminatory character for
each of the images. The quantitative results generated using the color
histogram technique achieve 99.11% sensitivity and 98.32% specificity,
indicating that false positives are few in this approach.
It can be observed that the features detected using structure and
texture can be used as a supporting tool for the diagnosis of DR. The accuracy
of the system has been evaluated on a database of 250 images. The proposed
fundus image analysis system performs well in identifying the exudates, with a
sensitivity of 100% and a specificity of 98.3% for the image based approach.
The algorithm used to detect exudates using anatomical and textural features
is reliable and effective, since the true positive fraction is high and the
false positive rate is low. Compared with published methods, the proposed
method provides high specificity and accuracy. The results are encouraging,
and these methods contribute to the overall development of DR screening in
medical camps, helping clinicians to diagnose or evaluate the progress of the
illness.