
2011 Eighth International Joint Conference on Computer Science and Software Engineering (JCSSE)

    Automatic Extraction of Retinal Vessels Based on

    Gradient Orientation Analysis

    Danu Onkaew1, Rashmi Turior2, Bunyarit Uyyanonvara3, Toshiaki Kondo4

    1,2,3,4 Sirindhorn International Institute of Technology, Thammasat University 131 Moo 5, Tiwanont Road, Bangkadi, Muang, Pathumthani 12120

    Thailand. 1pingpong---'[email protected], [email protected], [email protected], [email protected]

    Abstract-Retinal vessel extraction is important for the diagnosis of numerous eye diseases and plays an important role in automatic retinal disease screening systems. This paper presents an efficient method for the automated analysis of retinal images. Fine anatomical features, such as blood vessels, are detected by analyzing the gradient orientation of the retinal images. The method is independent of image intensity and gradient magnitude; therefore, it performs accurately despite the common problems inherent to retinal images, such as low contrast and non-uniform illumination. Blood vessels with varying diameters are detected by applying this method at multiple scales. The blood vessel network is then extracted from the detected features by manual thresholding followed by a few simple morphological operations. Based on the binary vessel map obtained, we evaluate the performance of the proposed algorithm on two publicly available databases of manually labeled images (DRIVE and STARE). The receiver operating characteristic (ROC) curve, the area under the ROC curve and the segmentation accuracy are taken as the performance criteria. The results demonstrate that the proposed method outperforms other unsupervised methods in terms of maximum average accuracy (MAA). The proposed method yields an area under the ROC curve and an accuracy of 0.9037 and 0.9358 for the DRIVE database, and 0.9117 and 0.9423 for the STARE database, respectively.

    Keywords-retinal image; feature detection; feature extraction; blood vessels; gradient orientation

    I. INTRODUCTION

    Retinal images are widely used in the diagnosis and treatment of various eye diseases and also systemic diseases, such as diabetes, hypertension and retinopathy of prematurity. An automated method for analyzing retinal images is particularly important to deal with mass screening of the images [1], [2].

    The automated analysis of retinal images is a difficult task because the images taken at standard examinations are often noisy and poorly contrasted. Besides that, there are intensity variations across the image caused by non-uniform illumination of the retina, and also intensity variations between images. Much effort has been made to cope with these difficulties [1]-[9]. Unlike the previous work, however, this paper presents a novel method that is free from these common problems. The method does not directly depend on the image

    intensity and gradient magnitude, but is based on gradient orientation. For this reason, the method can detect vital anatomical features in retinal images robustly without requiring any preprocessing for image enhancement or compensation for irregularities of illumination.

    This paper is organized in four sections. Section II gives a schematic overview of our methodology and explains, step by step, the techniques required for retinal blood vessel segmentation. Experimental results and an evaluation of the algorithm on the images of the DRIVE [9] and STARE [7] databases are given in section III, followed by a discussion. Finally, the conclusion and future work are given in section IV.

    II. METHODOLOGY

    A. Gradient Orientation Analysis

    This method aims to obtain the gradient vectors of the image and normalize them into unit gradient vectors, as we are only interested in gradient orientation. The unit vectors converge on (or diverge from) anatomical features as long as the features are brighter (or darker) than the background. Therefore, the unit vectors are highly discontinuous where there are features, especially those with radially and bilaterally symmetrical structures. Radially symmetrical structures include circular features, such as microaneurysms and hemorrhages (dark spots), cotton wool spots and exudates (white spots), while bilaterally symmetrical structures correspond to linear features represented by retinal blood vessels. We detect these features in retinal images by finding discontinuities in gradient orientation. It is emphasized that this technique is effective irrespective of image intensity and contrast, because the magnitude of the gradient vectors is not used [8].

    The procedure of gradient orientation analysis (GOA) is depicted in Fig. 1. Let g(x, y) denote a retinal image (Fig. 1(a)). Fig. 1(b) shows an enlarged sub-image that corresponds to the small region encompassed by the black square box in Fig. 1(a). The sub-image contains a low-contrasted blood vessel together with bright speckles that indicate some pathology. The gradient vectors of g(x, y) are approximated by partial derivatives:


    \begin{cases} g_x(x, y) = g(x, y) * k_x \\ g_y(x, y) = g(x, y) * k_y \end{cases} \qquad (1)

    where * denotes convolution, and k_x and k_y are first-derivative operators in the x (horizontal) and y (vertical) directions, respectively. As shown in Fig. 1(c), prominent gradient vectors are seen around the boundaries of the white speckles as they are highly contrasted, while the gradient vectors corresponding to the blood vessel are not visible because of their low contrast. Fig. 1(d) shows the unit gradient vectors obtained by

    \begin{cases} n_x(x, y) = g_x(x, y) \, / \, \sqrt{g_x^2(x, y) + g_y^2(x, y)} \\ n_y(x, y) = g_y(x, y) \, / \, \sqrt{g_x^2(x, y) + g_y^2(x, y)} \end{cases} \qquad (2)

    where we assign 0's to the unit vectors (n_x(x, y), n_y(x, y)) if the denominator \sqrt{g_x^2(x, y) + g_y^2(x, y)} is too small (\leq 20). By discarding the gradient magnitude, we can now locate the blood vessel easily in Fig. 1(d). To find discontinuities in gradient orientation, we compute the first derivatives of the unit vectors:

    \begin{cases} d_{xx}(x, y) = n_x(x, y) * k_x, \quad d_{xy}(x, y) = n_x(x, y) * k_y \\ d_{yx}(x, y) = n_y(x, y) * k_x, \quad d_{yy}(x, y) = n_y(x, y) * k_y \end{cases} \qquad (3)

    where the same first-derivative operators, k_x and k_y, are once again used. The discontinuity magnitude of gradient orientation, D(x, y), may be expressed as

    D^2(x, y) = d_{xx}^2(x, y) + d_{xy}^2(x, y) + d_{yx}^2(x, y) + d_{yy}^2(x, y) \qquad (4)

    It is interesting to mention that D^2(x, y) takes smaller values for random patterns because of the smoothing effect of the derivative operator. D^2(x, y) is obtained by summing the squared first derivatives of the unit vectors in all directions. It should be noted that GOA responds strongly to discontinuous but highly structured patterns. This property is especially useful for the detection of linear and circular structures.

    Figure 1 Concept of gradient orientation analysis. (a) A retinal image. (b) An enlarged subimage corresponding to the small region encompassed with the black square shown in (a). (c) Gradient vectors of the subimage. (d) Gradient orientations of the subimage indicated by the unit gradient vectors.
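    As a concrete illustration of the steps above, the Python sketch below computes the GOA response D^2(x, y) from equations (1)-(4). The function name, the choice of scipy.ndimage.convolve, and the exact handling of the small-magnitude case are our own assumptions; the paper only specifies the equations and the magnitude cutoff of 20.

```python
import numpy as np
from scipy.ndimage import convolve

def goa_response(image, kx, ky, eps=20.0):
    """Discontinuity magnitude of gradient orientation, D^2(x, y) (eqs. 1-4).

    `image` is a 2-D float array; `kx` and `ky` are first-derivative kernels
    (e.g. the Sobel operators). `eps` is the gradient-magnitude cutoff below
    which the unit vectors are set to 0 (20 in the paper).
    """
    # Eq. (1): gradient components by convolution with derivative kernels.
    gx = convolve(image, kx)
    gy = convolve(image, ky)

    # Eq. (2): unit gradient vectors; zeroed where the magnitude is too small.
    mag = np.sqrt(gx ** 2 + gy ** 2)
    valid = mag > eps
    nx = np.where(valid, gx / np.maximum(mag, 1e-12), 0.0)
    ny = np.where(valid, gy / np.maximum(mag, 1e-12), 0.0)

    # Eq. (3): first derivatives of the unit vector field.
    dxx, dxy = convolve(nx, kx), convolve(nx, ky)
    dyx, dyy = convolve(ny, kx), convolve(ny, ky)

    # Eq. (4): sum of squared derivatives = discontinuity magnitude.
    return dxx ** 2 + dxy ** 2 + dyx ** 2 + dyy ** 2
```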

    B. Multi-Scale Approach

    It is essential to employ proper first-derivative operators, as they determine the size of features that the method can detect. This paper mainly focuses on the detection and extraction of the vascular net (the network of blood vessels). The width of a vessel varies widely as it travels radially from the optic disk¹. With the aim of detecting various sizes of features, we apply GOA at three different scales. The Sobel operator is first used as k_x and k_y to detect very fine features in an original image (Fig. 3(a)). To detect larger features, we modify the Sobel operator as

    k'_x = \begin{bmatrix} 1 & 0 & 0 & 0 & -1 \\ 0 & 0 & 0 & 0 & 0 \\ 2 & 0 & 0 & 0 & -2 \\ 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & -1 \end{bmatrix}, \qquad k'_y = \begin{bmatrix} 1 & 0 & 2 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ -1 & 0 & -2 & 0 & -1 \end{bmatrix}

    and apply these to the original image (Fig. 3(b)) and also to a quarter-sized sub-image (Fig. 3(c)). Denoting the discontinuity magnitude of gradient orientation at each scale by D_1(x, y), D_2(x, y), and D_3(x, y), respectively, we define the response of GOA, D^2_GOA(x, y), by integrating the three maps, where D_3(x, y) is first resized to the original image size by up-sampling (Fig. 3(d)).

    1 The optic disk is the entrance of blood vessels and optic nerves from the brain to the retina.
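    To make the multi-scale step explicit, the sketch below builds the 3x3 Sobel kernels and their 5x5 dilated ("modified Sobel") counterparts and combines the three D^2 maps. It reuses the goa_response sketch above. The pixel-wise sum used to integrate the three maps and the use of scipy.ndimage.zoom for resizing are assumptions; the paper only states that the maps are integrated and that D_3 is up-sampled.

```python
import numpy as np
from scipy.ndimage import zoom

# 3x3 Sobel kernels and their 5x5 dilated counterparts (zeros interleaved).
SOBEL_X = np.array([[1, 0, -1],
                    [2, 0, -2],
                    [1, 0, -1]], dtype=float)
SOBEL_Y = SOBEL_X.T

SOBEL_X5 = np.zeros((5, 5))
SOBEL_X5[::2, ::2] = SOBEL_X          # modified Sobel k'_x
SOBEL_Y5 = SOBEL_X5.T                 # modified Sobel k'_y

def goa_multiscale(image):
    """Integrate GOA responses from three scales into one map (assumed sum)."""
    d1 = goa_response(image, SOBEL_X, SOBEL_Y)      # fine features, original image
    d2 = goa_response(image, SOBEL_X5, SOBEL_Y5)    # larger features, original image
    small = zoom(image, 0.5)                        # quarter-sized sub-image
    d3 = goa_response(small, SOBEL_X5, SOBEL_Y5)
    d3 = zoom(d3, (image.shape[0] / d3.shape[0],    # up-sample back to full size
                   image.shape[1] / d3.shape[1]))
    return d1 + d2 + d3
```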


    Figure 2 D^2 maps at three scales for an image from the STARE database. (a) D_1^2 map obtained by using the Sobel filter as a derivative operator in an original image. (b) D_2^2 map obtained by using a modified Sobel filter in an original image. (c) D_3^2 map obtained by using a modified Sobel filter in a quarter-sized subimage. (d) D^2_GOA map obtained by integrating the three D^2 maps.

    C. Extraction of Blood Vessels

    It is important to note that GOA responds to valleys and ridge-like structures, as they are discontinuous in gradient orientation. To extract blood vessels (i.e., valleys), the GOA responses corresponding to valley structures are estimated, yielding a valley map D^2_valley(x, y).

    We then apply thresholding to D^2_valley(x, y) to obtain a binary map, in which the threshold value is selected such that the largest values of D^2_valley(x, y) are extracted. In our case, a threshold value of 0.21 was used for the DRIVE database and 0.09 for the STARE database.

    Subsequently, we apply two simple mathematical morphology operations. A closing operation is first applied, which performs dilation followed by erosion. Next, a filling operation is applied to fill isolated interior pixels (0's surrounded by 1's). The structuring element used for the morphological operations above is a 3x3 matrix containing only 1's.
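    A minimal sketch of this thresholding and morphological post-processing is given below, assuming the valley response map has been normalized to the same scale as the thresholds reported in the paper; the use of scipy.ndimage for the closing and filling operations is our own choice.

```python
import numpy as np
from scipy.ndimage import binary_closing, binary_fill_holes

def extract_vessels(d_valley, threshold=0.21):
    """Binary vessel map from the valley response (0.21 for DRIVE, 0.09 for STARE)."""
    binary = d_valley > threshold
    selem = np.ones((3, 3), dtype=bool)                 # 3x3 structuring element of 1's
    closed = binary_closing(binary, structure=selem)    # dilation followed by erosion
    return binary_fill_holes(closed, structure=selem)   # fill isolated interior 0's
```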


    III. RESULTS AND DISCUSSIONS

    The performance of the proposed method is evaluated using two publicly available databases of retinal images and manual segmentations: DRIVE and STARE. We have tested the method on the twenty retinal images from the STARE database of size 605x700 pixels with 24 bits per pixel (standard RGB) used in [7]. Ten of the images are of patients with no pathology (normals), while the other ten images contain pathology (abnormals). There are 40 color images in the DRIVE database [9]. The size of each image is 565x584 pixels. The set of 40 images is divided into a training set and a test set, each consisting of 20 images. A single manual segmentation is available for the training set, and two manual segmentations are available for the test set. All human observers were instructed and trained by an ophthalmologist in order to provide a manual segmentation.

    Figure 3 D^2 maps at three scales for an image from the DRIVE database. (a) D_1^2 map obtained by using the Sobel filter as a derivative operator in an original image. (b) D_2^2 map obtained by using a modified Sobel filter in an original image. (c) D_3^2 map obtained by using a modified Sobel filter in a quarter-sized subimage. (d) D^2_GOA map obtained by integrating the three D^2 maps.

    Since the green band of the RGB data contains the most useful information, we apply GOA only to the green band (see Fig. 4 for two examples). The responses of GOA, D^2_GOA(x, y), are scaled in 256 gray levels, where lower gray levels indicate greater GOA responses. It is apparent that the method detects features regardless of the low contrast and the intensity variations across the images. Fig. 5 shows D^2_valley(x, y) maps, also scaled in 256 gray levels, which illustrate features with valley structures, i.e., the vascular net.

    Since D^2_GOA(x, y) and D^2_valley(x, y) serve as robust feature maps, they can be used not only for diagnosis but also for applications such as image registration and personal identification. Blood vessels are then extracted from D^2_valley(x, y) by manual thresholding followed by the morphological operations described above (Fig. 6). Based on the ground truth data in [7], the true positive rate (sensitivity), that is, correctly detected blood vessels over true blood vessels, is 87.74%, and the specificity obtained is 82.67%, on average, for the STARE database, evaluated by optimizing the threshold values on the binary images, as shown in Table 1. The results are comparable with or slightly better than those previously published in [4], [6], [7] and [8].

    Finally, we used three performance measures to evaluate the algorithm. The first is the receiver operating characteristic (ROC) curve. An ROC space is defined by the false positive rate (FPR) and the true positive rate (TPR) as the x and y axes, respectively, and depicts the relative trade-off between true positives (benefits) and false positives (costs). Since TPR is equivalent to sensitivity (SN) and FPR is equal to (1 - specificity), the ROC graph is sometimes called the sensitivity vs. (1 - specificity) plot. SN and SP are obtained as follows:


    Both measures are evaluated using four metric values: true positive (TP), the number of pixels marked as vessel in both the result

    Figure 4 Left: An example image from the DRIVE database containing pathology. Right: An example image from the STARE database without pathology.

    Figure 5 D^2_valley(x, y) maps that present features with valley structures in the normal retinal image from the STARE database (bottom) and the abnormal retinal image from the DRIVE database (top).

    and the ground truth image; false positive (FP), the number of pixels marked as vessel in the result image but not in the ground truth image; false negative (FN), the number of pixels marked as background in the result image but as vessel in the ground truth image; and true negative (TN), the number of pixels marked as background in both the result and the ground truth images. The sensitivity and the specificity are computed as shown in equations (6) and (7), respectively. We create the ROC curve by varying the threshold on the soft classification image, as shown in Figs. 8 and 9. It is observed that the ROC shows better performance for the proposed GOA method as compared to our previous result using a combination of the bottom-hat transform and matched filters in [10].

    The second is the area under the ROC curve (Az): the larger the area under the curve, the greater the discriminatory ability of the segmentation method. The third measure is the maximum average accuracy (MAA). The accuracy of an image is calculated by taking the sum of the TP and TN divided by the sum of the total number of vessel pixels (P) and the total number of non-vessel pixels (N), as illustrated in equation (8). In our experiments, we used the manual segmentation by the first observer of the DRIVE database and the segmentation provided by Hoover for the STARE database as the gold standard for calculating all three measures (ROC, area under ROC, and MAA); only pixels inside the field of view (FOV) are taken into account.
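    For reference, the sketch below computes the sensitivity, specificity and accuracy of equations (6)-(8) inside the field of view; the array names and the FOV-mask handling are illustrative, not taken from the paper.

```python
import numpy as np

def evaluate(binary_map, ground_truth, fov_mask):
    """Sensitivity (eq. 6), specificity (eq. 7) and accuracy (eq. 8) inside the FOV."""
    pred = binary_map[fov_mask].astype(bool)
    truth = ground_truth[fov_mask].astype(bool)
    tp = np.sum(pred & truth)                    # vessel in both result and ground truth
    fp = np.sum(pred & ~truth)                   # vessel in result only
    fn = np.sum(~pred & truth)                   # background in result, vessel in truth
    tn = np.sum(~pred & ~truth)                  # background in both
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)   # (TN + TP) / (P + N)
    return sensitivity, specificity, accuracy
```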


    Figure 6 Blood vessel maps showing extracted blood vessels in the normal retinal image (bottom) and the abnormal retinal image (top).

    Figure 7 Ground truth images of the normal retinal image from the STARE database (bottom) and the abnormal retinal image from the DRIVE database (top).

                                   True blood vessels     True background
    Extracted as blood vessels     True positive (TP)     False positive (FP)
    Extracted as background        False negative (FN)    True negative (TN)

    \text{Sensitivity} = \text{True positive rate (TPR)} = \frac{TP}{TP + FN} \qquad (6)

    \text{Specificity} = \text{True negative rate (TNR)} = \frac{TN}{TN + FP} \qquad (7)

    \text{Accuracy} = \frac{TN + TP}{P + N} \qquad (8)

    Table 1: Accuracy of extracting blood vessels.

    True Positive Rate (Sensitivity)     87.74%
    True Negative Rate (Specificity)     82.67%

    IV. CONCLUSION AND FUTURE WORK


    This paper presents a robust and efficient method for detecting blood vessels in retinal images despite the inherent problems of the images, such as low contrast and intensity variations. The method, solely based on the analysis of gradient orientation, is not directly affected by image intensity. Therefore, no preprocessing for image enhancement and illumination equalization is required. Since features are detected by finding high discontinuities in gradient orientation, the method works as a robust crease-edge detector, which is well suited for detecting linear and circular structures. No complicated operations, such as neural network based techniques [4] and locally adaptive methods [6], are employed. In addition, unlike matched-filter based methods [7], numerous large convolution masks are not necessary. Coupled with a multi-scale approach, features with various sizes and orientations can be detected by the Sobel filter (3x3) and its extension (5x5), which makes the method computationally highly efficient.

    Figure 8 The proposed method ROC curve for the STARE database

    As future work, we plan to detect patches or blobs in abnormal images and make the fixed parameters used in the method adaptive to improve performance. We will then work on further analysis of the extracted blood vessel network to detect abnormalities in infant retinal images using a larger database.

    ACKNOWLEDGMENT

    This research is financially supported by the Thailand Advanced Institute of Science and Technology (TAIST), the National Science and Technology Development Agency (NSTDA), Tokyo Institute of Technology, and Sirindhorn International Institute of Technology (SIIT), Thammasat University (TU).

    REFERENCES

    [1] T. Walter, J.C. Klein, P. Massin and A. Erginay, "A contribution of image processing to the diagnosis of diabetic retinopathy - detection of exudates in color fundus images of the human retina", IEEE Trans. Med. Imag., vol. 21, no. 10, pp. 1236-1243, 2002.

    Figure 9 The proposed method ROC curve for the DRIVE database

    [2] H. Li and O. Chutatape, "Automated feature extraction in color retinal images by a model based approach", IEEE Trans. Biomed. Eng., vol. 51, no. 2, pp. 246-254, 2004.

    [3] A.J. Frame, P.E. Undrill, M.J. Cree, J.A. Olson, K.C. McHardy, P.F. Sharp and J.V. Forrester, "A comparison of computer based classification methods applied to the detection of microaneurysms in ophthalmic fluorescein angiograms", Computers in Biology and Medicine, 28, pp. 225-238, 1998.

    [4] C. Sinthanayothin, J.F. Boyce, H.L. Cook and T.H. Williamson, "Automated localisation of the optic disc, fovea, and retinal blood vessels from digital colour fundus images", Br. J. Ophthalmol., 83, pp. 902-910, 1999.

    [5] T. Lin and Y. Zheng, "Adaptive image enhancement for retinal blood vessel segmentation", Electronics Letters, vol. 38, no. 19, pp. 1090-1091, 2002.

    [6] X. Jiang and D. Mojon, "Adaptive local thresholding by verification-based multithreshold probing with application to vessel detection in retinal images", IEEE Trans. Pattern Anal. Machine Intell., vol. 25, no. 1, pp. 131-137, 2003.

    [7] A. Hoover, V. Kouznetsova and M. Goldbaum, "Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response", IEEE Trans. Med. Imag., vol. 19, no. 3, pp. 203-210, 2000.

    [8] T. Kondo, "Detection of anatomical features in retinal images using gradient orientation", IEEE TENCON 2004, Chiang Mai, Thailand, November 2004.

    [9] M. Niemeijer and B. van Ginneken, 2002 [Online]. Available: http://www.isi.uu.nl/Research/Databases/DRIVE/

    [10] D. Onkaew, R. Turior, B. Uyyanonvara, N. Akinori and C. Sinthanayothin, "Automatic Vessel Extraction with Combined Bottom-hat and Matched-filter", In Proceedings of the International Conference on Information and Communication Technology for Embedded Systems, Pattaya, Thailand, 2011.
