

This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination.

IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B: CYBERNETICS 1

Improving Iris Recognition Performance Using Segmentation, Quality Enhancement, Match Score Fusion, and Indexing

Mayank Vatsa, Student Member, IEEE, Richa Singh, Student Member, IEEE, and Afzel Noore, Member, IEEE

Abstract—This paper proposes algorithms for iris segmentation, quality enhancement, match score fusion, and indexing to improve both the accuracy and the speed of iris recognition. A curve evolution approach is proposed to effectively segment a nonideal iris image using the modified Mumford–Shah functional. Different enhancement algorithms are concurrently applied on the segmented iris image to produce multiple enhanced versions of the iris image. A support-vector-machine-based learning algorithm selects locally enhanced regions from each globally enhanced image and combines these good-quality regions to create a single high-quality iris image. Two distinct features are extracted from the high-quality iris image. The global textural feature is extracted using the 1-D log polar Gabor transform, and the local topological feature is extracted using Euler numbers. An intelligent fusion algorithm combines the textural and topological matching scores to further improve the iris recognition performance and reduce the false rejection rate, whereas an indexing algorithm enables fast and accurate iris identification. The verification and identification performance of the proposed algorithms is validated and compared with other algorithms using the CASIA Version 3, ICE 2005, and UBIRIS iris databases.

Index Terms—Information fusion, iris indexing, iris recognition, Mumford–Shah curve evolution, quality enhancement, support vector machine (SVM).

I. INTRODUCTION

CURRENT iris recognition systems claim to perform with very high accuracy. However, these iris images are captured in a controlled environment to ensure high quality. Daugman [1]–[4] proposed an iris recognition system representing an iris as a mathematical function. Wildes [5], Boles and Boashash [6], and several other researchers proposed different recognition algorithms [7]–[32]. With a sophisticated iris capture setup, users are required to look into the camera from a fixed distance, and the image is captured. Iris images captured in an uncontrolled environment produce nonideal iris images with varying image quality. If the eyes are not properly opened, certain regions of the iris cannot be captured due to occlusion, which further affects the process of segmentation and, consequently, the recognition performance. Images may also suffer from motion blur, camera diffusion, presence of eyelids and eyelashes, head rotation, gaze direction, camera angle, reflections, contrast, luminosity, and problems due to contraction and dilation. Fig. 1 from the UBIRIS database [26], [27] shows images with some of the aforementioned problems. These artifacts in iris images increase the false rejection rate (FRR), thus decreasing the performance of the recognition system. Experimental results from the Iris Challenge Evaluation (ICE) 2005 and ICE 2006 [30], [31] also show that most of the recognition algorithms have a high FRR. Table I compares existing iris recognition algorithms with respect to image quality, segmentation, enhancement, feature extraction, and matching techniques. A detailed literature survey of iris recognition algorithms can be found in [28].

Manuscript received November 29, 2006; revised July 19, 2007 and February 25, 2008. This paper was recommended by Associate Editor S. Sarkar.

The authors are with the Lane Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, WV 26506-6109 USA (e-mail: [email protected]; [email protected]; [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TSMCB.2008.922059

This research effort focuses on reducing the false rejection by accurate iris detection, quality enhancement, fusion of textural and topological iris features, and iris indexing. For iris detection, some researchers assume that the iris is circular or elliptical. In nonideal images such as off-angle iris images, motion blur, and noisy images, this assumption is not valid because the iris appears to be noncircular and nonelliptical. In this paper, we propose a two-level hierarchical iris segmentation algorithm to accurately and efficiently detect iris boundaries from nonideal iris images. The first level of the iris segmentation algorithm uses intensity thresholding to detect an approximate elliptical boundary, and the second level applies the Mumford–Shah functional to obtain the accurate iris boundary.

We next describe a support vector machine (SVM) based iris quality enhancement algorithm [29]. The SVM quality enhancement algorithm identifies good-quality regions from different globally enhanced iris images and combines them to generate a single high-quality feature-rich iris image. Textural and topological features [17], [18] are then extracted from the quality-enhanced image for matching. Most of the iris recognition algorithms extract features that provide only global information or local information of iris patterns. In this paper, the feature extraction algorithm extracts global textural features and local topological features. The textural features are extracted using the 1-D log polar Gabor transform, which is invariant to rotation and translation, and the topological features are extracted using the Euler number technique, which is invariant under translation, rotation, scaling, and polar transformation.

1083-4419/$25.00 © 2008 IEEE


Fig. 1. Iris images representing the challenges of iris recognition. (a) Iris texture occluded by eyelids and eyelashes. (b) Iris images of an individual with a different gaze direction. (c) Iris images of an individual showing the effects of contraction and dilation. (d) Iris images of the same individual at different instances: the first image is of good quality; the second image has motion blurriness, and limited information is present. (e) Images of an individual showing the effect of the natural luminosity factor [26].

The state-of-the-art iris recognition algorithms have a very low false acceptance rate, but reducing the number of false rejections is still a major challenge. In multibiometric literature [33]–[36], it has been suggested that fusion of information extracted from different classifiers provides better performance compared to single classifiers. In this paper, we propose using 2ν-SVM to develop a fusion algorithm that combines the match scores obtained by matching textural and topological features for improved performance. The performance of verification and identification suffers due to nonideal acquisition issues. However, identification is more difficult compared to verification because of the problem of a high penetration rate and a false accept rate (FAR). To improve the identification performance, we propose an iris indexing algorithm. In the proposed indexing algorithm, the Euler code is first used to filter possible matches. This subset is further processed using the textural features and 2ν-SVM fusion for accurate identification.

Section II presents the proposed nonideal iris segmentation algorithm, and Section III describes the novel quality enhancement algorithm. Section IV briefly explains the extraction of global features using the 1-D log polar Gabor transform and the extraction of local features using the Euler number. Section V describes the intelligent match score fusion algorithm, and Section VI presents the indexing algorithm to reduce the average identification time. The details of the iris databases and existing algorithms that are used for the validation of the proposed algorithms are presented in Section VII. Sections VIII and IX compare the verification and identification performance of the proposed algorithms with existing recognition and fusion algorithms.

II. NONIDEAL IRIS SEGMENTATION ALGORITHM

Processing nonideal iris images is a challenging task because the iris and the pupil are noncircular, and the shape varies depending on how the image is captured. The first step in iris segmentation is the detection of pupil and iris boundaries from the input eye image and unwrapping the extracted iris into a rectangular form. Researchers have proposed different algorithms for iris detection. Daugman [1] applied an integrodifferential operator to detect the boundaries of the iris and the pupil. The segmented iris is then converted into a rectangular form by applying polar transformation. Wildes [5] used the first derivative of image intensity to find the location of edges corresponding to the iris boundaries. This system explicitly models the upper and lower eyelids with parabolic arcs, whereas Daugman excludes the upper and lower portions of the image. Boles and Boashash [6] localized and normalized the iris by using edge detection and other computer vision algorithms. Ma et al. [12], [13] used the Hough transform to detect the iris and pupil boundaries. Normally, the pupil has a dark color, and the iris has a light color with varying pigmentation. In certain nonideal conditions, the iris can be dark, and the pupil can appear illuminated. For example, because of the specular reflections from the cornea or coaxial illumination directly into the eye, light is reflected into the retina and back through the pupil, which makes the pupil appear bright. Also, the boundary of the nonideal iris image is irregular and cannot be considered exactly circular or elliptical. For such nonideal and irregular iris images, researchers have recently proposed segmentation algorithms that combine conventional intensity techniques with active contours for pupil and iris boundary detection [32], [37]–[39]. These algorithms use intensity-based techniques for center and pupil boundary detection. The pupil boundary is


TABLE I
COMPARISON OF EXISTING IRIS RECOGNITION ALGORITHMS

used to initialize the active contour, which evolves to find the outer boundary of the iris. This method of evolution from the pupil to the outer iris boundary is computationally expensive. We, therefore, propose a two-stage iris segmentation algorithm in which we first estimate the inner and outer boundaries of the iris using an elliptical model. In the second stage, we apply the modified Mumford–Shah functional [40] in a narrow band over the estimated boundaries to compute the exact inner and outer boundaries of the iris.

To identify the approximate boundary of the pupil in nonideal eye images, an elliptical region with major axis a = 1, minor axis b = 1, and center (x, y) is selected as the center of the eye, and the intensity values are computed for a fixed number of points on the circumference. The parameters of the ellipse (a, b, x, y, θ) are iteratively varied with a step size of two pixels to increase the size of the ellipse, and, every time, a fixed number of points are randomly chosen on the circumference (in the experiments, it is set to be 40 points) to calculate the total intensity value. This process is repeated to find the boundary with maximum variation in intensity and the center of the pupil. The approximate outer boundary of the iris is also detected in a similar manner. The parameters for the outer boundary a1, b1, x1, y1, and θ1 are varied by setting the initial parameters to the pupil boundary parameters. A fixed number of points (in the experiments, it is set to be 120 points) are chosen on the circumference, and the sum of the intensity values is computed. Values corresponding to the maximum intensity change give the outer boundary of the iris, and the center of this ellipse gives the center of the iris. This method, thus, provides approximate iris and pupil boundaries, corresponding centers, and major and minor axes. Some researchers assume the center of the pupil to be the center of the iris and compute the outer boundary. Although this helps to simplify the modeling, in reality, this assumption is not valid for the nonideal iris. Computing the outer boundary using the proposed algorithm provides accurate segmentation even when the pupil and the iris are not concentric. Using these approximate inner and outer boundaries, we now perform the curve evolution with the modified Mumford–Shah functional [40], [41] for iris segmentation.
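The boundary search described above can be sketched as follows. This is an illustrative simplification, not the authors' implementation: it grows a circle (a = b, θ = 0) from an assumed center and returns the axis length at which the summed circumference intensity changes the most, whereas the paper varies all five ellipse parameters.

```python
import numpy as np

def ellipse_points(a, b, xc, yc, theta, n):
    """Sample n random points on the circumference of an ellipse."""
    t = np.random.uniform(0.0, 2 * np.pi, n)
    x = xc + a * np.cos(t) * np.cos(theta) - b * np.sin(t) * np.sin(theta)
    y = yc + a * np.cos(t) * np.sin(theta) + b * np.sin(t) * np.cos(theta)
    return np.rint(x).astype(int), np.rint(y).astype(int)

def approximate_boundary(image, xc, yc, max_axis, n_points=40, step=2):
    """Grow a circle (a = b, theta = 0) from the assumed center and return
    the axis length with the largest change in summed circumference
    intensity -- a simplified stand-in for the five-parameter search."""
    h, w = image.shape
    axes = list(range(1, max_axis, step))
    sums = []
    for a in axes:
        x, y = ellipse_points(a, a, xc, yc, 0.0, n_points)
        x, y = np.clip(x, 0, w - 1), np.clip(y, 0, h - 1)
        sums.append(image[y, x].astype(float).sum())
    changes = np.abs(np.diff(sums))
    return axes[int(np.argmax(changes)) + 1]
```

For a dark pupil on a bright background, the intensity sum jumps sharply once the growing circle crosses the pupil boundary, which is what the argmax picks up.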

In the proposed curve evolution method for iris segmentation, the model begins with the following energy functional:

Energy(c) = α ∫Ω ‖∂C/∂c‖ φ dc + β ∫∫in(C) |I(x, y) − c1|² dx dy + λ ∫∫out(C) |I(x, y) − c2|² dx dy    (1)

where C is the evolution curve such that C = {(x, y) : ψ(x, y) = 0}, c is the curve parameter, φ is the weighting function or the stopping term, Ω represents the image domain, I(x, y) is the original iris image, c1 and c2 are the average values of pixels inside and outside C, respectively, and α, β, and λ are positive constants such that α < β ≤ λ. Parameterizing (1) and deducing the associated Euler–Lagrange equation leads to the following active contour model:

ψt = αφ(ν + εκ)|∇ψ| + ∇φ · ∇ψ + βδψ(I − c1)² + λδψ(I − c2)²    (2)

where ν is the advection term, εκ is the curvature-based smoothing term, ∇ is the gradient operator, and δψ = 0.5/(π(ψ² + 0.25)). The stopping term φ is defined as follows:

φ = 1 / (1 + (|∇I|)²).    (3)
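The stopping term in (3) is simple to compute with finite differences; a minimal sketch (our illustration, not the paper's code) — the function is close to 1 in flat regions and near 0 at strong edges, which is what halts the evolving curve:

```python
import numpy as np

def stopping_term(image):
    """Edge-stopping function phi = 1 / (1 + |grad I|^2) from (3)."""
    gy, gx = np.gradient(image.astype(float))   # finite-difference gradient
    grad_mag_sq = gx ** 2 + gy ** 2
    return 1.0 / (1.0 + grad_mag_sq)
```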

The active contour ψ is initialized to the approximate pupil boundary, and the exact pupil boundary is computed by evolving the contour in a narrow band [42] of ±5 pixels. Similarly, for computing the exact outer iris boundary, the approximate iris boundary is used as the initial contour ψ, and the curve is evolved in a narrow band [42] of ±10 pixels. Using the stopping term φ, the curve evolution stops at the exact outer iris boundary. Since we are using the approximate iris boundaries as the initial ψ, the complexity of curve evolution is reduced and is suitable for real-time applications. Fig. 2 shows the pupil and iris boundaries extracted using the proposed nonideal iris segmentation algorithm.

Fig. 2. Iris detection using the proposed nonideal iris segmentation algorithm. (a) Original image. (b) Pupil boundary. (c) Final iris and pupil boundary.

In nonideal cases, eyelids and eyelashes may be present as noise and decrease the recognition performance. Using the technique described in [1], eyelids are isolated by fitting lines to the upper and lower eyelids. A mask based on the detected eyelids and eyelashes is then used to extract the iris without noise. Image processing of the iris is computationally intensive, as the area of interest is of donut shape, and grabbing the pixels in this region requires repeated rectangular to polar conversion. To simplify this, the detected iris is unwrapped into a rectangular region by converting into polar coordinates. Let I(x, y) be the segmented iris image and I(r, θ) be the polar representation obtained using

r = √((x − xc)² + (y − yc)²),  0 ≤ r ≤ rmax    (4)

θ = tan⁻¹((y − yc)/(x − xc)).    (5)

r and θ are defined with respect to the center coordinates (xc, yc). The center coordinates obtained during approximate elliptical iris boundary fitting are used as the center point for Cartesian to polar transformation. The transformed polar iris image is further used for enhancement, feature extraction, and matching.
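The Cartesian-to-polar unwrapping in (4) and (5) can be sketched as follows. The sampling resolution (`n_radii`, `n_angles`) and nearest-neighbor sampling are our assumptions; the paper does not specify the size of the unwrapped image or the interpolation scheme.

```python
import numpy as np

def unwrap_iris(image, xc, yc, r_min, r_max, n_radii=64, n_angles=360):
    """Unwrap the annular iris region into a rectangular (r, theta) image
    by sampling along rays from the center (xc, yc), per (4) and (5)."""
    h, w = image.shape
    radii = np.linspace(r_min, r_max, n_radii)
    angles = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
    polar = np.zeros((n_radii, n_angles), dtype=image.dtype)
    for i, r in enumerate(radii):
        # nearest-neighbor sample of the pixel at radius r, angle theta
        x = np.clip(np.rint(xc + r * np.cos(angles)).astype(int), 0, w - 1)
        y = np.clip(np.rint(yc + r * np.sin(angles)).astype(int), 0, h - 1)
        polar[i] = image[y, x]
    return polar
```

Each row of the output then corresponds to one circular ring of the iris, which is exactly the form the 1-D feature extraction in Section IV operates on.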

III. GENERATION OF A SINGLE HIGH-QUALITY IRIS IMAGE USING ν-SVM

For iris image enhancement, researchers consecutively apply selected enhancement algorithms, such as deblurring, denoising, entropy correction, and background subtraction, and use the final enhanced image for further processing. Huang et al. [43] used superresolution and a Markov network for iris image quality enhancement; however, their method does not perform well with unregistered iris images. Ma et al. [12] proposed background-subtraction-based iris enhancement that filters the high-frequency noise. Poursaberi and Araabi [25] proposed the use of the low-pass Wiener 2-D filter for iris image enhancement. However, these filtering techniques are not effective in mitigating the effects of blur, out of focus, and entropy-based irregularities. Another challenge with existing enhancement techniques is that they enhance the low-quality regions that are present in the image, but are likely to deteriorate the good-quality regions and alter the features of the iris image. A nonideal iris image containing multiple irregularities may require the application of specific algorithms to local regions that need enhancement. However, identifying and isolating these local regions in an iris image can be tedious, time consuming, and not pragmatic. In this paper, we address the problem by concurrently applying a set of selected enhancement algorithms globally to the original iris image [29]. Thus, each resulting image contains enhanced local regions. These enhanced local regions are identified from each of the transformed images using an SVM-based [44] learning algorithm and are then synergistically combined to generate a single high-quality iris image.

Let I be the original iris image. For every iris image in the training database, a set of transformed images is generated by applying standard enhancement algorithms for noise removal, defocus, motion blur removal, histogram equalization, entropy equalization, homomorphic filtering, and background subtraction. The set of enhancement functions is expressed as follows:

I1 = fnoise(I)
I2 = fblur(I)
I3 = ffocus(I)
I4 = fhistogram(I)
I5 = fentropy(I)
I6 = ffilter(I)
I7 = fbackground(I)    (6)

where fnoise is the algorithm for noise removal, fblur is the algorithm for blur removal, ffocus is the algorithm for adjusting the focus of the image, fhistogram is the histogram equalization function, fentropy is the entropy filter, ffilter is the homomorphic filter for contrast enhancement, and fbackground is the background subtraction process. I1, I2, I3, I4, I5, I6, and I7 are the resulting globally enhanced images that are obtained when the above enhancement operations are applied to the original iris image I. Applying several global enhancement algorithms does not uniformly enhance all the regions of the iris image. A learning algorithm is proposed to train and classify the pixel quality from corresponding locations of the globally enhanced iris images. This knowledge is used by the algorithm to identify the good-quality regions from each of the transformed and original iris images, which are combined to form a single high-quality iris image. The learning algorithm uses ν-SVM [45], which is expressed as follows:

f(x) = sgn( Σ_{i=1..m} αi yi k(x, xi) + b ),  Σ_{i=1..m} αi yi = 0,  Σ_{i=1..m} αi ≥ ν    (7)

where ν ∈ [0, 1], xi is the input to ν-SVM, yi is the corresponding label, m is the number of tuples, αi is the dual variable, and k is the RBF kernel. Furthermore, a fast implementation of ν-SVM [46] is used to decrease the time complexity.
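The decision function in (7) can be illustrated directly. In the sketch below, the support vectors, dual variables αi, labels yi, and bias b are assumed to come from an already trained ν-SVM (training itself is not shown, and the specific values in the usage example are contrived for illustration):

```python
import numpy as np

def rbf_kernel(x, xi, gamma=1.0):
    """RBF kernel k(x, xi) = exp(-gamma * ||x - xi||^2)."""
    return np.exp(-gamma * np.sum((x - xi) ** 2))

def svm_decide(x, support_vectors, alphas, labels, b, gamma=1.0):
    """Decision function of (7): f(x) = sgn(sum_i alpha_i y_i k(x, x_i) + b)."""
    s = sum(a * y * rbf_kernel(x, xi, gamma)
            for a, y, xi in zip(alphas, labels, support_vectors))
    return 1 if s + b >= 0 else -1
```

With two support vectors at 0 and 4 carrying opposite labels, a query near 0 is dominated by the first kernel term and classified +1, and a query near 4 is classified −1.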

Training involves classifying the local regions of the input and globally enhanced iris images as good or bad. Any quality assessment algorithm can be used for this task. However, in this paper, we have used the redundant discrete wavelet transformation-based quality assessment algorithm described in [47]. To minimize the possibility of errors due to the quality assessment algorithm, we also manually verify the labels and correct them in case of errors. The labeled training data are then used to train the ν-SVM. The training algorithm is described as follows.

• The training iris images are decomposed to l levels using the discrete wavelet transform (DWT). The 3l detail subbands of each image contain the edge features, and, thus, these bands are used for training.

• The subbands are divided into windows of size 3 × 3, and the activity level of each window is computed.

• The ν-SVM is trained using labeled iris images to determine the quality of every wavelet coefficient. The activity levels computed in the previous step are used as input to the ν-SVM.

• The output of the training algorithm is a ν-SVM with a separating hyperplane. The trained ν-SVM labels a coefficient G or 1 if it is good and B or 0 if the coefficient is bad.
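The first two steps above (DWT decomposition and windowed activity levels) can be sketched as follows. The Haar wavelet and the mean-absolute-coefficient activity measure are our assumptions; the paper specifies neither the wavelet nor the exact activity-level definition.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT: returns the approximation band and the
    three detail subbands (horizontal, vertical, diagonal)."""
    img = img.astype(float)
    p00, p01 = img[0::2, 0::2], img[0::2, 1::2]
    p10, p11 = img[1::2, 0::2], img[1::2, 1::2]
    a = (p00 + p01 + p10 + p11) / 4   # approximation
    h = (p00 + p01 - p10 - p11) / 4   # horizontal detail
    v = (p00 - p01 + p10 - p11) / 4   # vertical detail
    d = (p00 - p01 - p10 + p11) / 4   # diagonal detail
    return a, (h, v, d)

def activity_level(band, i, j):
    """Activity of the 3x3 window centered at (i, j): mean absolute
    coefficient magnitude (one plausible choice of activity measure)."""
    win = band[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
    return float(np.mean(np.abs(win)))
```

The resulting per-window activity values are what would be fed to the ν-SVM as training inputs.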

Next, the trained ν-SVM is used to classify the pixels from the input image and to generate a new feature-rich high-quality iris image. The classification algorithm is described as follows.

• The original iris image and the corresponding globally enhanced iris images that are generated using (6) are decomposed to l DWT levels.

• The ν-SVM classifier is then used to classify the coefficients of the input bands as good or bad. A decision matrix, i.e., Decision, is generated to store the quality of each coefficient in terms of G and B. At any position (i, j), if the SVM output O(i, j) is positive, then that coefficient is labeled as G; otherwise, it is labeled as B, i.e.,

Decision(i, j) = { G if O(i, j) ≥ 0; B if O(i, j) < 0 }.    (8)

• The above operation is performed on all eight images including the original iris image, and a decision matrix corresponding to every image is generated.

• For each of the eight decision matrices, the average of all coefficients with label G is computed, and the coefficients having label B are discarded. In this manner, one fused approximation band and 3l fused detail subbands are generated. Individual processing of every coefficient ensures that the irregularities present locally in the image are removed. Furthermore, the selection of good-quality coefficients and the removal of all bad coefficients address multiple irregularities that are present in one region.

• Inverse DWT is applied on the fused approximation and detail subbands to generate a single feature-rich high-quality iris image.
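The coefficient-fusion step above can be sketched as follows. The fallback to a plain average where no coefficient is labeled G is our assumption, since the paper does not say how an all-B position is handled.

```python
import numpy as np

def fuse_coefficients(bands, decisions):
    """Fuse corresponding subbands from the eight images: at each
    position, average the coefficients labeled G (decision True) and
    discard those labeled B, per (8)."""
    bands = np.stack(bands)        # shape: (n_images, H, W)
    good = np.stack(decisions)     # boolean decision matrices, same shape
    counts = good.sum(axis=0)
    total = np.where(good, bands, 0.0).sum(axis=0)
    # assumed fallback: plain average where no coefficient was labeled G
    fused = np.where(counts > 0,
                     total / np.maximum(counts, 1),
                     bands.mean(axis=0))
    return fused
```

Applying this to the approximation band and each of the 3l detail subbands, then taking the inverse DWT, yields the single fused iris image.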

In this manner, the quality enhancement algorithm enhances the quality of the input iris image, and a feature-rich image is obtained for feature extraction and matching. Fig. 3 shows an example of the original iris image, different globally enhanced images, and the combined image generated using the proposed iris image quality enhancement algorithm.

IV. IRIS TEXTURAL AND TOPOLOGICAL FEATURE EXTRACTION AND MATCHING ALGORITHMS

Researchers have proposed several feature extraction algorithms to extract unique and invariant features from the iris image. These algorithms use either texture- or appearance-based features. The first algorithm, proposed by Daugman [1], used 2-D Gabor filters for feature extraction. Wildes [5] applied isotropic bandpass decomposition derived from the application of Laplacian of Gaussian filters to the iris image. It was followed by several different research papers such as those of Ma et al. [12], [13] in which the multichannel even-symmetric Gabor wavelet and the multichannel spatial filters were used to extract textural information from iris patterns. The usefulness of the iris features depends on the properties of the basis function and the feature encoding process.

In this paper, the iris recognition algorithm uses global andlocal properties of an iris image. A 1-D log polar Gabortransform-based [48] textural feature [17], [18] provides theglobal properties that are invariant to scaling, shift, rotation,illumination, and contrast. Topological features [17], [18] ex-tracted using the Euler number [49] provide local informationof iris patterns and are invariant to rotation, translation, andscaling of the image. Sections IV-1 and 2 briefly describe thetextural and topological feature extraction algorithm.1) Textural Feature Extraction Using the 1-D Log Polar

Gabor Wavelet: The textural feature extraction algorithm [17],[18] uses the 1-D log polar Gabor transform [48]. Like theGabor transform [50], the log polar Gabor transform is alsobased on polar coordinates; however, unlike the frequency


Fig. 3. Original iris image, seven globally enhanced images, and the SVM-enhanced iris image.

dependence on a linear graduation, the dependence is realized by a logarithmic frequency scale [50], [51]. Therefore, the functional form of the 1-D log polar Gabor transform is given by

G_{r_0\theta_0}(\theta) = \exp\left[-2\pi^2\sigma^2\left(\ln\left(\frac{r-r_0}{f}\right)^2 \tau^2 + 2\,\ln\left(f_0\,\sin(\theta-\theta_0)\right)^2\right)\right]   (9)

where (r, θ) are the polar coordinates, r_0 and θ_0 are the initial values, f is the center frequency of the filter, and f_0 is the parameter that controls the bandwidth of the filter. σ and τ are defined as follows:

\sigma = \frac{1}{\pi\,\ln(r_0)\,\sin(\pi/\theta_0)} \sqrt{\frac{\ln 2}{2}}   (10)

\tau = \frac{2\,\ln(r_0)\,\sin(\pi/\theta_0)}{\ln 2} \sqrt{\frac{\ln 2}{2}}.   (11)

The Gabor transform is symmetric with respect to the principal axis. During encoding, the Gabor function overrepresents the low-frequency components and underrepresents the high-frequency components [48], [50], [51]. In contrast, the log polar Gabor transform shows maximum translation from the center of gravity in the direction of lower frequency and a flattening of the high-frequency part. The most important feature of this filter is invariance to rotation and scaling. Log polar Gabor functions also have extended tails and encode natural images more efficiently than Gabor functions.
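To make the filtering step concrete, the following sketch builds a log-Gabor-style frequency response (a Gaussian on a logarithmic frequency axis with no DC response) and applies it row-wise in the frequency domain. The filter form is a standard 1-D log-Gabor used here as a stand-in; the parameters `f0` and `sigma_ratio` are illustrative assumptions, not the exact parametrization of (9):

```python
import numpy as np

def log_gabor_1d(n, f0=0.1, sigma_ratio=0.55):
    """Frequency response of a 1-D log-Gabor filter: a Gaussian on a log
    frequency axis, with no DC component. f0 and sigma_ratio are illustrative."""
    freq = np.fft.fftfreq(n)
    g = np.zeros(n)
    pos = freq > 0                                # zero response at DC and negative bins
    g[pos] = np.exp(-np.log(freq[pos] / f0) ** 2 / (2 * np.log(sigma_ratio) ** 2))
    return g

def encode_rows(polar_iris):
    """Convolve each angular row of the unwrapped iris with the filter,
    performed as a product in the frequency domain."""
    G = log_gabor_1d(polar_iris.shape[1])
    return np.fft.ifft(np.fft.fft(polar_iris, axis=1) * G, axis=1)
```

The zero response at DC reflects the extended-tail, no-DC property that lets log-Gabor functions encode natural images efficiently.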

To generate an iris template with the 1-D log polar Gabor transform, the 2-D unwrapped iris pattern is decomposed into a number of 1-D signals, where each row corresponds to a circular ring on the iris region. For encoding, the angular direction is used rather than the radial direction because maximum independence occurs along this direction. The 1-D signals are convolved with the 1-D log polar Gabor transform in the frequency domain. The values of the convolved iris image are

Fig. 4. Binary iris templates generated using the 1-D log polar Gabor transform. (a), (b) Iris templates of the same individual at two different instances.

complex in nature. Using these real and imaginary values, the phase information is extracted and encoded as a binary pattern. If the convolved iris image is I_g(r, θ), then the phase feature P(r, θ) is computed using

P(r,\theta) = \tan^{-1}\left(\frac{\operatorname{Im}\, I_g(r,\theta)}{\operatorname{Re}\, I_g(r,\theta)}\right)   (12)

I_p(r,\theta) = \begin{cases} [1,1] & \text{if } 0^\circ < P(r,\theta) \le 90^\circ \\ [0,1] & \text{if } 90^\circ < P(r,\theta) \le 180^\circ \\ [0,0] & \text{if } 180^\circ < P(r,\theta) \le 270^\circ \\ [1,0] & \text{if } 270^\circ < P(r,\theta) \le 360^\circ \end{cases}   (13)

Phase features are quantized using the phase quantization process represented in (13), where I_p(r, θ) is the resulting binary iris template of 4096 bits. Fig. 4 shows an iris template generated using this algorithm.
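A minimal sketch of the quadrant encoding of (12) and (13), together with the masked, shift-tolerant Hamming matching of (14) used later in this section. Array sizes, the shift range, and the boundary handling at exact quadrant edges are illustrative assumptions:

```python
import numpy as np

# 2-bit codes per phase quadrant, per (13): (0,90]->[1,1], (90,180]->[0,1],
# (180,270]->[0,0], (270,360]->[1,0] (a Gray code: adjacent quadrants differ by one bit).
CODES = np.array([[1, 1], [0, 1], [0, 0], [1, 0]], dtype=np.uint8)

def encode_phase(conv):
    """Quantize the phase of complex convolved coefficients into a binary template."""
    phase = np.angle(conv) % (2 * np.pi)               # map phase into [0, 2*pi)
    quadrant = np.minimum((phase / (np.pi / 2)).astype(int), 3)
    return CODES[quadrant].ravel()

def hamming_score(code_a, code_b, mask_a, mask_b, max_shift=8, bits_per_step=2):
    """Fractional Hamming distance over unmasked bits, minimized over left/right
    bitwise shifts to tolerate rotation; one shift step = one template column = 2 bits."""
    best = 1.0
    for s in range(-max_shift, max_shift + 1):
        a = np.roll(code_a, s * bits_per_step)
        m = np.roll(mask_a, s * bits_per_step)
        valid = m & mask_b                             # bits usable in both templates
        if valid.sum() == 0:
            continue
        best = min(best, np.count_nonzero((a ^ code_b) & valid) / valid.sum())
    return best
```

Taking the minimum over shifts is what makes the score invariant to small head rotations between enrollment and query.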

To verify a person’s identity, the query iris template is matched with the stored templates. For matching the textural iris templates, we use the Hamming distance [1]. The match score MS_texture for any two texture-based masked iris templates A_i and B_i is computed using the Hamming distance measure given by

MS_{\text{texture}} = \frac{1}{N}\sum_{i=1}^{N} A_i \oplus B_i   (14)

where N is the number of bits in each template, and ⊕ is the XOR operator. To handle rotation, the templates are shifted left and right bitwise, and the match score is calculated for every successive shift [1]. The smallest value is used as the final match score MS_texture. Bitwise shifting in the horizontal direction corresponds to rotating the original iris region by an angle defined by the angular resolution. This also accounts for misalignments in the normalized iris pattern that are caused by rotational differences during imaging.

Fig. 5. Binary images corresponding to the 8-bit planes of the masked polar image.

2) Topological Feature Extraction Using the Euler Number:

Convolution with the 1-D log polar Gabor transform extracts the global textural characteristics of the iris image. To further improve the performance, local features represented by the topology of the iris image are extracted using Euler numbers [18], [49]. For a binary image, the Euler number is defined as the difference between the number of connected components and the number of holes. Euler numbers are invariant to rotation, translation, scaling, and polar transformation of the image [18]. Each pixel of the unwrapped iris can be represented as an 8-bit binary vector b7, b6, b5, b4, b3, b2, b1, b0. These bits form eight planes with binary values. As shown in Fig. 5, the four planes formed from the four most significant bits (MSBs) represent the structural information of the iris, and the remaining four planes represent the brightness information [49]. The brightness information is random in nature and is not useful for comparing the structural topology of two iris images.
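The bit-plane Euler code can be sketched as follows. The classical 2x2 bit-quad counting formula is a standard way to compute the Euler number (connected components minus holes) and is used here as a stand-in for whatever implementation the authors used:

```python
import numpy as np

def euler_number(img):
    """Euler number (connected components minus holes, 4-connectivity) of a
    binary image via the classical 2x2 bit-quad counting formula."""
    p = np.pad(np.asarray(img, dtype=np.int32), 1)
    a, b, c, d = p[:-1, :-1], p[:-1, 1:], p[1:, :-1], p[1:, 1:]
    q = a + b + c + d
    q1 = np.count_nonzero(q == 1)      # quads with exactly one foreground pixel
    q3 = np.count_nonzero(q == 3)      # quads with exactly three foreground pixels
    qd = np.count_nonzero(a * d * (1 - b) * (1 - c) + b * c * (1 - a) * (1 - d))
    return (q1 - q3 + 2 * qd) // 4

def euler_code(polar_img):
    """4-tuple Euler code from the four MSB bit planes of an 8-bit polar iris image."""
    return [euler_number((polar_img >> bit) & 1) for bit in (7, 6, 5, 4)]
```

Only the four MSB planes enter the code, matching the observation that the lower-order planes carry essentially random brightness information.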

For comparing two iris images using the Euler code, a common mask is generated for the two iris images to be matched. The common mask is generated by performing a bitwise OR operation on the individual masks of the two iris images and is then applied to both polar iris images. For each of the two masked iris images, a 4-tuple Euler code is generated, which represents the Euler numbers of the four MSB planes. Table II shows the Euler codes of a person at three different instances.

TABLE II
EULER CODE OF AN INDIVIDUAL AT THREE DIFFERENT INSTANCES

We use the Mahalanobis distance to match two Euler codes. The Mahalanobis distance between two vectors is defined as follows:

D(x, y) = \sqrt{(x - y)^t S^{-1} (x - y)}   (15)

where x and y are the two Euler codes to be matched, and S is the positive-definite covariance matrix of x and y. If the Euler code has a large variance, it increases the false reject rate. The Mahalanobis distance ensures that features with a high variance do not dominate the distance and, thus, avoids an increase in the false reject rate. The topology-based match score is computed as follows:

MS_{\text{topology}} = \frac{D(x, y)}{\log_{10} \max(D)}   (16)

where max(D) is the maximum possible value of the Mahalanobis distance between two Euler codes. The match score for Euler codes is thus the normalized Mahalanobis distance between the two codes.
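The normalized Mahalanobis score of (15) and (16) can be written directly; in a minimal sketch, the covariance matrix S and the normalizer max(D) would in practice be estimated from training Euler codes, so the values passed here are assumptions:

```python
import numpy as np

def topology_score(x, y, S, d_max):
    """Normalized Mahalanobis distance between two 4-tuple Euler codes, per (15)-(16).
    S is the covariance matrix of the codes; d_max stands in for max(D)."""
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    d = np.sqrt(diff @ np.linalg.inv(S) @ diff)
    return d / np.log10(d_max)
```

Dividing by the inverse covariance downweights high-variance Euler components, which is exactly the property used to keep the false reject rate low.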

V. FUSION OF TEXTURAL AND TOPOLOGICAL MATCHING SCORES

Iris recognition algorithms have succeeded in achieving a low false acceptance rate; however, reducing the false rejection rate remains a major challenge. To make iris recognition algorithms more practical and adaptable to diverse applications, the FRR needs to be significantly reduced. In [33], [35], [36], and [52], it has been suggested that the fusion of match scores from two or more classifiers provides better performance than a single classifier. In general, match score fusion is performed using the sum rule, the product rule, or other statistical rules. Recently, in [35], a kernel-based match score fusion algorithm was proposed to fuse the match scores of fingerprints and signatures. In this section, we propose using 2ν-SVM [53] to fuse the information obtained by matching the textural and topological features of the iris image described in Section IV. The proposed fusion algorithm reduces the FRR while maintaining a low false acceptance rate.

Let the training set be Z = (x_i, y_i), where i = 1, . . . , N. N is the number of multimodal scores used for training, and y_i ∈ {1, −1}, where 1 represents the genuine class and −1 represents the impostor class. An SVM is trained using these labeled training data. The mapping function ϕ(·) maps the training data into a higher dimensional feature space such that Z → ϕ(Z). The optimal hyperplane that separates this higher dimensional feature space into two classes can be obtained using 2ν-SVM [53].

We have (x_i, y_i) as the set of N multimodal scores with x_i ∈ ℝ^d. Here, x_i is the ith score, which belongs to the binary class y_i.

Page 8: IEEE TRANSACTIONS ON SYSTEMS, MAN, AND …richa/papers/Iris-SMC.pdf · Improving Iris Recognition Performance Using Segmentation, Quality Enhancement, Match Score Fusion, and Indexing

This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination.

8 IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B: CYBERNETICS

Fig. 6. Steps involved in the proposed 2ν-SVM match score fusion algorithm.

The objective of training the 2ν-SVM is to find the hyperplane that separates the two classes with the widest margin, i.e.,

w\,\varphi(x) + b = 0   (17)

subject to

y_i\,(w\,\varphi(x_i) + b) \ge \rho - \xi_i, \qquad \xi_i \ge 0   (18)

so as to minimize

\frac{1}{2}\|w\|^2 - \sum_i C_i(\nu\rho - \xi_i)   (19)

where ρ is the position of the margin, and ν is the error parameter. ϕ(x) is the mapping function that maps the data space to the feature space and provides generalization for the decision function, which may not be a linear function of the training data. C_i(νρ − ξ_i) is the cost of errors, w is the normal vector, b is the bias, and ξ_i is the slack variable for classification errors. ν can be calculated from ν_+ and ν_−, the error parameters for training the positive and negative classes, respectively, i.e.,

\nu = \frac{2\,\nu_+\,\nu_-}{\nu_+ + \nu_-}, \qquad 0 < \nu_+ < 1 \text{ and } 0 < \nu_- < 1.   (20)

The error penalty C_i is calculated as follows:

C_i = \begin{cases} C_+, & \text{if } y_i = +1 \\ C_-, & \text{if } y_i = -1 \end{cases}   (21)

where

C_+ = \left[ n_+ \left( 1 + \frac{\nu_+}{\nu_-} \right) \right]^{-1}   (22)

C_- = \left[ n_- \left( 1 + \frac{\nu_-}{\nu_+} \right) \right]^{-1}   (23)

and n_+ and n_− are the number of training points for the positive and negative classes, respectively. 2ν-SVM training can be formulated as follows:

\max_{\alpha_i} \left[ -\frac{1}{2} \sum_{i,j} \alpha_i\,\alpha_j\,y_i\,y_j\,K(x_i, x_j) \right]   (24)

where

0 \le \alpha_i \le C_i, \qquad \sum_i \alpha_i y_i = 0, \qquad \sum_i \alpha_i \ge \nu   (25)

i, j ∈ {1, . . . , N}, and the kernel function is given by

K(x_i, x_j) = \varphi(x_i)\,\varphi(x_j).   (26)

The kernel function K(x_i, x_j) is chosen as the radial basis function. The 2ν-SVM is initialized and optimized using iterative decomposition training [53], which leads to reduced complexity.

In the testing phase, the fused score f_t of a multimodal test pattern x_t is defined as follows:

f_t = f(x_t) = w\,\varphi(x_t) + b.   (27)

The solution of this equation is the signed distance of x_t from the separating hyperplane given by the 2ν-SVM. Finally, an accept-or-reject decision is made on the test pattern x_t using a threshold X, i.e.,

\text{Result}(x_t) = \begin{cases} \text{accept}, & \text{if the SVM output} \ge X \\ \text{reject}, & \text{otherwise.} \end{cases}   (28)

Fig. 6 presents the steps involved in the proposed 2ν-SVM learning algorithm, which fuses the textural and topological match scores for improved classification.
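The train-then-threshold pipeline above can be sketched with scikit-learn's standard ν-SVC as a stand-in for the 2ν-SVM: the single `nu` parameter plays the role of the combined error parameter in (20), and asymmetric `class_weight` values can loosely mimic the per-class penalties of (22) and (23). The training score distributions below are purely illustrative assumptions:

```python
import numpy as np
from sklearn.svm import NuSVC

# Toy genuine/impostor training scores: each row is (MS_texture, MS_topology).
rng = np.random.default_rng(1)
genuine = rng.normal(0.15, 0.05, (50, 2))     # genuine pairs: small distances
impostor = rng.normal(0.55, 0.05, (50, 2))    # impostor pairs: large distances
X = np.vstack([genuine, impostor])
y = np.array([1] * 50 + [-1] * 50)

# nu stands in for the combined error parameter of (20); unequal class weights
# would loosely mimic the per-class penalties C+ and C- of (22)-(23).
svm = NuSVC(nu=0.2, kernel="rbf", gamma="scale", class_weight={1: 1.0, -1: 1.0})
svm.fit(X, y)

def fuse_and_decide(ms_texture, ms_topology, threshold=0.0):
    """Signed distance from the separating hyperplane, thresholded as in (27)-(28)."""
    ft = svm.decision_function([[ms_texture, ms_topology]])[0]
    return "accept" if ft >= threshold else "reject"
```

The signed distance `ft` is the fused score of (27); sweeping `threshold` traces out the ROC curves reported later.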

VI. IRIS IDENTIFICATION USING EULER CODE INDEXING

Iris recognition can be used for verification (1 : 1 matching) as well as identification (1 : N matching). Apart from the irregularities due to nonideal acquisition, iris identification suffers from high system penetration and false accept cases. For identification, a probe iris image is matched with all the gallery images, and the best match is the rank-1 match. Due to poor quality and nonideal acquisition, the rank-1 match may not be the correct match, which leads to false acceptance. The computational time for performing iris identification on large databases is another challenge [31]. For example, identifying an individual from a database of 50 million users requires an average of 25 million comparisons. On such databases, applying distance-based iris code matching or the proposed SVM fusion will take a significant amount of time. Parallel processing and improved hardware can reduce the computational time at the expense of operational cost.

Other techniques that can be used to speed up the identification process are classification and indexing. Yu et al. [19] proposed a coarse iris classification technique using fractals that classifies iris images into four categories. The classification technique improves the performance in terms of computational time but compromises the identification accuracy. Mukherjee [54] proposed an iris indexing algorithm in which block-based statistics are used for iris indexing. A single-pixel


Fig. 7. Iris image divided into four parts. Regions A and B are used in the proposed iris indexing algorithm.

difference histogram used in the indexing algorithm yields good performance on a subset of the CASIA Version 3 database. However, the indexing algorithm has not been evaluated on nonideal poor-quality iris images.

In this paper, we propose a feature-based iris indexing algorithm that reduces the computational time required for iris identification without compromising the identification accuracy. The proposed indexing algorithm is a two-step process in which the Euler code is first used to generate a small subset of possible matches. The 2ν-SVM match score fusion algorithm is then used to find the best matches from this list of possible matches. The proposed indexing algorithm is divided into two parts: 1) feature extraction and database enrollment, in which features are extracted from the gallery images and indexed using the Euler code; and 2) probe image identification, in which features are extracted from the probe image and matched.

A. Feature Extraction and Database Enrollment

Compared to the feature extraction and matching algorithm described in Section IV, we use a slightly different strategy for feature extraction. For verification, we use common masks from the gallery and probe images to hide eyelids and eyelashes. However, in indexing, we do not follow the same method because generating a common mask for every pair of probe and gallery images would increase the computational cost. Using the iris center coordinates, x- and y-axes are drawn, and the iris is divided into four regions. Researchers have shown that regions A and B in Fig. 7 contain minimum occlusion due to eyelids and eyelashes and, hence, are the most useful for iris recognition [4], [9], [22]. Therefore, for indexing, we use regions A and B to extract features. The extracted features are stored in the database, and the Euler code is used as the indexing parameter.

B. Probe Image Identification

Similar to the database enrollment process, features are extracted from the probe iris image, and the Euler code is used to find the possible matches. For matching two iris indexing parameters (Euler codes) E_1(i) and E_2(i) (i = 1, 2, 3, 4), we apply a thresholding scheme. The indexing parameters are said to match if |E_1(i) − E_2(i)| ≤ T, where T is the geometric tolerance constant. The indexing score S is computed using

s(i) = \begin{cases} 1 & \text{if } |E_1(i) - E_2(i)| \le T \\ 0 & \text{otherwise} \end{cases}   (29)

S = \frac{1}{4} \sum_{i=1}^{4} s(i)   (30)

where s is the intermediate score vector of length 4 whose entries indicate the matched Euler values. We extend this scheme to iris identification by matching the indexing parameter of the probe image with those of the gallery images. Let n be the total number of gallery images, and let S_n represent the indexing scores corresponding to the n comparisons. The indexing scores S_n

are sorted in descending order, and the top M match scores are selected as possible matches.

For every probe image, the Euler code-based indexing scheme yields a small subset of top M matches from the gallery, where M ≪ n (for instance, M = 20 and n = 2000). To further improve the identification accuracy, we apply the proposed 2ν-SVM match score fusion. We then use the algorithms described in Sections IV and V to match the textural and topological features of the probe image with the top M matched images from the gallery and compute the fused match score for each of the M gallery images. Finally, these M fused match scores are again sorted, and a new ranking is obtained to determine the identity.
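The first stage of the indexing scheme, the Euler-code scoring of (29) and (30) followed by top-M candidate selection, can be sketched as follows; the gallery layout is an illustrative assumption:

```python
import numpy as np

T = 20  # geometric tolerance constant (the paper sets T = 20 from training data)

def indexing_score(e1, e2, tol=T):
    """Fraction of the four Euler-code entries that agree within tol, per (29)-(30)."""
    s = np.abs(np.asarray(e1) - np.asarray(e2)) <= tol
    return s.mean()

def top_m_candidates(probe_code, gallery_codes, m=20):
    """First indexing stage: rank the gallery by indexing score and keep the
    top m candidates for the subsequent fusion-based matching."""
    scores = np.array([indexing_score(probe_code, g) for g in gallery_codes])
    order = np.argsort(-scores, kind="stable")    # descending, stable on ties
    return order[:m], scores[order[:m]]
```

Only the surviving m candidates are then re-ranked with the expensive textural matching and 2ν-SVM fusion, which is where the large speedup over exhaustive 1 : N matching comes from.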

VII. DATABASES AND ALGORITHMS USED FOR PERFORMANCE EVALUATION AND COMPARISON

In this section, we describe the iris databases and algorithms that are used for evaluating the performance of the proposed algorithms.

A. Databases Used for Validation

To evaluate the performance of the proposed algorithms, we selected three iris databases, namely, ICE 2005 [30], [31], CASIA Version 3 [55], and UBIRIS [26], [27]. These databases are chosen for validation because their iris images embody the irregularities captured with different instruments and device characteristics under varying conditions. The databases also contain iris images from different ethnicities and facilitate a comprehensive performance evaluation of the proposed algorithms.

• The ICE 2005 database [30], [31], used in the recent Iris Challenge Evaluation, contains iris images from 244 iris classes. The total number of images present in the database is 2953.

• The CASIA Version 3 database [55] contains 22 051 iris images pertaining to more than 1600 classes. The images have been captured using different imaging setups. The


Fig. 8. Results of the proposed iris segmentation algorithm.

Fig. 9. ROC plot showing the performance of the proposed algorithms on the ICE 2005 database [30].

quality of the images present in the database also varies, from high-quality images with extremely clear iris textural details to images with nonlinear deformation due to variations in visible illumination. Unlike CASIA Version 1, in which artificially manipulated images were present, CASIA Version 3 contains original unmasked images.

• The UBIRIS database [26], [27] is composed of 1877 images from 241 classes captured in two different sessions. The images in the first session are of good quality, whereas the images captured in the second session have irregularities in reflection, contrast, natural luminosity, and focus.

Fig. 10. ROC plot showing the performance of the proposed algorithms on the CASIA Version 3 database [55].

Fig. 11. ROC plot showing the performance of the proposed algorithms on the UBIRIS iris database [26].

B. Existing Algorithms Used for Validation

To evaluate the effect of the proposed quality enhancement algorithm on different feature extraction and matching techniques, we implemented Daugman’s integrodifferential operator and the neural-network-architecture-based 2-D Gabor transform described in [1]–[4]. We also used Masek’s iris recognition algorithm obtained from [11]. Furthermore, the performance of the proposed 2ν-SVM fusion algorithm is compared with the sum rule [33], [34], the min/max rule [33], [34], and the kernel-based fusion rule [35].

VIII. PERFORMANCE EVALUATION AND VALIDATION FOR IRIS VERIFICATION

In this section, we evaluate the performance of the proposed segmentation, enhancement, feature extraction, and fusion algorithms for iris verification. The performance of the proposed algorithms is validated using the databases and algorithms described in Section VII. For validation, we divided the databases into three parts: the training data set, the gallery data


Fig. 12. Sample iris images from the UBIRIS database [26] on which the proposed algorithms fail to perform.

set, and the probe data set. The training data set consists of one manually labeled good-quality image and one bad-quality image per class. This data set is used to train the ν-SVM for quality enhancement and the 2ν-SVM for fusion. After training, the good-quality images in the training data set are used as the gallery data set, and the remaining images are used as the probe data set. The bad-quality images of the training data set are used in neither the gallery nor the probe data set.

For iris segmentation, we performed extensive experiments to compute a common set of curve evolution parameters that can be applied to detect the exact boundaries of the iris and the pupil in all the databases. The values of the parameters for segmentation with narrow-band curve evolution are α = 0.2, β = 0.4, λ = 0.4, advection term ν = 0.72, and curvature term εk = 0.001. These values provide accurate segmentation results for all three databases. Fig. 8 shows sample results demonstrating the effectiveness of the proposed iris segmentation algorithm on all the databases with different characteristics. The inner yellow curve represents the pupil boundary, and the outer red curve represents the iris boundary. Fig. 8 also shows that the proposed segmentation algorithm is not affected by specular reflections present in the pupil region.

Using the proposed iris segmentation and quality enhancement algorithms, we then evaluated the verification performance with the textural and topological features. The match scores obtained from the textural and topological features were fused using the 2ν-SVM to further evaluate the proposed fusion algorithm. Figs. 9–11 show the receiver operating characteristic (ROC) plots for iris recognition using the textural feature extraction, topological feature extraction, and 2ν-SVM match score fusion algorithms.

Fig. 9 shows the ROC plot for the ICE 2005 database [30], and Fig. 10 shows the results for the CASIA Version 3 database [55]. The ROC plots show that the proposed 2ν-SVM match score fusion performs best, followed by the textural- and topological-feature-based verification. The FRR of the individual features is high, but the fusion algorithm significantly reduces it and provides an FRR of 0.74% at 0.0001% FAR on the ICE 2005 database and 0.38% on the CASIA Version 3 database. The results on the ICE 2005 database also show that the

TABLE III
PERFORMANCE COMPARISON OF THE PROPOSED ALGORITHMS ON THREE IRIS DATABASES

verification performance of the proposed fusion algorithm is comparable to the three best algorithms in the Iris Challenge Evaluation 2005 [31].

The same set of experiments is performed using the UBIRIS database [26]. The images in this database contain irregularities due to motion blur, off-angle acquisition, gaze direction, diffusion, and other real-world problems, which enables us to evaluate the robustness of the proposed algorithms on nonideal iris images. Fig. 11 shows the ROC plot obtained using the UBIRIS database. In this experiment, the best performance of 7.35% FRR at 0.0001% FAR is achieved using the 2ν-SVM match score fusion algorithm. The high rate of false rejection is due to cases where the iris is only partially visible. Examples of such cases are shown in Fig. 12.

The experimental results on all three databases are summarized in Table III. The table shows that the proposed fusion algorithm significantly reduces the FRR. However, the rejection rate cannot be reduced if a closed-eye image or an eye image with limited information is presented for matching.

We next evaluated the effectiveness of the proposed iris image quality enhancement algorithm and compared it with existing enhancement algorithms, namely, Wiener filtering [25] and background subtraction [12]. Table IV shows the results for the proposed and existing verification algorithms when the original iris images are used and when the quality-enhanced images are used. For the ICE 2005 database, this table shows that, without enhancement, the proposed 2ν-SVM fusion algorithm gives 1.99% FRR at 0.0001% FAR. The performance improves by 1.25% when the proposed iris image quality enhancement algorithm is used. We also found that the proposed SVM image quality enhancement algorithm outperforms existing


TABLE IV
EFFECT OF THE PROPOSED IRIS IMAGE QUALITY ENHANCEMENT ALGORITHM AND PERFORMANCE COMPARISON OF IRIS RECOGNITION ALGORITHMS

TABLE V
COMPARISON OF EXISTING FUSION ALGORITHMS WITH THE PROPOSED 2ν-SVM FUSION ALGORITHM ON THE ICE 2005 DATABASE

TABLE VI
AVERAGE TIME TAKEN FOR THE STEPS INVOLVED IN THE PROPOSED IRIS RECOGNITION ALGORITHM

enhancement algorithms by at least 0.89%. Similar results are obtained for the other two iris image databases. The SVM iris image quality enhancement algorithm also improves the performance of existing iris recognition algorithms. The SVM enhancement algorithm performs better because it locally removes irregularities, such as blur and noise, and enhances the intensity of the iris image, whereas the Wiener filter only removes noise, and the background subtraction algorithm only highlights the features by improving the image intensity.

We further compared the performance of the proposed 2ν-SVM fusion algorithm with Daugman’s iris detection and recognition algorithms [1]–[4] and Masek’s implementation of iris recognition [11]. The results in Table IV show that the proposed 2ν-SVM fusion yields better performance than Daugman’s and Masek’s implementations because the 2ν-SVM fusion algorithm uses multiple cues extracted from the iris image and intelligently fuses the match scores such that the false rejection is reduced without increasing the false acceptance rate. The higher performance of the proposed algorithm is also due to the accurate iris segmentation obtained using the modified Mumford–Shah functional.

Furthermore, the performance of the proposed 2ν-SVM fusion algorithm is compared with the sum rule [33], [34], the min/max rule [33], [34], and the kernel-based fusion algorithm [35]. The performance of the proposed and existing fusion algorithms is evaluated on the ICE 2005 database by fusing the match scores obtained from the textural and topological features. Table V shows that the proposed 2ν-SVM fusion algorithm performs best, with 0.74% FRR at 0.0001% FAR, which is 0.74% better than the kernel-based fusion algorithm [35] and 0.83% better than the sum rule [33]. These results,

Fig. 13. CMC plot showing the identification accuracy obtained by the proposed indexing algorithm.

thus, show that the proposed fusion algorithm effectively fuses the textural and topological features of the iris image, enhances the recognition performance, and considerably reduces the FRR. The average time for matching two iris images is 1.56 s on a Pentium IV 3.2-GHz processor with 1-GB RAM under the C programming environment. Table VI shows the breakdown of the computational complexity in terms of the average execution time for iris segmentation, enhancement, feature extraction and matching, and 2ν-SVM fusion.

IX. PERFORMANCE EVALUATION AND VALIDATION FOR IRIS IDENTIFICATION

In this section, we present the performance of the proposed indexing algorithm for iris identification. As in the verification experiments, we use the segmentation, enhancement, feature extraction, and fusion algorithms described in Sections II–V. To validate the performance of the proposed iris indexing algorithm, we combine the three iris databases and generate a nonhomogeneous database with 2085 classes and 26 881 images. The experimental setup (the training data set, the gallery data set, the probe data set, and the segmentation parameters) is similar to the setup used for iris verification.

Using the training data set, we found the value of the geometric tolerance constant to be T = 20. Fig. 13 shows the cumulative matching characteristic (CMC) plots for the proposed indexing algorithm with and without the 2ν-SVM match score fusion. The plots show that a rank-1 identification accuracy of 92.39% is achieved when the indexing algorithm is used without match


TABLE VII
IRIS IDENTIFICATION PERFORMANCE WITH AND WITHOUT THE PROPOSED IRIS INDEXING ALGORITHM. ACCURACY IS REPORTED FOR RANK-1 IDENTIFICATION USING A DATABASE OF 2085 CLASSES WITH 26 881 IRIS IMAGES

score fusion. The accuracy improves to 97.21% with the use of 2ν-SVM match score fusion. Incorporating textural features and match score fusion, thus, reduces the FAR and provides an improvement of around 5% in the rank-1 identification accuracy. We also observed that 100% accuracy could not be achieved on the nonhomogeneous database because it contains occluded images with very limited information, similar to those shown in Fig. 12.

We next compared the identification performance of Daugman’s iris code algorithm and the proposed 2ν-SVM match score fusion without indexing. Daugman’s algorithm is used as a baseline for comparison. Daugman’s algorithm yields an identification accuracy of 95.89%, and the average time required for identifying an individual is 5.58 s. On the other hand, the identification accuracy of the proposed 2ν-SVM match score fusion algorithm without indexing is 97.21%. However, the average time for identifying an individual is 221.14 s, which is considerably higher than that of Daugman’s algorithm. To reduce the time taken for identification, the proposed indexing algorithm described in Section VI is used. Indexing is achieved by using the Euler code, which is computed from the local topological features of the iris image. The indexing algorithm identifies a small subset of the most likely candidates that will yield a match. Specifically, we analyze three scenarios. Case 1 determines a match based on the local topological features. Case 2 is an extension that uses the subset of images identified with the local features; however, the matching is based on the global textural features. Case 3 is a further extension that fuses the match scores obtained from the local and global features to perform identification. The identification performance is determined by experimentally computing the accuracy and the time taken for identification. The results are summarized in Table VII for all three cases when indexing is used with the proposed recognition algorithm. In all three scenarios, the proposed algorithm considerably decreases the identification time, thereby making it suitable for real-time applications and for use with large databases.

In case 1, since only the local Euler feature is used for indexing, the identification time is the fastest (0.043 s); however, the accuracy is lower than that of Daugman’s algorithm. The accuracy improves when the global and local features are used sequentially. Furthermore, as shown in Table VII, case 3 yields the best performance in terms of accuracy (97.21%), with an average identification time of less than 2 s.

X. CONCLUSION

In this paper, we address the challenge of improving the performance of iris verification and identification. This paper presents accurate nonideal iris segmentation using the modified Mumford–Shah functional. Depending on the type of abnormalities likely to be encountered during image capture, a set of global image enhancement algorithms is concurrently applied to the iris image. Although this enhances the low-quality regions, it also adds undesirable artifacts to the originally high-quality regions of the iris image. Enhancing only selected regions of the image is extremely difficult and not pragmatic. This paper describes a novel learning algorithm that selects enhanced regions from each globally enhanced image and synergistically combines them to form a single composite high-quality iris image. Furthermore, we extract global textural and local topological features from the iris image. The corresponding match scores are fused using the proposed 2ν-SVM match score fusion algorithm to further improve the performance. Iris recognition algorithms require a significant amount of time to perform identification. We have proposed an iris indexing algorithm using local and global features to reduce the identification time without compromising the identification accuracy. The performance is evaluated using three nonhomogeneous databases with varying characteristics. The proposed algorithms are also compared with existing algorithms. It is shown that the cumulative effect of accurate segmentation, high-quality iris enhancement, and intelligent fusion of match scores obtained using global and local features reduces the FRR for verification. Moreover, the proposed indexing algorithm significantly reduces the computational time without affecting the identification accuracy.

ACKNOWLEDGMENT

The authors would like to thank Dr. P. Flynn, CASIA (China), and U.B.I. (Portugal) for providing the iris databases
used in this paper. The authors would also like to thank the reviewers and editors for providing constructive and helpful comments.

REFERENCES

[1] J. G. Daugman, “High confidence visual recognition of persons by a test of statistical independence,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 15, no. 11, pp. 1148–1161, Nov. 1993.

[2] J. G. Daugman, “The importance of being random: Statistical principles of iris recognition,” Pattern Recognit., vol. 36, no. 2, pp. 279–291, Feb. 2003.

[3] J. G. Daugman, “Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters,” J. Opt. Soc. Amer. A, Opt. Image Sci., vol. 2, no. 7, pp. 1160–1169, Jul. 1985.

[4] J. G. Daugman, “Biometric personal identification system based on iris analysis,” U.S. Patent 5 291 560, Mar. 1, 1994.

[5] R. P. Wildes, “Iris recognition: An emerging biometric technology,” Proc. IEEE, vol. 85, no. 9, pp. 1348–1363, Sep. 1997.

[6] W. W. Boles and B. Boashash, “A human identification technique using images of the iris and wavelet transform,” IEEE Trans. Signal Process., vol. 46, no. 4, pp. 1185–1188, Apr. 1998.

[7] Y. Zhu, T. Tan, and Y. Wang, “Biometric personal identification based on iris patterns,” in Proc. IEEE Int. Conf. Pattern Recog., 2000, pp. 2801–2804.

[8] C. L. Tisse, L. Martin, L. Torres, and M. Robert, “Iris recognition system for person identification,” in Proc. 2nd Int. Workshop Pattern Recog. Inf. Syst., 2002, pp. 186–199.

[9] C. L. Tisse, L. Torres, and R. Michel, “Person identification technique using human iris recognition,” in Proc. 15th Int. Conf. Vis. Interface, 2002, pp. 294–299.

[10] W.-S. Chen and S.-Y. Yuan, “A novel personal biometric authentication technique using human iris based on fractal dimension features,” in Proc. Int. Conf. Acoust., Speech, Signal Process., 2003, vol. 3, pp. 201–204.

[11] L. Masek and P. Kovesi, MATLAB Source Code for a Biometric Identification System Based on Iris Patterns. Perth, Australia: School Comput. Sci. Softw. Eng., Univ. Western Australia, 2003. [Online]. Available: http://www.csse.uwa.edu.au/pk/studentprojects/libor/sourcecode.html

[12] L. Ma, T. Tan, Y. Wang, and D. Zhang, “Personal identification based on iris texture analysis,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 12, pp. 1519–1533, Dec. 2003.

[13] L. Ma, T. Tan, Y. Wang, and D. Zhang, “Efficient iris recognition by characterizing key local variations,” IEEE Trans. Image Process., vol. 13, no. 6, pp. 739–750, Jun. 2004.

[14] B. R. Meena, M. Vatsa, R. Singh, and P. Gupta, “Iris based human verification algorithms,” in Proc. Int. Conf. Biometric Authentication, 2004, pp. 458–466.

[15] M. Vatsa, R. Singh, and P. Gupta, “Comparison of iris recognition algorithms,” in Proc. Int. Conf. Intell. Sens. Inf. Process., 2004, pp. 354–358.

[16] C. Sanchez-Avila and R. Sanchez-Reillo, “Two different approaches for iris recognition using Gabor filters and multiscale zero-crossing representation,” Pattern Recognit., vol. 38, no. 2, pp. 231–240, Feb. 2005.

[17] M. Vatsa, “Reducing false rejection rate in iris recognition by quality enhancement and information fusion,” M.S. thesis, West Virginia Univ., Morgantown, WV, 2005.

[18] M. Vatsa, R. Singh, and A. Noore, “Reducing the false rejection rate of iris recognition using textural and topological features,” Int. J. Signal Process., vol. 2, no. 1, pp. 66–72, 2005.

[19] L. Yu, D. Zhang, K. Wang, and W. Yang, “Coarse iris classification using box-counting to estimate fractal dimensions,” Pattern Recognit., vol. 38, no. 11, pp. 1791–1798, Nov. 2005.

[20] B. Ganeshan, D. Theckedath, R. Young, and C. Chatwin, “Biometric iris recognition system using a fast and robust iris localization and alignment procedure,” Opt. Lasers Eng., vol. 44, no. 1, pp. 1–24, Jan. 2006.

[21] N. D. Kalka, J. Zuo, V. Dorairaj, N. A. Schmid, and B. Cukic, “Image quality assessment for iris biometric,” in Proc. SPIE Conf.—Biometric Technology for Human Identification III, 2006, vol. 6202, pp. 62020D-1–62020D-11.

[22] H. Proenca and L. A. Alexandre, “Toward noncooperative iris recognition: A classification approach using multiple signatures,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 4, pp. 607–612, Apr. 2007.

[23] J. Thornton, M. Savvides, and B. V. K. Vijaya Kumar, “A Bayesian approach to deformed pattern matching of iris images,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 4, pp. 596–606, Apr. 2007.

[24] D. M. Monro, S. Rakshit, and D. Zhang, “DCT-based iris recognition,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 4, pp. 586–596, Apr. 2007.

[25] A. Poursaberi and B. N. Araabi, “Iris recognition for partially occluded images: Methodology and sensitivity analysis,” EURASIP J. Adv. Signal Process., vol. 2007, no. 1, p. 20, Jan. 2007. Article ID 36751.

[26] H. Proenca and L. A. Alexandre, “UBIRIS: A noisy iris image database,” in Proc. 13th Int. Conf. Image Anal. Process., 2005, vol. 1, pp. 970–977.

[27] [Online]. Available: http://iris.di.ubi.pt/

[28] K. W. Bowyer, K. Hollingsworth, and P. J. Flynn, “Image understanding for iris biometrics: A survey,” Comput. Vis. Image Underst., 2008. DOI: 10.1016/j.cviu.2007.08.005, to be published.

[29] R. Singh, M. Vatsa, and A. Noore, “Improving verification accuracy by synthesis of locally enhanced biometric images and deformable model,” Signal Process., vol. 87, no. 11, pp. 2746–2764, Nov. 2007.

[30] X. Liu, K. W. Bowyer, and P. J. Flynn, “Experiments with an improved iris segmentation algorithm,” in Proc. 4th IEEE Workshop Autom. Identification Adv. Technol., 2005, pp. 118–123.

[31] [Online]. Available: http://iris.nist.gov/ice/ICE_Home.htm

[32] J. Daugman, “New methods in iris recognition,” IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 37, no. 5, pp. 1168–1176, Oct. 2007.

[33] J. Kittler, M. Hatef, R. P. Duin, and J. G. Matas, “On combining classifiers,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 20, no. 3, pp. 226–239, Mar. 1998.

[34] A. Ross and A. K. Jain, “Information fusion in biometrics,” Pattern Recognit. Lett., vol. 24, no. 13, pp. 2115–2125, Sep. 2003.

[35] J. F. Aguilar, J. O. Garcia, J. G. Rodriguez, and J. Bigun, “Kernel-based multimodal biometric verification using quality signals,” in Proc. SPIE—Biometric Technology for Human Identification, 2004, vol. 5404, p. 544.

[36] B. Duc, G. Maitre, S. Fischer, and J. Bigun, “Person authentication by fusing face and speech information,” in Proc. 1st Int. Conf. Audio Video Based Biometric Person Authentication, 1997, pp. 311–318.

[37] A. Ross and S. Shah, “Segmenting non-ideal irises using geodesic active contours,” in Proc. Biometric Consortium Conf., 2006, pp. 1–6.

[38] E. M. Arvacheh and H. R. Tizhoosh, “Iris segmentation: Detecting pupil, limbus and eyelids,” in Proc. IEEE Int. Conf. Image Process., 2006, pp. 2453–2456.

[39] X. Liu, “Optimizations in iris recognition,” Ph.D. dissertation, Univ. Notre Dame, Notre Dame, IN, 2006.

[40] A. Tsai, A. Yezzi, Jr., and A. Willsky, “Curve evolution implementation of the Mumford–Shah functional for image segmentation, denoising, interpolation, and magnification,” IEEE Trans. Image Process., vol. 10, no. 8, pp. 1169–1186, Aug. 2001.

[41] T. Chan and L. Vese, “Active contours without edges,” IEEE Trans. Image Process., vol. 10, no. 2, pp. 266–277, Feb. 2001.

[42] R. Malladi, J. Sethian, and B. Vemuri, “Shape modeling with front propagation: A level set approach,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 17, no. 2, pp. 158–175, Feb. 1995.

[43] J. Z. Huang, L. Ma, T. N. Tan, and Y. H. Wang, “Learning-based enhancement model of iris,” in Proc. Brit. Mach. Vis. Conf., 2003, pp. 153–162.

[44] V. N. Vapnik, The Nature of Statistical Learning Theory, 2nd ed. New York: Springer-Verlag, 1999.

[45] P.-H. Chen, C.-J. Lin, and B. Schölkopf, “A tutorial on ν-support vector machines,” Appl. Stoch. Models Bus. Ind., vol. 21, no. 2, pp. 111–136, Mar./Apr. 2005.

[46] C. C. Chang and C. J. Lin, LIBSVM: A Library for Support Vector Machines, 2000. [Online]. Available: http://www.csie.ntu.edu.tw/~cjlin/libsvm

[47] R. Singh, M. Vatsa, and A. Noore, “SVM based adaptive biometric image enhancement using quality assessment,” in Speech, Audio, Image and Biomedical Signal Processing Using Neural Networks, B. Prasad and S. R. M. Prasanna, Eds. New York: Springer-Verlag, 2008, ch. 16, pp. 351–372.

[48] D. J. Field, “Relations between the statistics of natural images and the response properties of cortical cells,” J. Opt. Soc. Amer. A, Opt. Image Sci., vol. 4, no. 12, pp. 2379–2394, Dec. 1987.

[49] A. Bishnu, B. B. Bhattacharya, M. K. Kundu, C. A. Murthy, and T. Acharya, “Euler vector for search and retrieval of gray-tone images,” IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 35, no. 4, pp. 801–812, Aug. 2005.

[50] C. Palm and T. M. Lehmann, “Classification of color textures by Gabor filtering,” Mach. Graph. Vis., vol. 11, no. 2/3, pp. 195–219, 2002.

[51] D. J. Field, “What is the goal of sensory coding?” Neural Comput., vol. 6, no. 4, pp. 559–601, Jul. 1994.


[52] Y. Wang, T. Tan, and A. K. Jain, “Combining face and iris biometrics for identity verification,” in Proc. 4th Int. Conf. Audio Video Based Biometric Person Authentication, 2003, pp. 805–813.

[53] H. G. Chew, C. C. Lim, and R. E. Bogner, “An implementation of training dual-nu support vector machines,” in Optimization and Control with Applications, L. Qi, K. L. Teo, and X. Yang, Eds. Norwell, MA: Kluwer, 2005.

[54] R. Mukherjee, “Indexing techniques for fingerprint and iris databases,” M.S. thesis, West Virginia Univ., Morgantown, WV, 2007.

[55] [Online]. Available: http://www.cbsr.ia.ac.cn/IrisDatabase/irisdatabase.php

Mayank Vatsa (S’04) received the M.S. degree in computer science in 2005 and is currently working toward the Ph.D. degree in computer science at West Virginia University, Morgantown.

He was actively involved in the development of a multimodal biometric system, which includes face, fingerprint, signature, and iris recognition, at the Indian Institute of Technology, Kanpur, India, from July 2002 to July 2004. He has more than 65 publications in refereed journals, book chapters, and conferences. His current areas of interest include pattern recognition, image processing, uncertainty principles, biometrics, watermarking, and information fusion.

Mr. Vatsa is a member of the IEEE Computer Society and ACM. He is also a member of the Phi Kappa Phi, Tau Beta Pi, Sigma Xi, Upsilon Pi Epsilon, and Eta Kappa Nu honor societies. He was the recipient of four best paper awards.

Richa Singh (S’04) received the M.S. degree in computer science in 2005 and is currently working toward the Ph.D. degree in computer science at West Virginia University, Morgantown.

She was actively involved in the development of a multimodal biometric system, which includes face, fingerprint, signature, and iris recognition, at the Indian Institute of Technology, Kanpur, from July 2002 to July 2004. Her current areas of interest include pattern recognition, image processing, machine learning, granular computing, biometrics, and data fusion. She has more than 65 publications in refereed journals, book chapters, and conferences.

Ms. Singh is a member of the IEEE Computer Society and ACM. She is also a member of the Phi Kappa Phi, Tau Beta Pi, Upsilon Pi Epsilon, and Eta Kappa Nu honor societies. She was the recipient of four best paper awards.

Afzel Noore (M’03) received the Ph.D. degree in electrical engineering from West Virginia University, Morgantown.

He was a Digital Design Engineer with Philips India. From 1996 to 2003, he was the Associate Dean for Academic Affairs and Special Assistant to the Dean with the College of Engineering and Mineral Resources, West Virginia University, where he is currently a Professor with the Lane Department of Computer Science and Electrical Engineering. His research has been funded by NASA, NSF, Westinghouse, GE, the Electric Power Research Institute, the U.S. Department of Energy, and the U.S. Department of Justice. He serves on the editorial boards of Recent Patents on Engineering and the Open Nanoscience Journal. He has over 90 publications in refereed journals, book chapters, and conferences. His research interests include computational intelligence, biometrics, software reliability modeling, machine learning, hardware description languages, and quantum computing.

Dr. Noore is a member of the Phi Kappa Phi, Sigma Xi, Eta Kappa Nu, and Tau Beta Pi honor societies. He was the recipient of four best paper awards.