COMPREHENSIVE ASSESSMENT OF IRIS IMAGE QUALITY

Xingguang Li 1,2, Zhenan Sun 2, Tieniu Tan 2

1. Department of Automation, University of Science and Technology of China
2. National Laboratory of Pattern Recognition, Institute of Automation,

Chinese Academy of Sciences, P.O. Box 2728, Beijing, P.R. China, 100190
{lxguang}@mail.ustc.edu.cn, {znsun,tnt}@nlpr.ia.ac.cn

ABSTRACT

Iris image quality critically determines iris recognition performance, and the quality metrics of iris images are also useful prior information for adaptive selection of an optimal recognition strategy. Iris image quality is jointly determined by multiple factors such as focus, occlusion, off-angle, deformation, etc., so assessing the overall quality score of an iris image is a complex problem. This paper proposes a novel framework for comprehensive assessment of iris image quality. The contributions of the paper are threefold: (i) three novel approaches are proposed to estimate the quality metrics (QMs) of defocus, motion blur and off-angle in an iris image respectively; (ii) a fusion method based on the likelihood ratio is proposed to combine six quality factors of an iris image into a unified quality score; (iii) a statistical quantization method based on the t-test is proposed to adaptively classify the iris images in a database into a number of quality levels. Extensive experiments demonstrate that the proposed framework can effectively assess the overall quality of iris images, and that the relationship between iris recognition results and the quality level of iris images can be explicitly formulated.

Index Terms— Iris recognition, image quality assessment, defocus, motion blur, off-angle

1. INTRODUCTION

Iris recognition is a reliable personal identification method with important applications in our society. However, usability is the largest bottleneck of iris recognition, and it is very difficult to capture high-quality iris images due to the limited effective imaging volume of iris sensors. So it is inevitable that the image sequences acquired by iris cameras have a wide distribution of quality measures. Iris image quality is closely related to iris recognition results, so we need to select high-quality iris image samples from the input sequences to improve recognition performance. With the fast development of iris recognition algorithms, some poor-quality iris images can also be processed using adaptive preprocessing and recognition algorithms. For example, if we know the off-angle value of an iris image in advance, we can correct it before recognition. Iris image quality assessment is therefore a critical module in iris recognition systems.

The image quality factors that affect iris recognition performance mainly include defocus, motion blur, off-angle, occlusion, deformation and light variation, as depicted in Fig. 1. Iris image quality assessment has attracted the attention of researchers, and a number of methods [1, 2, 3] have been proposed to address individual quality metrics or quality measures (abbreviated as QM). However, comprehensive assessment of iris image quality is still a complex and challenging problem because the overall iris image

Fig. 1. Examples of quality metrics: (a) defocus, (b) motion blur, (c) off-angle, (d) occlusion, (e) deformation, (f) light variation. The overlaid circles are the localization results.

quality score is jointly determined by multiple factors, and there is no well-accepted criterion or standard defining the weight of each factor in iris image quality. A straightforward approach is to fuse the QMs into a quality score (QS). There are several methods for QS fusion in the literature. Belcher et al. [4] used a different normalization function for each QM and fused them by multiplication. Nathan et al. [3] proposed to use Dempster-Shafer theory to combine all the QMs into a QS. Zuo et al. [5] adopted a multivariate adaptive mapping using a feed-forward neural network with two hidden layers to fuse the QMs.

Since the fused QS is continuous and adjacent QS values are not distinct, it is necessary to quantize the QS into several quality levels (QLs) to improve the statistical significance of iris image quality values. However, deciding the number of QLs is a frequently encountered problem in quality evaluation. In [6], Grother et al. demonstrated that quantized quality levels are more appropriate in practice than the continuous QS. They measured the significance of the difference between the genuine distributions of adjacent QLs to determine the number of QLs, but the Kolmogorov-Smirnov (KS) test they used is complicated to apply in practice.

In this paper, we propose a comprehensive scheme to assess the quality scores of iris images. Firstly, we present three novel methods to measure defocus, motion blur and off-angle accurately. Then we employ the improved likelihood ratio-based fusion method [7] in the QM fusion stage to obtain the quality score. Finally, the t-test is applied to decide the number of quality levels.

The remainder of this paper is organized as follows. Section 2 describes the details of the proposed method. Section 3 provides the experiments and analysis. Section 4 summarizes the paper.

2011 18th IEEE International Conference on Image Processing

978-1-4577-1302-6/11/$26.00 ©2011 IEEE 3178

2. IRIS IMAGE QUALITY

The framework for comprehensive assessment of iris image quality is illustrated in Fig. 2.

Fig. 2. The flowchart of our proposed method

2.1. Individual quality metric assessment

2.1.1. Defocus

The focus value of an iris image is commonly estimated from the output of high-pass filtering over the entire image [1, 2] or over a fixed region of interest (ROI) [3]. However, high-frequency information in non-iris image regions cannot represent the focus of the iris texture. So the key point is how to select appropriate image regions for focus estimation.

We propose a patch selection method to attenuate the effect of the eyelid, eyelashes and spot light. Firstly, two original patches are chosen (see Fig. 3(a)). Then we predefine two threshold values, thresh_high and thresh_low. If the gray value of a pixel in the two patches lies in the range [0, thresh_low] or [thresh_high, 255], it is regarded as a defective pixel. The numbers of defective pixels in the two patches are denoted by {DP_i | i = 1, 2}, where i = 1, 2 stand for the left and right patch respectively. If both DP_i are equal to zero, both patches are chosen for defocus assessment as shown in Fig. 3(a); otherwise, the patch with the smaller DP_i is chosen as shown in Fig. 3(b). Our strategy not only reduces the effect of the eyelid, eyelashes and spot light but is also robust to eyelid asymmetry.
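A minimal sketch of this patch selection, assuming illustrative patch positions and threshold values of 30 and 225 (the paper does not give concrete numbers for either):

```python
import numpy as np

def count_defective(patch, thresh_low=30, thresh_high=225):
    """Count near-black or near-saturated pixels; threshold values here are
    illustrative assumptions, not taken from the paper."""
    return int(np.sum((patch <= thresh_low) | (patch >= thresh_high)))

def select_focus_patches(image, boxes, thresh_low=30, thresh_high=225):
    """Choose the ROI(s) for defocus assessment from two candidate patches.

    boxes: ((row, col, h, w), (row, col, h, w)) for the left and right
    candidate patch; the placement of the patches is an assumption.
    """
    patches = [image[r:r + h, c:c + w] for (r, c, h, w) in boxes]
    dps = [count_defective(p, thresh_low, thresh_high) for p in patches]
    if dps[0] == 0 and dps[1] == 0:
        return patches                        # both patches are defect-free
    return [patches[int(np.argmin(dps))]]     # keep the less-defective patch
```

A patch covered by a specular highlight is rejected in favour of the clean one, which is what makes the scheme robust to eyelid asymmetry.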

Fig. 3. ROI selection for defocus assessment. (a) No effect of eyelid, (b) asymmetry caused by a left tilt of the eyelid, (c) the 5×5 convolution kernel.

Then the 5×5 convolution kernel proposed by Wei et al. in [2] is adopted to measure the degree of defocus in the chosen ROI.
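The filtering step can be illustrated as follows; note that the zero-sum 5×5 kernel below is only a placeholder, not the actual kernel of Wei et al. [2], whose coefficients are not reproduced in this paper:

```python
import numpy as np

# Placeholder zero-sum 5x5 high-pass kernel; the kernel of Wei et al. [2]
# differs - this one only illustrates how the defocus energy is accumulated.
KERNEL = -np.ones((5, 5))
KERNEL[2, 2] = 24.0

def defocus_energy(roi):
    """Mean squared high-pass response over the ROI: a defocused (smooth)
    region yields a small value, a sharply textured region a large one."""
    h, w = roi.shape
    acc = 0.0
    for i in range(2, h - 2):
        for j in range(2, w - 2):
            acc += float(np.sum(KERNEL * roi[i - 2:i + 3, j - 2:j + 3])) ** 2
    return acc / ((h - 4) * (w - 4))
```

Because the kernel sums to zero, a perfectly flat (fully defocused) patch scores exactly zero, and the score grows with texture sharpness.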

2.1.2. Motion blur

Because motion blur compresses the frequency band perpendicular to the motion direction, a directional filter is applied in the frequency domain to measure the degree of motion blur in [3]. However, that algorithm relies on tuning the parameters of the directional filter group: the scale of the directional filter and the angle interval. Thus, we propose an alternative method based on the Radon transform to improve motion blur estimation.

Figure 4 displays the process of the motion blur assessment module. The original iris image is converted into the frequency domain by FFT, as shown in Fig. 4(b). Then we apply the Radon transform to the overall filtered frequency domain to estimate the motion orientation as follows:

Θ̂ = arg max_{θ ∈ [0, 180]} ‖Radon_θ(F(I))‖    (1)

where F(I) is the Fourier transform of the input iris image and Θ̂ is the motion orientation. Radon stands for the Radon transform of the overall image (see Fig. 4(c)). We do not need to set a scale or angle interval. The obtained orientation of motion is defined as the principal orientation. The response of the Radon transform in the principal orientation is denoted by RH(y) (see Fig. 4(d)), and it is smoothed by a Gaussian filter to remove outliers. Given RH(y) ∗ G(σ), the "slice" measure defined in [3] is introduced to compute the QM of motion blur.
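A rough sketch of the orientation search in Eq. (1): summing the log-spectrum along lines through the spectrum centre stands in for a full Radon transform (a real implementation would use, e.g., skimage.transform.radon and take the maximum over all offsets):

```python
import numpy as np

def radon_peak_angle(img, angles=range(0, 180)):
    """Estimate the principal orientation: the angle whose central-line
    integral of the (shifted, log-compressed) FFT magnitude is largest.

    Nearest-neighbour central-line sums approximate the Radon maximum;
    this is a simplification of the transform used in the paper.
    """
    spec = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img))))
    h, w = spec.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.mgrid[0:h, 0:w]
    best_angle, best_resp = 0, -1.0
    for theta in angles:
        t = np.deg2rad(theta)
        # signed distance of each pixel from the line through the centre
        d = (xx - cx) * np.sin(t) - (yy - cy) * np.cos(t)
        resp = spec[np.abs(d) < 0.5].sum()   # pixels lying on that line
        if resp > best_resp:
            best_resp, best_angle = resp, theta
    return best_angle
```

For a horizontally varying pattern, whose spectral energy lies on the horizontal frequency axis, the search returns 0 degrees.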

Fig. 4. Procedure of motion blur assessment. (a) The motion-blurred iris image, (b) FFT of the iris image, (c) Radon transform, (d) the response of the Radon transform in the principal direction.

2.1.3. Off-angle

Usually, the pupil is modeled as an ellipse, and the circularity of the pupil is measured to define the degree of off-angle [3, 8]. However, the pupil of an on-angle iris image may not be circular. Figure 5 displays negative examples against [3, 8]; the number below each figure is the ratio of the minor axis to the major axis of the fitted ellipse, and the smaller the number, the higher the degree of off-angle. Assuming there are one or more light sources in the image acquisition system whose direction is approximately aligned with the optical axis, we define the reflections from the light sources as the main spots. Based on the geometrical relationship between the main spots and the iris center, we can discard extreme off-angle iris images. In the following, we take an iris image with two main spots as an example. If there is only one main spot in the imaging system (for example in ICE 1.0), we define the distance between the main spot and the iris center as the QM of off-angle.

A scalable spot operator and a threshold method are combined to detect the main spots. Firstly, the original iris image is transformed into a binary image using an adaptive threshold to obtain the spot candidates, among which oversized light spots are discarded. Secondly, a scalable square template is constructed as a high-pass filter; the scale of the template is twice the size of the corresponding spot. Finally, the responses of all spot candidates filtered by their corresponding square templates are ranked, and the top two spots are taken as the main spots. The maximum distance between the main spots and the iris center is defined as the QM of off-angle. If the QM is larger than the radius of the pupil, the iris image is regarded as off-angle.
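Once the main spots have been detected, the off-angle QM reduces to a distance test against the pupil radius; a minimal sketch (the spot detection itself, i.e. adaptive thresholding plus square-template filtering, is omitted here):

```python
import math

def off_angle_qm(spot_centres, iris_centre, pupil_radius):
    """Off-angle QM from the detected main specular spots.

    spot_centres: (x, y) positions of the main spots.
    Returns (qm, is_off_angle): qm is the maximum spot-to-iris-centre
    distance; the image is flagged off-angle when qm exceeds the
    pupil radius.
    """
    qm = max(math.dist(s, iris_centre) for s in spot_centres)
    return qm, qm > pupil_radius
```

When the spots sit close to the iris centre the gaze is near the optical axis; a spot displaced beyond the pupil radius flags the image as off-angle.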

Fig. 5. Negative examples for circularity-based off-angle measures: (a) 0.92, (b) 0.95. The major axis of each ellipse is marked, and the number below each figure is the ratio of the minor axis to the major axis.

2.1.4. Other quality metrics

The method proposed in [9] is used to segment the iris image. Based on the localization and segmentation results, the QMs of occlusion and dilation are calculated with the methods in [3]. The mean gray value in the valid iris region is used to measure the degree of light variation.

2.2. Quality score fusion

A unified quality score is desirable for predicting the performance of an iris recognition system. Due to the diversity of the QMs of an iris image, we encounter two problems in the fusion stage. The first is that it is hard to label the training samples into many QLs; the second is that any of the QMs should directly affect the fused QS. Fusion methods based on the sum rule or weighted sums are not suitable here because they attenuate the effect of individual QMs with small values.

To solve these problems, we propose a new fusion algorithm based on the likelihood ratio fusion adopted in [7]. In the fusion stage, we substitute the probability distribution for the probability density, since the probability distribution is monotone with the quality degree and robust to outliers. The QMs of defocus, motion blur, off-angle, occlusion, dilation and the mean value in the valid iris region mentioned above are combined together. The fused QS is defined as follows:

QS(u) = ∏_{i=1}^{n} F(u_i | H1) / F(u_i | H0)    (2)

where F(u_i | ·) is the probability distribution of the i-th QM, n is the number of QMs, and H1 and H0 represent the positive and negative samples respectively. The fusion algorithm has two advantages. First, we only need to label the iris images into two classes for each QM, for example focused versus defocused. Second, the product rule is better than the weighted sum rule because it does not attenuate the effect of any single QM.
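Equation (2) can be sketched with empirical distribution functions estimated from the two labeled training classes; the small positive clamp that avoids division by zero is our own addition, not from the paper:

```python
import numpy as np

def empirical_cdf(samples):
    """Return the empirical distribution function F(x) of the samples."""
    s = np.sort(np.asarray(samples, dtype=float))

    def F(x):
        # proportion of training samples <= x, kept strictly positive
        return max(np.searchsorted(s, x, side="right") / len(s), 1e-6)

    return F

def fuse_qs(qms, pos_train, neg_train):
    """Eq. (2): QS(u) = prod_i F(u_i | H1) / F(u_i | H0).

    qms:       the n quality measures of one image.
    pos_train: per-QM lists of training values from good images (H1).
    neg_train: per-QM lists of training values from poor images (H0).
    """
    qs = 1.0
    for u, pos, neg in zip(qms, pos_train, neg_train):
        qs *= empirical_cdf(pos)(u) / empirical_cdf(neg)(u)
    return qs
```

Because the factors multiply, a single very poor QM drags the whole QS down instead of being averaged away, which is the advantage over the weighted sum rule.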

2.3. Quality levels determination

It is accepted that images of different quality contribute differently to system performance; therefore a good quality assessment should predict performance precisely. However, it is difficult to formulate this relationship explicitly. A robust alternative is to quantize the QS into several levels such that adjacent levels yield significantly different performance [6]. Inspired by [6], we apply the t-test [10], a statistical hypothesis test, to the genuine distribution to automatically divide iris images into distinct groups with significantly different performance. The t-test assesses whether the means of two groups are statistically different from each other when the test statistic conforms to a normal distribution. It is a natural choice here because the Hamming distance used for iris recognition follows a binomial distribution [11], which can be approximated by a normal distribution. In the experiments, the p-value is calculated from the t-test. If the p-value between adjacent QLs is larger than 0.005, the two QLs are not significantly different, which indicates that the division into QLs is too fine.
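The quantization criterion can be sketched as follows. The Python stdlib has no t-test, so this sketch computes Welch's t statistic by hand and approximates its p-value with the normal CDF, which the normal approximation above justifies (SciPy's `ttest_ind` would give the exact Student-t value):

```python
import math
from statistics import mean, variance

def welch_p_value(a, b):
    """Two-sided p-value of Welch's two-sample t statistic, using the
    normal approximation to the t distribution."""
    na, nb = len(a), len(b)
    t = (mean(a) - mean(b)) / math.sqrt(variance(a) / na + variance(b) / nb)
    # two-sided p under N(0,1): 2 * (1 - Phi(|t|))
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(t) / math.sqrt(2))))

def levels_are_distinct(genuine_a, genuine_b, alpha=0.005):
    """Adjacent QLs stay separate only if their genuine Hamming-distance
    distributions differ significantly (p <= alpha); otherwise merge."""
    return welch_p_value(genuine_a, genuine_b) <= alpha
```

Starting from a fine division, adjacent levels are merged whenever this test fails, which yields the number of QLs automatically.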

3. EXPERIMENTAL RESULTS

3.1. Database

To validate the effectiveness of the individual QM assessment, a database (abbreviated as DB1) is constructed, which contains 119 clear images, 112 defocused images, 102 motion-blurred images and 52 off-angle images.

We also constructed a database (abbreviated as DB2) containing 1000 iris images from 20 subjects, with 50 images per subject. DB2 is a subset of the original iris image sequences including both good- and poor-quality iris images, and the quality range in DB2 is wider because the original image sequences contain more poor-quality images. DB2 and ICE 1.0 are used to verify the robustness of the proposed method on different databases. The images of ICE-Left and 500 iris images of 10 subjects from DB2 are used as the training sets, and the images of ICE-Right and the remainder of DB2 as the respective testing sets.

3.2. QMs of defocus, motion blur and off-angle

The proposed individual QMs are tested on DB1. Off-angle assessment is a coarse classification: it is not necessary to assess the degree of off-angle, only to judge whether the iris image is off-angle or not.

Table 1. CCR for different QMs

                               Clear     Defocus   Motion    Off-angle
  Num.                         119       112       102       52
  DCCR (thresh = 8×10^-6)      100%      100%      -         -
  MCCR (thresh = 155)          91.60%    -         89.22%    -
  OCCR                         100%      -         -         96.15%

We adopt the Correct Classification Rate (CCR) to evaluate the performance of the individual QMs. DCCR (thresh = 8×10^-6) is defined as the CCR for clear vs. defocused iris images, and similarly for MCCR (thresh = 155) and OCCR. From Table 1 and Fig. 6, we can see that the proposed methods are robust and accurate, especially for defocus assessment.
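The CCR itself is simply the fraction of images assigned to the correct quality class:

```python
def ccr(predicted, actual):
    """Correct Classification Rate: fraction of samples whose quality
    class was predicted correctly."""
    assert len(predicted) == len(actual)
    return sum(p == a for p, a in zip(predicted, actual)) / len(actual)
```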

3.3. Quality score and Quality levels

The QS is an overall indicator that predicts the performance of the recognition system: the higher the quality level of the iris images, the better the performance. As shown in Table 2 (q_i and q_{i-1} stand for adjacent QLs), taking ICE as an example, when the number of quality levels is three, all of the p-values are zero. However, when one more level is added, the p-value between q1 and q2 is larger than 0.005, which means the distributions of q1 and q2 are not significantly different and these two QLs should be combined

Fig. 6. Distribution of QM assessment: (a) clear vs. defocused images, (b) clear vs. motion-blurred images.

together. Besides, the EERs do not increase monotonically across the quality levels when there are four QLs. Three quality levels are therefore suitable for ICE 1.0. Table 2 also shows that DB2 supports more quality levels, reflecting the fact that the quality range of DB2 is wider because it comes from the original image sequences.

Fig. 7. The ROC curves of different quality levels: (a) ROC curves on the ICE 1.0 test set, (b) ROC curves on the DB2 test set.

Figure 7¹ displays the ROC curves on the test sets with different QLs for ICE 1.0 and DB2 respectively. The recognition system achieves better performance with iris images of higher quality than with those of lower quality. The improvement across quality levels is larger on DB2 than on ICE 1.0, which demonstrates that the quality score is a robust predictor of iris recognition performance.

Table 2. T-test for deciding the number of quality levels

  ICE 1.0                                  DB2
  Quality Level  EER       p(qi, qi-1)     Quality Level  EER       p(qi, qi-1)
  1              0.000928  -               1              0.044489  -
  2              0.003257  0               2              0.051122  0
  3              0.007694  0               3              0.090424  0
                                           4              0.205940  0
  1              0.000437  -               1              0.044702  -
  2              0.004223  0.4326          2              0.030495  0.1341
  3              0.003150  0               3              0.074817  0
  4              0.009209  0               4              0.092421  0
                                           5              0.223850  0

4. CONCLUSIONS

In this paper, we have proposed new methods to assess the quality of iris images affected by defocus, motion blur and off-angle. A simple yet effective fusion approach is presented for combining all the quality measures into a quality score. Extensive experiments demonstrate that the quality score is a robust predictor of iris recognition performance. In future work, we plan to enhance iris images based on the quality score to further improve system performance.

¹ In the matching stage, we adopt the two-lobe OM proposed in [12].

5. ACKNOWLEDGEMENT

This work is funded by the National Natural Science Foundation of China (Grants No. 60736018 and 61075024) and the International S&T Cooperation Program of China (Grant No. 2010DFB14110).

6. REFERENCES

[1] J. Daugman, "How iris recognition works," IEEE Trans. on Circuits and Systems for Video Technology, vol. 14, no. 1, pp. 21–30, 2004.

[2] Z. Wei, T. Tan, Z. Sun, and J. Cui, "Robust and fast assessment of iris image quality," in Proc. Int. Conf. on Biometrics (LNCS 3832), 2005, pp. 464–471.

[3] N. D. Kalka, J. Zuo, N. A. Schmid, and B. Cukic, "Estimating and fusing quality factors for iris biometric images," IEEE Trans. on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 40, no. 3, pp. 509–524, 2010.

[4] C. Belcher and Y. Du, "A selective feature information approach for iris image-quality measure," IEEE Trans. on Information Forensics and Security, vol. 3, no. 3, pp. 572–577, 2008.

[5] J. Zuo, F. Nicolo, N. A. Schmid, and H. Wechsler, "Adaptive biometric authentication using nonlinear mappings on quality measures and verification scores," in Proc. ICIP, 2010.

[6] P. Grother and E. Tabassi, "Performance of biometric quality measures," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 29, no. 4, pp. 531–543, April 2007.

[7] K. Nandakumar, Y. Chen, S. C. Dass, and A. K. Jain, "Likelihood ratio-based biometric score fusion," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 30, no. 2, pp. 342–347, February 2008.

[8] J. Zuo and N. A. Schmid, "Global and local quality measures for NIR iris video," in Proc. IEEE CVPR Workshops, Miami, FL, June 2009, pp. 1–6.

[9] Z. He, T. Tan, Z. Sun, and X. Qiu, "Toward accurate and fast iris segmentation for iris biometrics," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 31, pp. 1670–1684, Sep. 2009.

[10] Wikipedia, "Student's t-test," Wikipedia, the free encyclopedia, 2011, [Online; accessed 17-January-2011].

[11] J. Daugman, "The importance of being random: statistical principles of iris recognition," Pattern Recognition, vol. 36, pp. 279–291, 2003.

[12] Z. Sun and T. Tan, "Ordinal measures for iris recognition," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 31, no. 12, pp. 2211–2226, Dec. 2009.
