Nonintrusive iris image acquisition system based on a pan-tilt-zoom camera and light stripe projection

Soweon Yoon, Ho Gi Jung
Yonsei University, School of Electrical and Electronic Engineering, 134 Shinchon-dong, Seodaemun-gu, Seoul 120-749, Korea

Kang Ryoung Park
Dongguk University, Biometrics Engineering Research Center, Department of Electronics Engineering, 26 Pil-dong 3-ga, Jung-gu, Seoul 100-715, Korea

Jaihie Kim
Yonsei University, School of Electrical and Electronic Engineering, 134 Shinchon-dong, Seodaemun-gu, Seoul 120-749, Korea
E-mail: [email protected]

Abstract. Although iris recognition is one of the most accurate biometric technologies, it has not yet been widely used in practical applications. This is mainly due to user inconvenience during the image acquisition phase: users must hold their eyes within a small capture volume at a close distance from the system. To overcome these problems, we propose a novel iris image acquisition system that provides users with an unconstrained environment: a large operating range, freedom of movement while standing naturally, and capture of good-quality iris images in an acceptable time. The proposed system makes the following three contributions compared with previous works: (1) the capture volume is significantly increased by using a pan-tilt-zoom (PTZ) camera guided by a light stripe projection; (2) the iris location in the large capture volume is found quickly, thanks to 1-D vertical face searching from the user's horizontal position obtained by the light stripe projection; and (3) zooming and focusing on the user's irises at a distance are accurate and fast, using the 3-D position of the face estimated by the light stripe projection and the PTZ camera. Experimental results show that the proposed system can capture good-quality iris images in 2.479 s on average at a distance of 1.5 to 3 m, while allowing a limited amount of movement by the user. © 2009 Society of Photo-Optical Instrumentation Engineers. [DOI: 10.1117/1.3095905]

Subject terms: iris image acquisition; pan-tilt-zoom camera; light stripe projection.

Paper 080693R received Sep. 3, 2008; revised manuscript received Jan. 6, 2009; accepted for publication Jan. 15, 2009; published online Mar. 10, 2009.

1 Introduction

Biometrics is a method for automatic individual identification using a physiological or behavioral characteristic.1 The value of a biometric system can be measured by five characteristics: robustness, distinctiveness, availability, accessibility, and acceptability.1 Robustness refers to the fact that individual biometric features do not change over time and can be used repeatedly. Distinctiveness refers to the fact that the features of each individual differ, with great variation. Availability refers to the fact that ideally all people have certain biometric features, in multiples. Accessibility refers to how easily a biometric feature can be acquired, and acceptability refers to whether people regard the capturing of their biometric features as nonintrusive.

In terms of the above characteristics, iris recognition is a powerful biometric technology for user authentication because it offers high levels of robustness, availability, and distinctiveness. For robustness, it has been proven that iris structures remain unchanged with age.2 A person's irises generally mature during the first 2 years of life, and then healthy irises vary little for the rest of that person's life.2 For availability, every person has an iris with complex patterns formed by multilayered structures.2 Also, each individual has two distinguishable left and right iris patterns. The distinctiveness of the iris is shown by its unique and abundant phase structure. In 2 million iris comparisons,3 binary codes extracted from iris images showed 244 independent degrees of freedom. This implies that the probability of two different irises agreeing by chance in more than 70% of their phase sequences is about 1 in 7 billion.4

The level of accessibility and acceptability of iris recognition, however, is lower than that of other biometric technologies such as face, fingerprint, or gait recognition. This is mainly because it is difficult to acquire iris images. In terms of accessibility, iris image acquisition is not simple; conventional iris recognition systems usually require a well-trained operator, a cooperative user, adjusted equipment, and well-controlled lighting conditions.5 Lack of any of these factors will lead to user inconvenience as well as poor-quality iris capture. According to a report6 on participants' experience with various biometric authentication systems at an airport in 2005, common complaints about iris recognition systems concerned positioning problems and the amount of time taken.

Conventional iris recognition systems such as IrisAccess3000 (Ref. 7) and BMET300 (Ref. 8) generally require high user cooperation during iris image acquisition. Users must adjust their eye position to place their eyes in an acceptable position to provide the iris recognition system with an in-focus iris image.
This positioning problem comes from the fact that the capture volume of conventional systems is small. The capture volume refers to the volume within which an eye must be placed for the system to acquire useful iris images.9 Once the iris of the user is placed in the capture volume, the user should stay in that position without any motion until the system acquires a good-quality image. Since the capture volume is usually formed at a close distance from the camera, users face the system closely. The positioning takes a lot of time for users, and it is likely to fail with untrained users who are relatively unfamiliar with the system. Some children and disabled users often find it difficult to follow the given instructions.

Toward convenient iris recognition systems for users and civil applications such as immigration procedures at airports, which generally target untrained users, two types of new iris image acquisition systems have been proposed. One is a portal system, and the other is based on a pan-tilt-zoom (PTZ) camera. The portal system, called Iris-on-the-Move (IOM) and suggested by Sarnoff Corporation,9 enables the capture of iris images while users walk through an open portal. IOM has a throughput of up to 20 persons/min when users pass through the portal at a normal walking pace of 1 m/s. However, a position constraint remains because its capture volume is as small as conventional ones: 20 × 20 × 10 cm (width × height × depth). Therefore, iris image acquisition fails if a user's irises do not pass through the small capture volume. In addition, the capture volume cannot fully cover the height variations of users; children or very tall users may not be covered. The authors suggest a modular component to expand the height of the capture volume: two cameras stacked vertically expand it by approximately 37 cm, and four cameras expand it up to 70 cm. However, a stack of multiple high-resolution cameras would increase the cost in proportion to the number of cameras.

A PTZ camera can increase the capture volume greatly. Panning and zooming cover various positions of users, and tilting covers height variation of the user. Early attempts using a PTZ function are reported by Oki IrisPass-M (Ref. 10), Sensar R1 (Ref. 11), and Mitsubishi Corporation.12 They are based on a wide-angle camera or a stereo vision system for locating the eye, and a narrow-angle camera for capturing the iris image. For fast control of the PTZ camera, reconstructing the 3-D position of the iris is essential. First, the 3-D coordinates can determine the panning and tilting angles as well as the zoom factor. Second, depth information between the iris and the camera, taken from the 3-D coordinates, plays an important role in narrowing the search range for the optimal focus lens position. The system from Mitsubishi Corporation uses a single wide-angle camera to detect a face, which leads to adaptive panning and tilting, and estimates depth by disparity among facial features, which obviously takes a lot of time to get clear iris images. Sensar R1 uses stereo matching for 3-D reconstruction. However, stereo matching is complicated, and it takes a long time to detect the corresponding points between a pair of images. The accuracy of the depth estimation can be degraded if users are far from the camera, due to errors in feature point extraction. To increase the accuracy of depth estimation of irises at a distance, the disparity of the stereo cameras should be large, and this will increase the system size.

Recently, Retica Eagle-Eyes,13 the Sarnoff IOM Drive-Through system,14 and the AOptix system15 have been introduced as PTZ-based systems. Eagle-Eyes is an iris recognition system with a large capture volume (3 × 2 × 3 m) and a long standoff (3 to 6 m). However, because of its system complexity (it consists of four cameras: a scene camera, a face camera, and left and right iris cameras), the cost and size of the system would be high. In addition, the capture time of the system is 6.1 s on average for a stationary subject, which is long compared to previous systems, and users may perceive the image acquisition as intrusive. The organization and specifications of the other systems are still unknown.

In this paper, we propose a novel iris image acquisition system based on a PTZ camera guided by a light stripe projection. A telephoto zoom lens with a pan-tilt unit expands the capture volume greatly: 120 deg (width) × 1 m (height) × 1.5 m (depth). Thus, users do not need to make an effort to adjust their position. Due to the PTZ ability, just one high-resolution camera is required to cover the whole capture volume. For the fast PTZ control necessary in a practical application scenario, we propose a 3-D estimation method for the face based on a light stripe projection. This contributes to fast face search and determination of the proper zoom and focus lens positions. Since the light stripe projection gives the horizontal position of a user in real time, the pan angle is always determined immediately, and the user's face can be found by searching a 1-D vertical line rather than a 2-D area of the whole capture volume. Once the face is detected, the depth between the face and the PTZ camera is calculated with high accuracy, and it gives the initial zoom and focus lens positions based on relationships among distance, zoom lens position, and focus lens position under fixed magnification. We assumed minimally constrained user cooperation: standing naturally in the capture volume and staring at the PTZ camera for 1 to 2 s during autofocusing. Under this assumption, we examined the feasibility of the proposed system in practical situations. The proposed system makes the following three contributions compared with previous works: (1) the capture volume is greatly increased by using a PTZ camera guided by a light stripe projection; (2) the PTZ camera can track a user's face easily in the large capture volume based on 1-D vertical face searching from the user's horizontal position obtained by the light stripe projection; and (3) zooming and focusing on the user's irises at a distance are accurate and fast, using the 3-D position of the face estimated by the light stripe projection and the PTZ camera. This paper realizes a PTZ-based iris image acquisition system, the most popular approach to the next generation of iris recognition systems, and gives technical descriptions, so that it can be a helpful reference for researchers in the iris recognition field.

The rest of this paper is organized as follows. Section 2 describes the overall procedure of the proposed system and outlines some design issues in terms of acceptability and accessibility. Section 3 presents a method of 3-D face coordinate determination based on a light stripe projection. Section 4 describes zooming and focusing methods for the PTZ camera to get useful iris images based on the depth estimated in Sec. 3. Section 5 gives experimental results on the feasibility of the proposed system, the availability of the iris images captured by the system for recognition, and the accuracy and time required in a practical application scenario. Finally, Sec. 6 provides conclusions.

2 System Overview

The proposed system aims to acquire useful iris images under an unconstrained user environment at a distance. The unconstrained user environment means the following three features. First, a large capture volume, as shown in Fig. 1, is created by a PTZ camera, which resolves the positioning problem. Second, both iris images of a user are obtained even when the user makes a small movement, thanks to the high-resolution image sensor incorporated in the PTZ camera. Third, the processing time is made acceptable for users by using the light stripe projection, which estimates the user's position in real time. Figure 2(a) shows the system configuration, which consists of a PTZ camera with a high-resolution image sensor, a wide-angle camera for detecting light stripes, a light plane projector, and near-IR (NIR) illuminators for imaging the rich texture of irises.

To control the PTZ camera accurately and quickly to capture a user's iris images in the large capture volume, a 3-D face coordinate estimation method based on the light stripe projection determines initial values for panning, tilting, zooming, and focusing. Thus, it helps narrow the ranges for finding the optimal values of PTZ control. Figure 2(b) presents a flowchart of the iris image acquisition procedure of the proposed system. The light stripe projection gives the horizontal position of a user in real time using light stripes on the user's leg, and the horizontal position directly determines the pan angle. Thus, the PTZ camera can turn toward the user and track the user when the user is in motion. The user's face is found on the 1-D vertical line normal to the ground while the PTZ camera tilts upward. Once the face is detected, the distance between the PTZ camera and the face is calculated from the estimated 3-D face coordinates. Using preestimated relationships among distance, zoom lens position, and focus lens position with a fixed magnification, the initial zoom and focus lens positions are determined. Due to the high accuracy of the initial position of each lens, only a small amount of focus refinement is required to get in-focus iris images. Since the height of the user is fixed after the 3-D face coordinates are determined, the face can be tracked using the newly updated horizontal position and the height. Each part of the proposed system is designed to maximize user convenience, be economical, and work feasibly in practical applications.

Fig. 1 Large capture volume of the proposed system.

2.1 PTZ Camera

One part of our proposed system is the PTZ camera set, which consists of a pan-tilt unit, a telephoto zoom lens, and a high-resolution image sensor. The ranges for panning, tilting, and zooming should cover the entire target capture volume. The pan and tilt ranges of the pan-tilt unit are 360 and 90 deg, respectively, which are sufficient for our target capture volume. Also, the speed of the pan-tilt unit is fast enough to track a walking user; the pan and tilt speeds are 64 and 43 deg/s, respectively.

The telephoto zoom lens should cover the depth range of the capture volume and have the desired standoff. The lens should zoom in on the irises of users who are in the target depth range of the capture volume, 1.5 to 3 m, so that the images have enough resolution for iris recognition. According to iris image quality standards,16 the diameter of iris images must be greater than 150 pixels to be considered at least medium quality. Given that the diameter of the iris, d_iris, is 1 cm and that of the image of the iris, d_image, is 150 pixels, the magnification M is 0.111 from Eq. (1) when the cell size of the image sensor is 7.4 × 7.4 μm. Then, the required focal length can be estimated using Eq. (2):

$$M = \frac{d}{D} = \frac{d_{image}}{d_{iris}}, \quad (1)$$

$$\frac{1}{f} = \frac{1}{D} + \frac{1}{d}, \quad (2)$$

where f represents the focal length, d represents the image-to-lens distance, and D represents the user-to-lens distance. In the proposed system, zoom lenses with focal lengths varying from 149.865 to 299.730 mm are generally required. The telephoto zoom lens used here has a focal length17 of 70 to 300 mm, which guarantees that the resolution of the iris images is at least 150 pixels in diameter in the target capture volume. In addition, the standoff is defined by the closest focusing distance of the zoom lens, a physical lens characteristic; the lens cannot focus on objects closer than the closest focusing distance. According to the closest focusing distance of the lens, the standoff is 1.5 m.

Fig. 2 System overview: (a) system configuration and (b) flowchart.
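As a quick check of these numbers, the following Python sketch (illustrative, not from the paper) reproduces the quoted focal-length range from Eqs. (1) and (2); the function name and structure are our own.

```python
# Sketch: required focal length over the capture volume, from Eqs. (1)-(2).
# Assumes the values stated in the text: 150-pixel iris diameter,
# 7.4-um sensor cells, and a 1-cm physical iris diameter.

def required_focal_length_mm(D_m: float) -> float:
    """Focal length (mm) that images a 1-cm iris onto 150 pixels at D_m meters."""
    cell_m = 7.4e-6                      # sensor cell size
    d_image_m = 150 * cell_m             # iris diameter on the sensor
    d_iris_m = 0.01                      # physical iris diameter
    M = d_image_m / d_iris_m             # Eq. (1): magnification, = 0.111
    d = M * D_m                          # Eq. (1): image-to-lens distance
    f = 1.0 / (1.0 / D_m + 1.0 / d)      # Eq. (2): thin-lens equation
    return f * 1000.0

for D in (1.5, 3.0):
    print(f"D = {D} m -> f = {required_focal_length_mm(D):.3f} mm")
# Prints 149.865 mm at 1.5 m and 299.730 mm at 3 m, matching the text.
```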

The high-resolution image sensor of the PTZ camera should capture useful iris images with enough resolution, even at a distance. Most iris image acquisition systems operating at a distance use a strategy of capturing a full-face image with a high-resolution camera instead of capturing just an iris image. One advantage of this strategy is that both iris images can be obtained from a given high-resolution face image, and two-iris matching shows better recognition performance than one-iris matching. Another advantage is that at least one iris remains in the captured image even when users move slightly. To get a full-face image guaranteeing that the diameter of each iris image is 150 pixels, the image resolution on each side must be at least 1950 pixels if the width of a face is around 15 cm and the diameter of the iris is around 1 cm. The resolution of the high-resolution camera in the proposed system is18 4 megapixels (2048 × 2048 pixels).

NIR illuminators radiating light in the 700- to 900-nm band are necessary because under NIR even dark brown irises reveal rich texture.19 However, high-power illuminators are required to obtain useful iris images for recognition at a distance, because the large f-number of the zoom lens reduces the light energy incident on the image sensor. The f-number refers to the ratio of focal length to effective aperture diameter.20 In this case, the large f-number is caused by the long focal length and small effective aperture of the zoom lens. The long focal length of the zoom lens is required to zoom in on an object from a distance. The effective aperture is kept small to hold the large depth of field necessary for robust focusing. In general, the power of an NIR illuminator must be selected to balance the trade-off between obtaining sufficiently bright images and guaranteeing eye safety. The overall intensity variation of captured images according to the changing zoom factor is compensated by adjusting camera gain and shutter speed based on the distance between the camera and the user.

2.2 Light Stripe Projection

Another part of the proposed system is the implementation of a light stripe projection. It consists of a light plane projector and a wide-angle camera. The projected light plane should cover the horizontal range of the capture volume, which is 120 deg in width and 1.5 m in depth. The light plane projector generates an NIR light plane with a wavelength of 808 nm, which is invisible to the human eye. The angle of the light plane is 120 deg, and it is set up horizontally at a height of around 20 cm to illuminate the given user's leg, as shown in Fig. 3. The intersection of the light plane with an object surface is visible as a light stripe in the image.21 The wide-angle camera detects the light stripes on the user's leg. The field of view (FOV) of the wide-angle camera is coincident with the angle of the light plane so as to observe the whole light plane area. A visible-cut filter is attached to the wide-angle camera to block visible light from other light sources such as indoor illuminators and sunlight.

3 Estimation of 3-D Face Coordinates

Estimating the 3-D coordinates of a given user's face consists of three phases: light stripe detection, horizontal position estimation, and vertical position estimation. Light stripe projection provides the horizontal position, which determines the panning angle directly. It enables the PTZ camera to track the user horizontally until the user stops for iris recognition. Then, the face is found while the PTZ camera tilts along a 1-D line normal to the ground. Based on the horizontal position of the user and the tilt angle at which the face appears in the center of the image, the 3-D coordinates of the face are determined in the PTZ camera coordinate system.

Fig. 3 Detection of light stripes on the given user's leg: (a) background image with light stripes, (b) detected background light stripes in the ROI, (c) a new wide-angle camera image with the user, and (d) the light stripes on the user's leg detected by CC-based background subtraction.
3.1 Light Stripe Detection

Light stripe projection is a 3-D reconstruction technique based on structured lighting. By projecting a light plane into an object scene, the 3-D coordinates of image points on the light stripes can be recovered from a single image.21 In general, light stripe projection is implemented in the following three steps. The first step is detecting all light stripes in the wide-angle camera images. These light stripes include both those on the background objects and those on a given user's leg, as shown in Figs. 3(a) and 3(c). The second step is distinguishing the light stripes on the user's leg and transforming the center point of those light stripes into undistorted image coordinates. The third step is reconstructing the 3-D coordinates of the center point in the wide-angle camera coordinate system.

Light stripes in a wide-angle camera image are detected by convolving each image column with a 1-D Laplacian of Gaussian (LoG) mask. This is based on the assumption that a light stripe appears at one point in each image column because the light plane is scattered horizontally. A point in a column is regarded as a light stripe point if it has the maximum LoG response in the column and the response is higher than a given threshold. Figure 3(b) presents the light stripes detected from Fig. 3(a) within the region of interest (ROI), which corresponds to the horizontal region of the capture volume.

Among the detected light stripes, those on the given user's leg are extracted by connected-component (CC)-based background subtraction, which eliminates the light stripes on background objects. We assume that the background light stripe image is obtained in advance. If the light stripe points in adjacent columns are neighbors, they are regarded as a CC. The CC-based background subtraction process removes the CCs in a new input image that overlap partially or totally with a background CC in the same location. Consequently, the light stripes remaining in the image are considered to be the light stripes on an approaching user's leg. Figure 3(d) shows the user's light stripes detected from a new input image, Fig. 3(c), using CC-based background subtraction. CC-based background subtraction is more robust than pixel-based background subtraction, which can produce strong errors even if the background or camera configuration changes slightly.

The center point of the light stripes on the user's leg is used to estimate that user's horizontal position. To compensate for radial distortion of the wide-angle camera, the coordinates of the center point are rectified by the radial distortion refinement method addressed in Ref. 22. In this case, the rectification process is fast, since the light stripes are first detected in the raw image with radial distortion and only a single point, the center point of the light stripes on the user's leg, is then transformed into undistorted coordinates.
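To make the column-wise detection step concrete, here is a minimal sketch, assuming a grayscale numpy image; the mask size, sigma, and threshold are illustrative values, since the paper does not give its own.

```python
# Minimal sketch of the column-wise light stripe detection described above.
# Assumptions: `frame` is a grayscale wide-angle image as a 2-D numpy array.
import numpy as np

def log_mask_1d(size: int = 15, sigma: float = 2.0) -> np.ndarray:
    """1-D Laplacian-of-Gaussian mask, zero-mean so flat columns give no response."""
    x = np.arange(size) - size // 2
    g = np.exp(-x**2 / (2.0 * sigma**2))
    log = (x**2 / sigma**4 - 1.0 / sigma**2) * g
    return log - log.mean()

def detect_stripe_points(frame: np.ndarray, threshold: float = 50.0) -> dict:
    """Return {column: row} of the strongest stripe response in each column."""
    mask = -log_mask_1d()  # negated so a bright stripe yields a positive peak
    points = {}
    for col in range(frame.shape[1]):
        response = np.convolve(frame[:, col].astype(float), mask, mode="same")
        row = int(np.argmax(response))
        if response[row] > threshold:  # keep only confident stripe points
            points[col] = row
    return points
```

Grouping neighboring detections in adjacent columns into connected components then supports the CC-based background subtraction described above.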
3.2 Horizontal Position Estimation

The key idea of the light stripe projection technique for 3-D reconstruction is to intersect the projection ray of the examined image point with the light plane.21 In Fig. 4(a), the reconstructed ray passing through both the image point p(x, y) and the origin of the wide-angle camera coordinate system meets the light plane at a certain point P(X_wide, Y_wide, Z_wide). The 3-D coordinates of the intersection are obtained23 by Eq. (3):

$$X_{wide} = \frac{x\,b\tan\alpha\cos\beta}{f - \tan\alpha\,(x\sin\beta + y\cos\beta)},$$
$$Y_{wide} = \frac{y\,b\tan\alpha\cos\beta}{f - \tan\alpha\,(x\sin\beta + y\cos\beta)},$$
$$Z_{wide} = \frac{f\,b\tan\alpha\cos\beta}{f - \tan\alpha\,(x\sin\beta + y\cos\beta)}, \quad (3)$$

where α represents the angle between the light plane and the Y_wide axis, β represents the angle between the light plane and the X_wide axis, b represents the baseline, f represents the focal length of the wide-angle camera, and (x, y) represents the rectified center point of the light stripe.

Fig. 4 Reconstruction of the 3-D coordinates of the light stripe on the user's leg and its transformation to the PTZ camera coordinate system:24 (a) general light stripe projection geometry, in this case with β = 0; and (b) coordinate transformation from the wide-angle camera coordinate system to the PTZ camera coordinate system.

The 3-D coordinates corresponding to (x, y) are reconstructed directly if the focal length f of the camera and the geometric parameters between the camera and the light plane, α, b, and β, are known. We assumed that β = 0, since the light plane is set up parallel to the ground. Then, Eq. (3) reduces to

$$X_{wide} = \frac{x\,b\tan\alpha}{f - y\tan\alpha}, \qquad Y_{wide} = \frac{y\,b\tan\alpha}{f - y\tan\alpha}, \qquad Z_{wide} = \frac{f\,b\tan\alpha}{f - y\tan\alpha}. \quad (4)$$

The remaining parameters, α and b, are obtained by least-squares estimation using the last equation of Eq. (4) with collected data pairs of the distance to an object, Z_wide, and the y coordinate of its light stripe.24 Then, the reconstructed 3-D coordinates of a scene point on the light stripe, P(X_wide, Y_wide, Z_wide), are obtained exactly from the image of P, p(x, y). This implies that 3-D reconstruction based on light stripe projection is a real-time operation.
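A direct transcription of Eq. (4) into code is straightforward; the sketch below assumes α and b have been calibrated as described, f is the wide-angle camera's focal length in pixel units, and (x, y) is the rectified stripe center point measured from the principal point.

```python
# Direct transcription of Eq. (4). All parameter names are ours.
import math

def reconstruct_on_light_plane(x: float, y: float, f: float,
                               alpha: float, b: float) -> tuple:
    """Return (X_wide, Y_wide, Z_wide) for an image point on the light stripe."""
    t = math.tan(alpha)
    denom = f - y * t          # shared denominator of Eq. (4)
    return (x * b * t / denom,
            y * b * t / denom,
            f * b * t / denom)
```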

The reconstructed point P(X_wide, Y_wide, Z_wide) in the wide-angle camera coordinate system is transformed into the PTZ camera coordinate system. The PTZ camera coordinate system is a rigid transformation of the wide-angle camera coordinate system; it is rotated by φ around the X_wide axis and then translated by d_Zwide-PTZ in the direction of the Z_PTZ axis, as shown in Fig. 4(b). Equation (5) shows the transformation from (X_wide, Y_wide, Z_wide) to (X_PTZ, Y_PTZ, Z_PTZ), where h_PTZ represents the height of the PTZ camera from the ground:

$$\begin{bmatrix} X_{PTZ} \\ Y_{PTZ} \\ Z_{PTZ} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\phi & -\sin\phi \\ 0 & \sin\phi & \cos\phi \end{bmatrix} \begin{bmatrix} X_{wide} \\ Y_{wide} \\ Z_{wide} \end{bmatrix} + \begin{bmatrix} 0 \\ h_{PTZ} \\ d_{Zwide\text{-}PTZ} \end{bmatrix}. \quad (5)$$

The height of the light plane, Y_PTZ, is irrelevant in this case. As a result, (X_PTZ, 0, Z_PTZ) represents the horizontal position of the user.
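Eq. (5) is a plain rigid transform; a minimal sketch, with the rotation angle written as phi and the calibrated offsets as assumed parameter names:

```python
# Minimal sketch of Eq. (5): wide-angle camera frame -> PTZ camera frame.
# h_ptz and dz are the calibrated height and Z-offset; names are ours.
import numpy as np

def wide_to_ptz(P_wide: np.ndarray, phi: float, h_ptz: float, dz: float) -> np.ndarray:
    """Map a 3-vector from the wide-angle camera frame to the PTZ camera frame."""
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0, np.cos(phi), -np.sin(phi)],
                  [0.0, np.sin(phi),  np.cos(phi)]])
    return R @ P_wide + np.array([0.0, h_ptz, dz])
```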
3.3 Vertical Position Estimation

Figure 5(a) illustrates the overall panning and tilting control methodology of the PTZ camera for finding the 3-D coordinates of the user's face. Based on the horizontal position of the user, (X_PTZ, 0, Z_PTZ), the panning angle θ_pan is determined directly by

$$\theta_{pan} = \tan^{-1}\!\left(\frac{X_{PTZ}}{Z_{PTZ}}\right). \quad (6)$$

Fig. 5 Estimation of panning angle, tilting angle, and distance between the PTZ camera and the face: (a) 3-D coordinate estimation of the user's face and (b) 1-D face detection during stepwise tilting.24

Since the pan angle based on the horizontal position is given in real time, the PTZ camera is able to track the user horizontally.

When the user stops, the face is found while the PTZ camera tilts. The tilting angle that locates the face in the center of the image is found by coarse and fine searching procedures. In the coarse searching phase, the face is detected in a few images obtained while the PTZ camera tilts stepwise. Stepwise tilting partitions the height of the capture volume exclusively, as shown in Fig. 5(b). The angle of a tilting step and the number of steps are determined by the FOV of the PTZ camera and the height of the capture volume, so that the PTZ camera captures a different view at each tilting angle while covering the entire range of height variations. This is more efficient than continuous tilting, which covers duplicated views. If the face is detected at a certain stepwise tilting angle using the AdaBoost algorithm,25 the panning and tilting angles are refined to place the face in the image center.

The final tilt angle θ_tilt determines the distance D between the PTZ camera and the user's face as follows:

$$D = \frac{Z_d}{\cos\theta_{tilt}}, \quad (7)$$

where $Z_d = (X_{PTZ}^2 + Z_{PTZ}^2)^{1/2}$.
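Eqs. (6) and (7) then reduce to two one-line computations; the sketch below uses atan2 for sign robustness, which is equivalent to Eq. (6) for targets in front of the camera.

```python
# Sketch of Eqs. (6) and (7); function names are ours.
import math

def pan_angle(X_ptz: float, Z_ptz: float) -> float:
    return math.atan2(X_ptz, Z_ptz)              # Eq. (6)

def camera_to_face_distance(X_ptz: float, Z_ptz: float, tilt: float) -> float:
    Z_d = math.hypot(X_ptz, Z_ptz)               # Z_d = (X^2 + Z^2)^(1/2)
    return Z_d / math.cos(tilt)                  # Eq. (7)
```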

4 Zoom and Focus Control

The estimated distance between the PTZ camera and the user's face determines the initial zoom and focus lens positions, which enables us to find the optimal focus lens position quickly. Finally, a focus refinement process gives in-focus iris images.

4.1 Initial Zooming and Focusing

Given a level of magnification, the desired zoom and focus lens positions are determined if the distance between the camera and the object is known. The magnification M, which is fixed so that iris images have enough resolution, yields the image-to-lens distance d from the user-to-lens distance D in Eq. (1). Since d maps one-to-one to the zoom lens position Zoom, D is eventually mapped one-to-one to Zoom. Given the D and Zoom values, the optimal focus lens position Focus, which produces an in-focus image, is determined.

To give the initial zoom and focus lens positions at an arbitrary distance D, the functional relationships (1) between D and Zoom and (2) between Zoom and Focus are approximated from collected observations. By changing the distance D of a given user in 5-cm increments from the PTZ camera within the capture volume, the optimal zoom and focus lens positions that satisfy the conditions for iris images in terms of resolution and sharpness were recorded at each distance. The optimal zoom lens position at each distance was manually adjusted so that the diameter of the iris image was 150 pixels. The optimal focus lens position was searched automatically by assessing the sharpness of the iris image sequence continuously captured while the focus lens position moved around the optimal focus lens position. The focus lens position at which the image had the highest focus measure in the sequence was chosen as the optimal focus lens position. The iris image assessment was based on the focus measure kernel introduced in Ref. 26. The observations of the optimal zoom lens position at each distance are shown in Fig. 6(a) as a dotted line, and those of the optimal focus lens position at each zoom lens position are shown in Fig. 6(b) as a dotted line. A unit "step" of the zoom and focus lens position refers to a step size of the stepping motors, which rotate the zoom ring and the focus ring of the zoom lens. The size of a step follows from the fact that rotating the zoom and focus lens fully requires 30,000 and 47,000 steps, respectively.

Based on the preceding observations, the relationship between D and Zoom is modeled as Eq. (8); Zoom is inversely proportional to D. The unknown parameters p1 and p2 are estimated using singular value decomposition (SVD). Similarly, the relationship between Zoom and Focus is modeled as Eq. (9); Zoom and Focus have a linear relationship. The parameters q1 and q2 are found using least-squares estimation. The fitting results are shown in Figs. 6(a) and 6(b) as solid lines, respectively:

$$\mathrm{Zoom} = p_1 - \frac{p_2}{D}, \quad (8)$$

$$\mathrm{Focus} = q_1\,\mathrm{Zoom} + q_2. \quad (9)$$
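The two calibration fits can be sketched as follows, assuming arrays of observed (D, Zoom, Focus) triples from the 5-cm sweep described above; note that numpy's lstsq solves the linear system via SVD, the estimator named in the text.

```python
# Sketch of the Eq. (8)/(9) calibration fits; function names are ours.
# D, zoom, focus are equal-length numpy arrays of observations.
import numpy as np

def fit_relationships(D, zoom, focus):
    """Fit Zoom = p1 - p2/D and Focus = q1*Zoom + q2 by least squares."""
    A = np.column_stack([np.ones_like(D), -1.0 / D])
    (p1, p2), *_ = np.linalg.lstsq(A, zoom, rcond=None)   # Eq. (8), via SVD
    q1, q2 = np.polyfit(zoom, focus, 1)                   # Eq. (9)
    return (p1, p2), (q1, q2)

def initial_lens_positions(D_est, p, q):
    """Initial zoom/focus lens positions for an estimated distance D_est."""
    zoom = p[0] - p[1] / D_est                            # Eq. (8)
    return zoom, q[0] * zoom + q[1]                       # Eq. (9)
```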

The D-based Zoom estimation proves to be more advantageous than the Zoom-based D estimation for focus refinement; that is, the former produces a narrower search range for the optimal focus lens position than the latter. Any error in the estimated distance D propagates to an error in the Focus determined through the functional relationships, and minimizing this error propagation is necessary to confine the optimal focus lens position to a narrow search range for a fast focus refinement process. The less severe error propagation of D-based Zoom estimation can be explained by the fundamental theorem for functions of a random variable: the probability density function (pdf) of the output of a function is inversely proportional to the magnitude of the derivative of that function.27 Figure 7 compares the two cases of error propagation. In D-based Zoom estimation, the uncertainty of the output Zoom is reduced, as shown in Fig. 7(a), since Zoom is inversely proportional to D. On the other hand, in Zoom-based D estimation, the uncertainty of D is increased, as shown in Fig. 7(b). As a result, an accurately estimated D is preferred for fast focus refinement.

Fig. 6 Calibrated initial zoom and focus lens positions. Functional relationships (a) between D and Zoom and (b) between Zoom and Focus. The dotted lines indicate measured observations and the solid lines indicate estimated relationships.24

4.2 Focus Refinement

The initial focus lens position estimated from D is usually not sufficiently accurate, because D contains errors from horizontal position estimation and tilting angle determination. Focus refinement is accomplished by searching for the optimal focus lens position in the direction that maximizes the focus measure of the captured iris images. Figure 8(a) shows the focus measure of an iris image sequence captured while the focus lens position moves around the initial focus lens position. The ridge of the focus measure curve is regarded as the optimal focus lens position. The maximum value of the focus measure is 100. As shown in Fig. 8(b), the iris image obtained at the initial focus lens position shows a high focus measure, and the initial focus lens position is near the optimal focus lens position.

For the focus measure of the iris images, the eye regions are segmented from the full-face images. One simple eye detection method is to find the specular reflections on the eyes generated by the NIR illuminators. Specular reflections usually appear as bright spots with high absolute gradient values, surrounded by low gray values. The cropped iris regions around the specular reflections are convolved with the 2-D focus assessment kernel.26

The focus refinement algorithm in Ref. 28 consists of two phases, the coarse and the fine searching phase, as shown in Fig. 9. Let δ be a single step size of the focus lens position for the fine searching phase. First, the coarse searching phase roughly finds the optimal lens position with a large step size using the gradient-ascent method and narrows the search range for the following fine searching phase. In this stage, we set the step size of the coarse searching to 4δ. The focus lens moves by 4δ synchronized with the frame rate, and the focus of the iris region in each captured image is assessed. The direction in which the focus lens moves in the next step is chosen to increase the focus measure. When the focus measure reaches its ridge, the optimal focus lens position lies within a confined range of 4δ. Second, in the fine searching phase, the optimal focus lens position is found by moving the focus lens precisely: the focus of the iris images is assessed while the focus lens position moves by δ across the confined range in the direction opposite to that of the coarse searching phase. An in-focus image with a maximum focus measure is then selected from the sequence.

Fig. 9 Focus refinement algorithm. In the coarse refinement phase, the ridge of the focus measure is found by moving the focus lens by 4δ. In the fine refinement phase, the iris images are captured while the focus lens position moves by δ, and they are assessed in terms of focusing. The optimal focus lens position refers to the position at which the focus measure of the captured iris image is at the maximum value.
5 Experimental Results

The proposed iris image acquisition system was evaluated on two characteristics: acceptability and accessibility. In terms of acceptability, the conditions for a convenient environment (a large capture volume, tolerance to natural movements, and the time required for iris image capture) were verified by means of a feasibility test on the iris images captured by the system and a time evaluation on various users who participated in using the system. In terms of accessibility, the accuracy of the panning, tilting, zooming, and focusing control of the PTZ camera guided by light stripe projection was analyzed.

Fig. 7 Error propagation of (a) D-based Zoom estimation and (b) Zoom-based D estimation. Initial amounts of error in (a) and (b) are the same length (dotted box). Error propagation in (a) is less severe than that in (b). The propagated amount of error in each case is illustrated as a striped box.24

Fig. 8 Focus measure of an iris image sequence obtained by changing the focus lens position, and the initial focus lens position estimated by the proposed method: (a) focus measure of the entire image sequence, where the asterisk indicates the focus measure of the iris image at the initial focus lens position; (b) the enlarged dotted box in (a) (Ref. 24).

5.1 Feasibility of the Proposed Unconstrained User Environments

The proposed system is designed to eliminate positioning problems as well as to be tolerant of users' natural movements while they are standing with natural posture. These requirements are achieved by providing the large capture volume and by capturing face images at high resolution, respectively. The capture volume was verified by a feasibility test of whether the iris images acquired in the capture volume were usable for iris recognition. The robustness to user movements was analyzed by two factors: first, whether the high-resolution camera was able to capture irises under left-and-right movements, and second, whether the depth of field of the PTZ camera was large enough to cover back-and-forth movements.

The feasibility of the captured iris images was examined by calculating the Hamming distance to enrolled iris images of the same identity. The enrolled iris images refer to images acquired by a laboratory-developed iris acquisition camera that captures focused iris images with more than 200 pixels in diameter at a distance of 15 cm under NIR illuminators of 750 and 850 nm, which guarantees good-quality iris images for recognition. If the Hamming distance between an enrolled image and an image captured by the proposed system is lower than a given threshold, the captured iris image is identified as genuine; in other words, the image can be regarded as feasible for iris recognition. In the experiment, the iris codes were extracted by the Gabor wavelet,26 and the well-known19 Hamming distance threshold of the algorithm is 0.32.
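For reference, the genuine/impostor decision described here amounts to a masked fractional Hamming distance; a minimal sketch, assuming boolean code and occlusion-mask arrays (the paper does not specify its code layout):

```python
# Sketch of a masked fractional Hamming distance test; the 0.32 threshold
# follows the text, the array layout is an assumption.
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b) -> float:
    """Fraction of disagreeing bits among bits valid in both codes."""
    valid = mask_a & mask_b                        # compare only trusted bits
    disagreements = np.logical_xor(code_a, code_b) & valid
    return disagreements.sum() / max(valid.sum(), 1)

def is_genuine(hd: float, threshold: float = 0.32) -> bool:
    return hd < threshold
```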

For the feasibility test, the iris images of a user were collected by moving the position of the user in the capture volume. The depth from the PTZ camera to the user was changed in 5-cm increments within the range of 1.4 to 3 m, which included the depth of the proposed capture volume (i.e., 1.5 to 3 m). At each position, the zoom lens position was determined to make the diameter of the iris images 150 pixels. Then, the iris images were captured continuously while the focus lens position moved from −1000 steps to +1000 steps around the optimal focus lens position. Figure 10(b) shows several iris images that were captured while the focus lens position changed when the user was at a distance of 2 m. The focus lens positions in this range produced fully defocused iris images, in-focus iris images, and fully defocused iris images, in turn. The sequence of iris images captured at each distance was compared to the enrolled iris images in terms of the Hamming distance. Figure 10(a) shows an example of the Hamming distance distribution of the iris image sequence with respect to the focus lens position when the user was at 2 m. In this figure, we found the available range of focus lens positions that produced iris images with a Hamming distance lower than the threshold. We call this range the depth of focus.

5.1.1 Large capture volume

Based on the Hamming distance evaluation results at each distance, we were able to verify the depth of the capture volume and measure the depth of field and the depth of focus of the system. The minimum and maximum focus lens positions of the available range at each distance are marked in Fig. 11. The space between the minimum and maximum focus lens positions represents the range in which the iris image has a Hamming distance lower than the threshold. In this figure, the depth of the proposed capture volume, 1.5 to 3 m, was verified as feasible; iris images acquired in the capture volume were useful for recognition because the camera was able to find the optimal focus lens position to acquire good-quality iris images in terms of recognizability.

Fig. 10 (a) Hamming distance distribution of an iris image sequence of a user at 2 m with respect to focus lens position. The depth of focus at 2 m was obtained from this. (b) Some examples29 of iris images captured at different focus lens positions at a distance of 2 m.

Fig. 11 Depth of the capture volume, depth of field, and depth of focus.29
5.1.2 Tolerance of natural movements of users

The depth of field of the proposed system was able to cope with back-and-forth user movements while the user was standing with natural posture. The depth of field refers to the permissible depth variation of the user under fixed lens conditions.29 This means that the iris images of a user captured while the user moves within the depth of field are still available for recognition without additional focusing controls. The depth of field at each distance can be estimated from Fig. 11. Figure 12(a) shows the estimated depth of field with respect to distance. Note that the graph in Fig. 12(a) looks continuous because the curve-fitting results of the two lines in Fig. 11 were used for the evaluation of the depth of field. The depth of field tended to increase as the distance between the camera and the user increased. In the capture volume, the depth of field was 5 to 9.5 cm, which covered the inevitable movements of users during the iris image acquisition phase.

While the depth of field shows the system's tolerance to back-and-forth user movements, the strategy of capturing full-face images with the high-resolution camera instead of capturing only iris images achieves tolerance to left-and-right movements. In normal situations, both iris images are cropped from full-face images. Even if the user's position shifts during the process, at least one iris usually still exists in the image. However, if a fully zoomed iris image is captured in 640 × 480 pixels with a standard camera, precise panning and tilting are required to capture the eye regions; in general, this means the iris can be lost from the image even if the user moves slightly.

We compared the motion tolerance of capturing a full face with that of capturing an eye when the system was exposed to natural user movements. When the user appeared in the capture volume, the proposed system captured both iris images from a high-resolution full-face image. Then, the user kept the initial position and stood with natural posture for a minute. At the same time, the PTZ camera captured face images every second without any panning, tilting, zooming, or focusing. This experiment was performed on 11 people, 10 times each. Figure 13(a) shows the initial full-face image captured by the high-resolution PTZ camera; the dotted box indicates the 640 × 480-pixel region around the iris. When the user moved, the high-resolution camera still captured both irises, while the 640 × 480-pixel region sometimes lost the iris, as shown in Fig. 13(b). The movement of the users was measured by the pixel distance between the iris center in the initial frame and that in each following frame, shown as d̂ in Fig. 13(b). Figure 13(c) shows a histogram of d̂. The mean and standard deviation of d̂ were 122.86 and 93.21 pixels, respectively. Considering that the margin was 320 pixels in width from the center of the initial 640 × 480-pixel region, average movements of users caused partial occlusion of eye regions, which led to failed boundary detection of the eye and iris regions. Movements over 200 pixels occurred about 17% of the time, which meant that the iris pattern was lost from the FOV. However, the full-face images always contained the eye regions during the movements.

Fig. 12 (a) Depth of field and (b) the depth of focus of the proposed system with respect to distance.29

Fig. 13 User movements measured on the image when users were standing naturally: (a) initial frame acquired by the PTZ camera at high resolution, where the box indicates a region of 640 × 480 pixels; (b) an image from a 1-min image sequence in which the eye escaped the initial eye region, where d̂ is the distance between the iris center of the initial frame and that of the current frame; and (c) the histogram of d̂.
5.1.3 Accessibility for in-focus iris images

The depth of focus refers to the permissible error range of the focus lens position for obtaining iris images feasible for recognition. In the experiments, the depth of focus was evaluated in the sense of iris recognition rather than in the sense of optics, since even slightly defocused iris images could be identified correctly. The depth of focus is a measure of the accessibility characteristic, since systems with a large depth of focus do not require elaborate focusing algorithms. Furthermore, a large depth of focus brings fast iris image acquisition, because the optimal focus lens position can be found using large step sizes during the fine searching process. As shown in Fig. 12(b), the depth of focus of the proposed system varied within 500 to 2000 steps in the capture volume. This means that a focusing control error of at least 500 steps was acceptable. Since the error of the initial focus lens position was around 1000 steps (see the next section), either the iris images captured at the initial focus lens position were already usable, when the initial focus lens position fell within the depth of focus, or the optimal focus lens position could be found within the confined search range, when it fell outside the depth of focus.
5.2 Accuracy of PTZ Control Based on Light Stripe Projection

In the proposed PTZ control method, accurate estimation of the distance between the PTZ camera and the given user's face is necessary to determine the initial zoom and focus lens positions, which narrow the search range for the optimal focus lens position. However, direct accuracy evaluation of the estimated distance is difficult, because it is not easy to obtain precise ground-truth data for the distance between two points in 3-D space. Instead of measuring the distance estimation accuracy directly, the accuracy of the initial focus lens position was measured by observing the error between the optimal focus lens position and the initial focus lens position. If the initial focus lens position is near the optimal one, the PTZ control can be regarded as accurate.
5.2.1 Error of horizontal position estimation of the user

The initial focus lens position error was induced by the horizontal position estimation error and the face detection error. The error of horizontal position estimation using light stripe projection was due to limited image resolution, that is, quantization error. As shown in Fig. 14(a), a single pixel in the image plane is matched not with a single point in three dimensions but with a certain area. Therefore, the depth estimated by light stripe projection has uncertainty. Moreover, the image of the light stripe on a close object shows a lower level of depth ambiguity than that of the light stripe on objects further away. The dotted line in Fig. 14(b) represents the range of the quantization error of depth estimation, which has more ambiguity at a greater distance.
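The quantization bound can be written directly from the Z_wide term of Eq. (4): the depth spanned by one pixel of stripe-row uncertainty. A small sketch, with the half-pixel interval as an illustrative choice:

```python
# Sketch of the depth quantization bound discussed above. f is in pixel
# units; alpha and b are the calibrated plane parameters from Sec. 3.2.
import math

def depth_quantization_error(y: float, f: float, alpha: float, b: float) -> float:
    """Depth range spanned while the stripe stays within pixel row y."""
    def z(row: float) -> float:
        t = math.tan(alpha)
        return f * b * t / (f - row * t)
    return abs(z(y + 0.5) - z(y - 0.5))  # grows with distance, as in Fig. 14(b)
```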

To evaluate the depth estimation accuracy, we compared the horizontal distance estimated by light stripe projection with the distance measured by a laser distance-meter. A plane board was used for an accurate experiment, and its light stripe images were collected while changing the distance from the wide-angle camera to the plane within the capture volume. In Fig. 14(b), the measured errors are shown as a solid line. In the capture volume, depth estimation by light stripe projection was mostly successful within the quantization error bound, and the error was bounded within ±2 cm. However, depth estimation errors for objects at around 1.5 m exceeded the quantization error bounds. This arose from detection error of the center point of a light stripe: the light stripes at a close distance were relatively thick, so the center point changed with variations of 1 pixel at a time. Also, the error curve formed a zigzag shape, since the plane was located at random within the scene depth coverage of a pixel.

Fig. 14 Quantization error of depth estimation in light stripe projection: (a) uncertainty of depth estimation due to limited pixel resolution, and (b) measured errors at 43 different distances (solid line) and the quantization error bound according to distance (dotted line).

5.2.2 Face detection error

Face detection errors also affected the accuracy of the initial focus lens position. We used the well-known face detector provided by OpenCV.30 Since the face detector was trained under visible illumination, we needed to show the performance consistency of the face detector under NIR illumination.


We collected 540 images from 35 users that included faces of various sizes and locations under NIR illumination by specifying the user's position, as shown in Fig. 15(a). A set of images captured at each position is shown in Fig. 15(b). Note that this experiment was designed to estimate the performance of the face detector under the given illumination condition and over various positions of various users, so the user positions in Fig. 15(b) do not represent the capture volume. The size of the high-resolution face image was reduced by 1/100 because the speed of the face detector dropped severely when the image size was too large. Resizing the image also has another advantage: it eliminates false positives that occurred because of unnecessary details. The face detection rate was 98.7% on the database.29
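A present-day equivalent of this procedure, using OpenCV's Python API rather than the 2008-era interface the authors would have used, is sketched below. It assumes the 1/100 size reduction is by area, i.e., a linear scale of 0.1, and uses OpenCV's stock frontal-face cascade.

```python
import cv2

def detect_faces_downscaled(image_bgr, scale=0.1):
    """Run the Viola-Jones cascade on a downscaled copy (faster and
    less prone to false positives from fine detail), then map the
    detections back to the original high-resolution coordinates."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    small = cv2.resize(image_bgr, None, fx=scale, fy=scale)
    gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
    return [(int(x / scale), int(y / scale), int(w / scale), int(h / scale))
            for (x, y, w, h) in faces]
```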

5.2.3 Initial focus lens position error

Finally, the 3-D face coordinate estimation error results in an error of the initial focus lens position. The error of the initial focus lens position was measured 100 times at each user position, and the measurement was repeated while changing the user's position from the PTZ camera within the capture volume. This experiment was done with a mannequin, which avoided error due to the user's movement. Figure 16 shows the error bar of the initial focus lens position estimated for 100 trials at each distance. In most cases, the mean error of the initial focus lens position was lower than 1000 steps.

Fig. 15 Face detection under the NIR illumination: (a) 10 different user positions in the widest FOV of the PTZ camera, and (b) examples of captured face images at each position with detection results.29


However, the initial focus lens position errors at around 1.5 m were large. This comes from the large depth estimation errors shown in Fig. 14(b); that is, the error of horizontal position estimation propagated to the determination of the initial focus lens position. Nevertheless, most initial focus lens positions could be brought to the optimal focus lens position during the focus refinement phase.


Fig. 16 Error bar of the initial focus lens position with respect to distance. The mean errors at most distances were less than 1000 steps, which could mostly be compensated quickly by focus refinement, except for the errors at around 1.5 m. For each distance, the variance of the errors was negligible.



Figures 17(b)-17(e) present iris images that were captured by the proposed system at a distance. Compared to the enrolled iris images acquired by a laboratory-developed iris acquisition camera (Fig. 17(a)), they were of high quality in terms of recognizability; the Hamming distances between the enrolled image and the captured images were 0.201 for Fig. 17(b), 0.189 for Fig. 17(c), 0.240 for Fig. 17(d), and 0.240 for Fig. 17(e), all of which were lower than the given threshold, 0.32. Figure 18 presents several examples of both iris images of various users captured by the proposed system at a distance during a real demonstration.
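The Hamming distances quoted above are fractional disagreement rates between binary iris codes. A minimal Daugman-style comparison, with occlusion masks and without the rotation compensation a full matcher would add, looks like this; the array names are illustrative.

```python
import numpy as np

def iris_hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance over bits valid in both masks.
    Scores below the threshold (0.32 here) count as a match."""
    valid = mask_a & mask_b                  # bits usable in both codes
    n = int(valid.sum())
    if n == 0:
        return 1.0                           # nothing to compare
    disagreements = int(((code_a ^ code_b) & valid).sum())
    return disagreements / n

# Example with random boolean codes (real codes come from Gabor phase):
rng = np.random.default_rng(0)
a, b = rng.random(2048) < 0.5, rng.random(2048) < 0.5
m = np.ones(2048, dtype=bool)
print(iris_hamming_distance(a, b, m, m))     # ~0.5 for unrelated codes
```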

5.3 Time Required for Iris Image Acquisition

In this section, we present the time required for the entire acquisition process. We measured the time required by nine participants in a real situation: one experienced user, two relatively less experienced users, and six inexperienced users. The participants were instructed to stand at any position within the capture volume and stare at the PTZ camera during the image acquisition process. The time for tilting, initial zooming and focusing, and focus refinement was measured separately right after the user stopped.

The time for panning was not recorded because panning was done continuously while the user moved. Table 1 shows the average time for each phase. The frame rate of the PTZ camera was 8 frames/s. The average time required to obtain in-focus iris images using the proposed system was 2.479 s (with an Intel Core™2 CPU, 2.4 GHz), which is comparable to conventional iris recognition systems.

Table 1 Time required for each stage and average time to obtain feasible iris images (unit: seconds).

              Tilt    Initial Zooming and Focusing    Focus Refinement    Total
Average time  0.857   0.438                           1.183               2.479

The time for tilting depends on the user's height; this process includes the time for tilting the PTZ camera and detecting the face.

However, since only a few of the images captured during stepwise tilting were used, the time variations due to height variation were not critical. The time required for initial zooming and focusing was fairly constant because the lens positions were directly determined from the 3-D face coordinates. There were slight variations according to distance; the zoom lens rotated more and the focus lens rotated less when a user was farther away from the PTZ camera. But the time variations were not significant. For the time required for focus refinement, the proximity of the initial focus lens position to the optimal focus lens position was the critical factor.

Based on the finding that the initial focus lens position is usually located within fewer than 1000 steps of the optimal focus lens position, we set the single step size for the fine searching phase, δ, to 50 steps, and the consequent step size for the coarse searching phase, 4δ, to 200 steps. This means that the coarse searching phase took less than five frames in most cases. Moreover, the fine searching phase used a firmly bounded number of frames within a confined range. The experimental results

Fig. 17 Iris images captured at a distance: (a) enrolled image captured by a conventional iris recognition system, and iris images of the same person captured using the proposed system at (b) 1.5, (c) 2.0, (d) 2.5, and (e) 3.0 m (Ref. 29).


show that eight to nine frames were taken to obtain in-focus iris images during the focus refinement stage.
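One plausible reading of this two-stage search is the hill-climbing sketch below: stride 4δ = 200 steps from the initial position until the focus measure peaks, then repeat with stride δ = 50 around the coarse winner. The focus_measure_at hook (capturing a frame and scoring its sharpness) is assumed, and the exact stopping policy of the authors' implementation may differ.

```python
def refine_focus(focus_measure_at, start, coarse=200, fine=50):
    """Coarse-to-fine focus refinement around the initial lens position.
    focus_measure_at(step) -> sharpness score of a frame captured at
    that focus-lens step (hardware hook, assumed)."""
    def hill_climb(pos, stride):
        up = focus_measure_at(pos + stride)
        down = focus_measure_at(pos - stride)
        direction = stride if up > down else -stride
        best, best_score = pos, focus_measure_at(pos)
        while True:
            candidate = best + direction
            score = focus_measure_at(candidate)
            if score <= best_score:          # sharpness stopped improving
                return best
            best, best_score = candidate, score

    coarse_peak = hill_climb(start, coarse)  # 4*delta stride
    return hill_climb(coarse_peak, fine)     # delta stride
```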

6 Conclusions

A novel iris image capturing system was proposed to improve the acceptability and accessibility of iris recognition

Fig. 18 Examples of left and right iris images captured by the proposed system. The distances are the values estimated by light stripe projection and tilt angle estimation; the examples show the quality of both iris images captured at various distances.


systems. Acceptability is achieved in terms of user position, movement, and time required. A large capture volume of 120 deg (width) × 1 m (height) × 1.5 m (depth) enables users to pay less attention to positioning at a distance and makes the proposed system applicable to users of various heights. A high-resolution PTZ camera and a sufficient depth of field result in the advantage that users can stand naturally while the iris images are captured. Both iris images are successfully cropped from full-face images captured by the high-resolution camera even when the user moves slightly to the left and right. The depth of field of the proposed system shows that it is tolerant to back-and-forth movements. It takes an average of 2.5 s to capture the in-focus iris images.

Accessibility is achieved by estimating the face coordinates based on real-time detection of the user's horizontal position using light stripe projection and by holding enough depth of focus. The horizontal position of the user determines the pan angle exactly and helps the face be detected on a 1-D vertical line. Then the estimated distance between the PTZ camera and the face determines the initial zoom and focus lens positions with high accuracy. Since the face coordinate information reduces most parts of the PTZ control from searching and optimization problems to deterministic ones, PTZ control is performed quickly. In addition, the accuracy of the initial focus lens position contributes to fast focus refinement, and a sufficient depth of focus eliminates the need for an elaborate focus refinement algorithm. The proposed system has the following three contributions compared with previous works: (1) the capture volume is largely increased by using a PTZ camera guided by light stripe projection, (2) the PTZ camera can track a user's face easily in the large capture volume based on 1-D vertical face searching from the user's horizontal position obtained by the light stripe projection, and (3) zooming and focusing on the user's irises at a distance are accurate and fast using the estimated 3-D position of a face by the light stripe projection and the PTZ camera.

For further research, efficient illumination control is required. Because the combination of a high-resolution camera and a lens with a large f-number reduced the total incident energy of light, we used bulky illuminators, which emitted NIR light continuously. Instead, low-power synchronized flash illuminators can be one of the solutions. In the future, the proposed system will be applied to moving users by solving the degradation of iris image quality that can occur with motion blurring.

Acknowledgments

This work was supported by the Korea Science and Engineering Foundation (KOSEF) through the Biometrics Engineering Research Center (BERC) at Yonsei University.



References

1. J. Wayman, A. Jain, D. Maltoni, and D. Maio, Eds., Biometric Systems: Technology, Design and Performance Evaluation, Springer, London (2005).

2. R. P. Wildes, "Iris recognition: an emerging biometric technology," Proc. IEEE 85(9), 1348-1363 (1997).

3. J. Daugman, "Statistical richness of visual phase information: update on recognizing persons by iris patterns," Int. J. Comput. Vis. 45(1), 25-38 (2001).

4. J. Daugman and C. Downing, "Epigenetic randomness, complexity and singularity of human iris patterns," Proc. R. Soc. London, Ser. B 268(1477), 1737-1740 (2001).

5. J. L. Wayman, "Fundamentals of biometric authentication technologies," Int. J. Image Graph. 1(1), 93-113 (2001).

6. Atos Origin, "UK passport service biometrics enrolment trial: report" (2005); available at http://www.ips.gov.uk/passport/downloads/UKPSBiometrics-Enrolment-Trial-Report.pdf (accessed Dec. 31, 2008).

7. IrisAccess™ 3000, LG; available at http://www.lgiris.com/ps/products/previousmodels.htm (accessed Dec. 31, 2008).

8. BM-ET300, Panasonic; available at http://catalog2.panasonic.com/webapp/wcs/stores/servlet/ModelDetail?displayTabO&storeId11201&catalogId13051&itemId67115&catGroupId16817&surfModelBM-ET300 (accessed Dec. 31, 2008).

9. J. R. Matey, O. Naroditsky, K. Hanna, R. Kolczynski, D. J. Loiacono, S. Mangru, M. Tinker, T. M. Zappia, and W. Y. Zhao, "Iris on the move: acquisition of images for iris recognition in less constrained environments," Proc. IEEE 94(11), 1936-1947 (2006).

10. IrisPass-M, OKI; available at http://www.oki.com/en/iris/ (accessed Dec. 31, 2008).

11. U. M. Cahn von Seelen, T. Camus, P. L. Venetianer, G. G. Zhang, M. Salganicoff, and M. Negin, "Active vision as an enabling technology for user-friendly iris identification," in Proc. 2nd IEEE Workshop on Automatic Identification Advanced Technologies, pp. 169-172 (1999).

12. G. Guo, M. J. Jones, and P. Beardsley, "A system for automatic iris capturing," Mitsubishi Electric Research Laboratories, TR2005-044 (2005); available at http://www.merl.com/publications/TR2005-044/ (accessed Dec. 31, 2008).

13. F. Bashir, P. Casaverde, D. Usher, and M. Friedman, "Eagle-Eyes™: a system for iris recognition at a distance," in Proc. IEEE Conf. on Technologies for Homeland Security, pp. 426-431 (2008).

14. IOM Drive-Through System, Sarnoff Corporation; available at http://www.sarnoff.com/products/iris-on-the-move (accessed Dec. 31, 2008).

15. AOptix; available at http://www.aoptix.com/biometrics.html (accessed Dec. 31, 2008).

16. ANSI INCITS 379-2004: Iris Image Interchange Format.
17. AF Zoom-Nikkor 70-300 mm f/4-5.6D ED (4.3×), Nikon; available at http://nikonimaging.com/global/products/lens/af/zoom/af_zoom70-300mmf_4-56d/index.htm (accessed Dec. 31, 2008).

18. SVS4020, SVS-VISTEK; available at http://www.svsvistek.com/camera/svcam/SVCAM%20GigEVision/svs_gige_line.php (accessed Dec. 31, 2008).

19. J. Daugman, "Probing the uniqueness and randomness of IrisCodes: results from 200 billion iris pair comparisons," Proc. IEEE 94(11), 1927-1935 (2006).

20. E. Hecht, Optics, 4th ed., Addison Wesley, San Francisco, CA (2002).
21. R. Klette, K. Schluns, and A. Koschan, Computer Vision: Three-Dimensional Data from Images, Springer (1998).
22. H. G. Jung, Y. H. Lee, P. J. Yoon, and J. Kim, "Radial distortion refinement by inverse mapping-based extrapolation," in Proc. IAPR Int. Conf. on Pattern Recognition, pp. 675-678 (2006).

23. H. G. Jung, P. J. Yoon, and J. Kim, "Light stripe projection based parking space detection for intelligent parking assist system," in Proc. IEEE Intelligent Vehicles Symp., pp. 962-968 (2007).

24. S. Yoon, H. G. Jung, J. K. Suhr, and J. Kim, "Non-intrusive iris image capturing system using light stripe projection and pan-tilt-zoom camera," in Proc. IEEE Computer Society Workshop on Biometrics (in association with CVPR 2007), pp. 1-7 (2007).

25. P. Viola and M. J. Jones, "Robust real-time face detection," Int. J. Comput. Vis. 57(2), 137-154 (2004).



26. J. Daugman, "How iris recognition works," IEEE Trans. Circuits Syst. Video Technol. 14(1), 21-30 (2004).
27. A. Papoulis and S. U. Pillai, Probability, Random Variables, and Stochastic Processes, 4th ed., McGraw-Hill, New York (2002).
28. M. Subbarao and J. Tyan, "Selecting the optimal focus measure for autofocusing and depth-from-focus," IEEE Trans. Pattern Anal. Mach. Intell. 20(8), 864-870 (1998).
29. S. Yoon, K. Bae, K. R. Park, and J. Kim, "Pan-tilt-zoom based iris image capturing system for unconstrained user environments at a distance," in Proc. 2nd Int. Conf. on Biometrics, Lecture Notes in Computer Science, Vol. 4642, pp. 653-663, Springer, Berlin (2007).
30. OpenCV; available at http://sourceforge.net/projects/opencvlibrary/ (accessed Dec. 31, 2008).

Soweon Yoon received her BS and MS degrees from the School of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea, in 2006 and 2008, respectively. She is currently a PhD student with the Department of Computer Science and Engineering, Michigan State University. Her research interests include pattern recognition, image processing, and computer vision for biometrics.

Ho Gi Jung received his BS, MS, and PhD degrees in electronic engineering from Yonsei University, Seoul, Korea, in 1995, 1997, and 2008, respectively. He has been with the Mando Corporation Global R&D H.Q. since 1997. He developed environment recognition algorithms for a lane departure warning system (LDWS) and adaptive cruise control (ACC) from 1997 to 2000. He developed an electronic control unit (ECU) and embedded software for an electrohydraulic braking (EHB) system from 2000 to 2004. Since 2004, he has developed environment recognition algorithms for an intelligent parking assist system (IPAS), collision warning and avoidance, and an active pedestrian protection system (APPS). His interests are automotive vision, embedded software development, driver assistant systems (DASs), and active safety vehicles (ASVs).


Kang Ryoung Park received his BS and MS degrees in electronic engineering from Yonsei University, Seoul, Korea, in 1994 and 1996, respectively, and his PhD degree in computer vision from the Department of Electrical and Computer Engineering, Yonsei University, in 2000. He was an assistant professor with the Division of Digital Media Technology, Sangmyung University, from March 2003 to February 2008, and since March 2008 he has been an assistant professor with the Department of Electronics Engineering, Dongguk University. He has also been a research member of the Biometrics Engineering Research Center (BERC). His research interests include computer vision, image processing, and biometrics.

Jaihie Kim received his BS degree in electronic engineering from Yonsei University, Seoul, Korea, in 1979, and his MS degree in data structures and his PhD degree in artificial intelligence from Case Western Reserve University, Cleveland, Ohio, in 1982 and 1984, respectively. Since 1984, he has been a professor with the School of Electrical and Electronic Engineering, Yonsei University. He currently directs the Biometric Engineering Research Center in Korea. His research areas include biometrics, computer vision, and pattern recognition. Prof. Kim currently chairs the Korean Biometric Association.
