Periocular Based Verification on Legacy Images

Abstract—

I. INTRODUCTION

A. Related Work

Periocular based recognition has gained attention over the past few years, as the periocular region offers information about the shape of the eye that can be used as a soft biometric in situations where the face region is occluded or where the iris cannot be reliably acquired [6], [3], [4].

B. Contribution

We list our contributions as follows:

1) Establishment of a baseline for periocular based verification on a longitudinal dataset. The MORPH Album1 [24] dataset is utilized for this purpose.

2) Analysis of the sensitivity of the periocular region to template aging, gender, and ethnicity.

II. HIERARCHICAL THREE-PATCH LBP

Given a pair of images, it is necessary to learn the similarities between the image pair for the purpose of verification. This corresponds to finding a matching ensemble of descriptors that are similar in their descriptive values at relative geometric positions. Patch-based techniques have been shown to be capable of learning the similarities between two face images [13], [14]. Patch-based computation of texture patterns encodes the similarities between neighboring patches of pixels, thus possibly capturing information complementary to that of pixel-based descriptors. Unlike pixel-based techniques, patch-based textures treat colored regions, edges, lines, and complex textures in a unified way. Hence we propose a Hierarchical 3-Patch LBP (H-3P-LBP) to compute the feature descriptors for each face image.

The hierarchical computation extracts both micro and macro structures, both of which are required for texture discrimination [26]. It can be argued that a multi-scale 3P-LBP can be computed by varying the radius and the number of sampling points. However, the stability of 3P-LBP decreases as the radius of the neighborhood increases, because the sampling points become only minimally correlated with the center pixel. Further, from a signal processing point of view, the sparse sampling adopted by LBP operators over a large neighborhood radius may not yield an adequate representation of the two-dimensional image signal, and aliasing effects become an obvious problem.

A. Hierarchical Three-Patch LBP

The H-3P-LBP is an extension of the 3P-LBP operator [15] in which 3P-LBP descriptors are computed at different scales of the image. The 3P-LBP of a pixel is computed by comparing the values of three patches to produce a single bit value in the code assigned to the pixel. The 3P-LBP of each pixel is computed by considering a w × w patch centered on the pixel and m sampling points on a ring of radius r pixels. Unlike LBP, the 3P-LBP approach considers m patches centered on the m neighboring sampling points, distributed uniformly around the center patch. The 3P-LBP code is computed by comparing the value of the center patch with pairs of patches that are α patches apart along the circle. The value of a single bit is set according to the similarity of the two patches with the center patch. The resulting code has m bits per pixel.

The 3P-LBP is given by the following equation:

3P-LBP_{r,m,w,α}(p) = Σ_{i=0}^{m−1} f( d(C_i, C_p) − d(C_{(i+α) mod m}, C_p) ) · 2^i    (1)

where C_i and C_{(i+α) mod m} are two patches along the ring and C_p is the central patch. The function d(·, ·) is any distance function between two patches (e.g., the L2 norm of their gray-level differences) and f is defined as:

f(x) = { 1 if x ≥ τ; 0 if x < τ }    (2)

where τ is set to a value slightly larger than zero in order to provide stability in uniform regions [14].
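To make the operator concrete, the following is a minimal NumPy sketch of Equations 1 and 2. The function name, the parameter defaults, and the squared-L2 patch distance are illustrative assumptions rather than the authors' exact implementation; the input is assumed to be a grayscale float image.

```python
import numpy as np

def three_patch_lbp(img, r=2, m=8, w=3, alpha=2, tau=0.01):
    """Sketch of the 3P-LBP operator (Eq. 1): for each pixel p, compare the
    w x w center patch C_p with pairs of ring patches (C_i, C_{(i+alpha) mod m})
    sampled at radius r, and pack the m comparison bits into one code."""
    h, w_img = img.shape
    half = w // 2
    codes = np.zeros((h, w_img), dtype=np.uint32)
    # Ring patch centers, distributed uniformly around each pixel.
    angles = 2 * np.pi * np.arange(m) / m
    offsets = np.stack([np.round(r * np.sin(angles)),
                        np.round(r * np.cos(angles))]).astype(int).T
    margin = r + half
    for y in range(margin, h - margin):
        for x in range(margin, w_img - margin):
            cp = img[y - half:y + half + 1, x - half:x + half + 1]
            # d(., .): squared L2 norm of gray-level differences (one choice).
            d = [np.sum((img[y + dy - half:y + dy + half + 1,
                             x + dx - half:x + dx + half + 1] - cp) ** 2)
                 for dy, dx in offsets]
            for i in range(m):
                # f(x) = 1 iff x >= tau (Eq. 2); tau > 0 stabilizes flat regions.
                if d[i] - d[(i + alpha) % m] >= tau:
                    codes[y, x] |= (1 << i)
    return codes
```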

For a given image I, a Gaussian pyramid with s levels is constructed to form the multi-scale representation of the image. The H-3P-LBP descriptor is computed by applying the 3P-LBP operator at each level of the image pyramid. The final H-3P-LBP descriptor H(I) is obtained by combining the 3P-LBP descriptors from each level into a final feature matrix. The hierarchical 3P-LBP thus maps the image I into an R^{d×s} representation, where d is the length of the 3P-LBP descriptor obtained at each of the s scales.
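A sketch of this pyramid construction, building on the three_patch_lbp sketch above. How the per-level code maps are pooled into fixed-length columns is not spelled out in the text, so the normalized per-level code histogram used here is our assumption; the image is assumed to stay large enough at every level.

```python
import numpy as np
import cv2

def h3p_lbp(img, s=3, m=8, **tplbp_kw):
    """Sketch of H(I): an s-level Gaussian pyramid, 3P-LBP codes at each
    level, and per-level normalized code histograms stacked into a d x s
    matrix (d = 2**m histogram bins; the pooling choice is an assumption)."""
    columns, cur = [], img.astype(np.float32)
    for _ in range(s):
        codes = three_patch_lbp(cur, m=m, **tplbp_kw)
        hist, _ = np.histogram(codes, bins=2**m, range=(0, 2**m))
        columns.append(hist / max(hist.sum(), 1))  # fixed-length column per scale
        cur = cv2.pyrDown(cur)                     # next (coarser) pyramid level
    return np.stack(columns, axis=1)               # H(I), shape (d, s)
```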

B. Kernel Between H-3P-LBPs

Given an image pair (I_i, I_j) and their corresponding hierarchical 3P-LBP descriptors H(I_i) and H(I_j), the final feature descriptor x describing the similarity between the images is given by

Fig. 1. Figures (a) and (b) show the computation of the 3-patch LBP: (a) the center patch C_p and the m = 8 ring patches C_0, ..., C_7, each of size w × w; (b) the code computation for α = 2, i.e., 3P-LBP_{r,8,w,2}(p) = f(d(C_0, C_p) − d(C_2, C_p))·2^0 + f(d(C_1, C_p) − d(C_3, C_p))·2^1 + ... + f(d(C_7, C_p) − d(C_1, C_p))·2^7.

Fig. 2. Extraction of the periocular region from the face image (image alignment followed by cropping).

x = S(I_i, I_j)    (3)

where S is defined via the elementwise inner product (denoted by ⊙) between the H-3P-LBP descriptors of the images I_i and I_j, and is given by

x = S(I_i, I_j) = (H(I_i) ⊙ H(I_j)) [1, ..., 1]^T_{s×1}    (4)
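In other words, the two d × s descriptors are multiplied elementwise and the s scale columns are summed by the all-ones vector, yielding a d-dimensional pair feature. A one-line sketch under that reading:

```python
import numpy as np

def similarity_feature(Hi, Hj):
    """Sketch of Eq. 4: elementwise product of two d x s descriptors,
    with the s scale columns summed via the all-ones vector."""
    return (Hi * Hj) @ np.ones(Hi.shape[1])  # d-dimensional pair feature x
```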

III. CLASSIFICATION FRAMEWORK

As in [17], [18], [19], [16], we model the verification task as a two-class classification problem. Periocular based verification is a multi-class problem which can be converted to a two-class problem by classifying image pairs as intra-personal or extra-personal. Given two images I_i and I_j, the task is reduced to classifying this image pair as either intra-personal or extra-personal. A feature vector x ∈ R^d is obtained by mapping the image pair into a feature space, and is given by Equation 4.

AdaBoost is used to classify the feature vectors as belonging to intra-personal or extra-personal pairs. AdaBoost, introduced by Freund and Schapire [25], is a strong tool for solving two-class classification problems. The AdaBoost classifier learns a strong classifier by selecting a weak classifier from the training data at each iteration. The final strong classifier is given by

H(x) = sign( Σ_t α_t h_t(x) ),    (5)

where H(x) is the strong classifier, α_t ∈ R, and α_t = (1/2) ln((1 − ε_t)/ε_t), where ε_t is the weighted error rate of weak classifier h_t. The final strong classifier represents the boundary in the feature space that separates the intra-personal and extra-personal pairs. In our experiments, we use the GML AdaBoost library [20].
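For illustration, the following sketch trains an equivalent boosted-stump classifier with scikit-learn in place of the GML AdaBoost library; the pair features and labels are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 256))    # one Eq.-4 pair feature per row (synthetic)
y = rng.integers(0, 2, size=400)   # 1 = intra-personal, 0 = extra-personal
clf = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X, y)
# decision_function(x) is sum_t alpha_t * h_t(x); its sign is H(x) in Eq. 5.
print(clf.decision_function(X[:5]))
```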

Fig. 3. Sample images from the MORPH Album1 database showing variations in pose, illumination, image artifacts, etc.

IV. EXPERIMENTS

A. Database

Album1 of the MORPH database [24] is used in our experiments. MORPH Album1 consists of 1,690 scanned photographs of 515 individuals taken over an interval of time. The ages in the images range from 15 to 68 years, with the age gap between the first image and the last image (w.r.t. age) of a subject ranging from 46 days to 29 years. In general, MORPH Album1 does not include images of children. Further, the dataset includes nearly six times the number of subjects of the FG-NET aging database [23]. The face images of Album1 are frontal or near-frontal, which makes the dataset suitable for our experiments in analyzing the effects of factors such as age, gender, and ethnicity on periocular verification. The database is also publicly available, which allows for further evaluations on it. Figure 3 shows some sample images from the MORPH Album1 database.

B. Preprocessing

We follow a procedure similar to [21] for aligning the images based on their eye coordinates. The procedure involves scaling, rotating, and cropping the face region to a specified size. The face image is aligned such that the eye centers are horizontal and placed at standard pixel locations. The left and right periocular regions are then extracted using the eye coordinates and are resized to 128 × 256 pixels. The extracted periocular region includes the eye, the region around the eye, and the eyebrow. Image noise is suppressed using a Wiener filter. Note that we do not perform illumination normalization using histogram equalization, both because it changes the texture of the face image and because LBP is invariant to monotonic illumination changes. Figure 2 outlines the extraction of the periocular region from the face image.
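A rough sketch of this alignment-and-cropping step, under our reading of [21]: the crop geometry, the helper name, and the assumption that eye centers are provided as (x, y) coordinates away from the image border are all illustrative.

```python
import numpy as np
import cv2

def extract_periocular(img, left_eye, right_eye, out_size=(256, 128)):
    """Sketch of the alignment step: rotate about the midpoint between the
    eyes so the eye centers become horizontal, then crop a region around
    each eye and resize it to 128 x 256 pixels (out_size is cv2's
    (width, height)). Scaling to canonical eye locations is omitted."""
    (x1, y1), (x2, y2) = left_eye, right_eye
    angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))  # eye-line angle to remove
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    M = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
    aligned = cv2.warpAffine(img, M, (img.shape[1], img.shape[0]))
    d = np.hypot(x2 - x1, y2 - y1)  # eyes land at cx -/+ d/2 on row cy

    def crop(ex, ey, cw=160, ch=80):  # crop window around one eye (illustrative)
        y0, x0 = int(ey - ch / 2), int(ex - cw / 2)
        return cv2.resize(aligned[y0:y0 + ch, x0:x0 + cw], out_size)

    return crop(cx - d / 2, cy), crop(cx + d / 2, cy)
```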

C. Experimental Evaluation

We perform 5-fold cross-validation experiments to study the effects of aging, gender, and ethnicity. The folds are mutually exclusive with respect to subjects; in other words, no image of a subject exists in both the training and testing sets. This ensures that the testing set contains only images that have not been used in training the classifier. Also, each fold consists of a nearly equal number of intra-personal and extra-personal pairs to avoid biasing the classifier. Performance is measured in terms of TAR-TRR curves (a.k.a. ROC curves) and the Equal Error Rate (EER), which corresponds to the operating point where TAR equals TRR. The TAR and TRR are defined as follows:

Fig. 4. ROC curves (TAR vs. TRR) showing the baseline performance of HOG, LBP, 3P-LBP, and H-3P-LBP, with and without alignment, for the left periocular region.

Fig. 5. ROC curves (TAR vs. TRR) showing the baseline performance of HOG, LBP, 3P-LBP, and H-3P-LBP, with and without alignment, for the right periocular region.

TAR = #true positive samples / #total positive samples    (6)

TRR = #true negative samples / #total negative samples    (7)
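A small sketch of how these quantities and the EER can be computed from classifier scores; the threshold sweep and the EER-at-crossing convention are our assumptions, not a procedure spelled out in the text.

```python
import numpy as np

def tar_trr_eer(pos_scores, neg_scores):
    """Sketch of Eqs. 6-7 and the EER: sweep a decision threshold over the
    pooled scores; TAR is the fraction of intra-personal (positive) pairs
    accepted, TRR the fraction of extra-personal (negative) pairs rejected.
    The EER is read off where the TAR and TRR curves cross."""
    thresholds = np.sort(np.concatenate([pos_scores, neg_scores]))
    tar = np.array([(pos_scores >= t).mean() for t in thresholds])  # Eq. 6
    trr = np.array([(neg_scores < t).mean() for t in thresholds])   # Eq. 7
    i = np.argmin(np.abs(tar - trr))       # operating point where TAR ~= TRR
    return tar, trr, 1.0 - tar[i]          # EER = 1 - TAR at the crossing
```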

In order to illustrate the effectiveness of the proposed H-3P-LBP feature descriptor, we compare it with three well-known descriptors: Histogram of Oriented Gradients (HOG), uniform Local Binary Patterns (LBP), and 3-Patch LBP (3P-LBP). Each feature descriptor is individually combined with the AdaBoost classification framework to perform periocular verification.

D. Baseline Periocular Verification

To evaluate the baseline performance of the proposed approach, a total of 3,226 intra-personal pairs and an equal number of randomly selected extra-personal pairs were generated from the entire MORPH Album1 dataset. The approaches were individually tested on both the left and right periocular regions. Figures 4 and 5 show the ROC curves for the left and right periocular regions (with and without alignment) for the different approaches. Table I shows the average EER obtained for all the descriptors.

It is to be noted that the patch-based computation of LBP provides a better description of the texture patterns than the traditional LBP. The hierarchical computation allows for a further improvement in performance, as can be observed from the EERs in Table I.

TABLE I
AVG. EER FROM THE HOG, LBP, 3P-LBP, AND H-3P-LBP APPROACHES, OBTAINED BY 5-FOLD CROSS VALIDATION ON MORPH ALBUM1.

             Without Alignment     With Alignment
             Left      Right       Left      Right
HOG          0.2679    0.2676      0.2669    0.2539
LBP          0.2892    0.2977      0.2740    0.2616
3P-LBP       0.2527    0.2313      0.2071    0.1878
H-3P-LBP     0.2345    0.2283      0.2000    0.1609

However, it can also be observed that 3P-LBP and H-3P-LBP are more sensitive to image alignment than uniform LBP, which, unlike 3P-LBP and H-3P-LBP, is rotation invariant. Further, we do not mask the eyeball region during the preprocessing stage. The eyeball is a non-rigid texture that moves during image capture, which affects the verification process.

E. Effects of Aging

In order to illustrate the effects of aging on verification performance and the sensitivity of the periocular region to aging, experiments were conducted on subsets of image pairs grouped by 1) age gap (0-2 years, 3-5 years, 6-8 years, 9-11 years, >11 years), and 2) age group (0-18 years, 18-29 years, 30-39 years, 40-49 years, ≥50 years). The classifier is trained on image pairs from each subset individually and tested on image pairs from the remaining subsets. Figure 6 shows the average EERs obtained for image pairs from each age gap, and Figure 7 shows the average EERs obtained for image pairs from each age group.

From Figure 6, it is to be noted that the error rate increases as the age gap between the images increases. This indicates a reduction in the similarity between the periocular regions of the same subject at different ages, and is evident from the performance of all the feature descriptors on age gaps greater than 2 years compared with their performance on age gaps of at most 2 years. Further, it can be seen that H-3P-LBP and 3P-LBP provide nearly equal performance as the age gap increases, suggesting that the hierarchical computation of 3P-LBP loses its benefit in this setting. This is again attributed to the fewer similar features available between the image pairs.

There are a couple of observations from Figure 7, which shows the performance of the classifier trained with the various descriptors on each age group individually. First, the best performance is achieved when testing on image pairs from the 0-18 years age group, compared with the other age groups. One explanation is that there are minimal texture changes in the periocular region for this age group. Second, the error rate is much higher when testing on the 18-29 years age group, which possibly indicates the major texture changes that occur in the periocular region at these ages. A similar performance is observed for the ≥50 years age group; periocular anatomy indicates that the skin in the periocular region starts to thin by losing fat, which causes sagging and hence a major texture change.

Fig. 6. Figures (a)-(e) show the avg. EER (H-3P-LBP, 3P-LBP, and HOG) from training on the various age gaps (x-axis: training set age gap) and testing on age gaps [0-2], [3-5], [6-8], [9-11], and >11 years, respectively.

Fig. 7. Figures (a)-(e) show the avg. EER (H-3P-LBP, 3P-LBP, and HOG) from training on the various age groups (x-axis: age group of training set) and testing on age groups 0-18, 18-29, 30-39, 40-49, and ≥50 years, respectively.

Fig. 8. Figures (a) and (b) show the avg. EER (H-3P-LBP, 3P-LBP, and HOG) obtained from testing on female and male image pairs, respectively. Training is performed using female, male, and combined image pairs individually (x-axis: training set gender).


F. Effects of Gender and Ethnicity

The periocular region, as a soft biometric, carries reliable information about gender and ethnicity [7]. Periocular features such as the eyebrows, eyelashes, etc. can be used to discriminate between males and females, since females are more likely to wear makeup than males. Similarly, the shape of the eye region can vary across ethnicities (e.g., Chinese, Japanese, etc.). However, the effect of these factors on periocular verification performance has not been analyzed, and such an analysis can provide useful insight into the construction of the training set. The dataset is divided into subsets based on gender (male and female) and ethnicity (White and African-American) individually. Each subset is used to train the classifier individually, and testing is performed on the remaining subsets. For the case of training and testing on the same subset, 5-fold cross-validation is performed as explained in Section IV-C. In addition, equal numbers of image pairs from the subsets corresponding to the gender-based and ethnicity-based experiments were combined and used as a training set, while the remaining image pairs were used for testing. Figures 8 and 9 show the average EER obtained using the various descriptors for gender-based and ethnicity-based periocular verification, respectively.

Fig. 9. Figures (a) and (b) show the avg. EER (H-3P-LBP, 3P-LBP, and HOG) obtained from testing on African-American and White image pairs, respectively. Training is performed using African-American, White, and combined image pairs individually (x-axis: training set ethnicity).

From Figure 8, it can be seen that for H-3P-LBP and 3P-LBP, the verification of female image pairs is less error prone when the individually trained models (female and male) are used (see Figure 8(a)). However, it is to be noted that verification of male image pairs yields better performance when the classifier is trained on both female and male image pairs. This is probably due to a more accurate learning of the periocular features attributed to females from the female image pairs in the training set.

In the case of the ethnicity-based experiments (see Figure 9), all the descriptors perform nearly equally when trained and tested on image pairs from the same ethnicity. However, the performance of the classifier varies across descriptors when trained and tested on different ethnicities. This indicates the presence of periocular features that are unique to an ethnic group. It is also to be noted that the EERs of the classifier trained on image pairs from both ethnic groups are nearly equal to those of the classifiers trained on the individual ethnic groups. This is because the classifier is trained with features unique to each ethnic group, providing a generalized model that is capable of classifying image pairs from both ethnic groups.


V. CONCLUSION

REFERENCES

[1] B. Heisele, P. Ho, J. Wu, and T. Poggio, "Face recognition: component-based versus global approaches", Computer Vision and Image Understanding, vol. 91, no. 1-2, pp. 6-21, 2003.

[2] P. Miller, A. Rawls, S. Pundlik, and D. Woodard, "Personal identification using periocular skin texture", in Proc. ACM Symposium on Applied Computing, 2010.

[3] D. Woodard, S. Pundlik, J. Lyle, and P. Miller, "Periocular region appearance cues for biometric identification", in Proc. CVPR Workshop on Biometrics, 2010.

[4] D. Woodard, S. Pundlik, P. Miller, R. Jillela, and A. Ross, "On the fusion of periocular and iris biometrics in non-ideal imagery", in Proc. IAPR Int. Conf. on Pattern Recognition, 2010.

[5] P. E. Miller, J. R. Lyle, S. J. Pundlik, and D. L. Woodard, "Performance evaluation of local appearance based periocular recognition", in Proc. 4th IEEE Int. Conf. on Biometrics: Theory, Applications, and Systems, 2010.

[6] U. Park, A. Ross, and A. K. Jain, "Periocular biometrics in the visible spectrum: A feasibility study", in Proc. IEEE Int. Conf. on Biometrics: Theory, Applications, and Systems, 2009.

[7] J. R. Lyle, P. E. Miller, S. J. Pundlik, and D. L. Woodard, "Soft biometric classification using local appearance periocular region features", Pattern Recognition, vol. 45, no. 11, pp. 3877-3885, 2012.

[8] C. N. Padole and H. Proenca, "Periocular recognition: Analysis of performance degradation factors", in Proc. 5th IAPR Int. Conf. on Biometrics (ICB), pp. 439-445, 2012.

[9] Y. Dong and D. L. Woodard, "Eyebrow shape-based features for biometric recognition and gender classification: A feasibility study", in Proc. IEEE Int. Joint Conf. on Biometrics (IJCB), pp. 1-8, 2011.

[10] K. Hollingsworth, K. W. Bowyer, and P. J. Flynn, "Useful features for human verification in near-infrared periocular images", Image and Vision Computing, vol. 29, no. 11, pp. 707-715, 2011.

[11] S. Bharadwaj, H. S. Bhatt, M. Vatsa, and R. Singh, "Periocular biometrics: When iris recognition fails", in Proc. IEEE Int. Conf. on Biometrics: Theory, Applications, and Systems, pp. 1-6, 2010.

[12] F. Juefei-Xu, K. Luu, M. Savvides, T. D. Bui, and C. Y. Suen, "Age invariant face recognition based on periocular biometrics", in Proc. Int. Joint Conf. on Biometrics (IJCB), 2011.

[13] E. Nowak and F. Jurie, "Learning visual similarity measures for comparing never seen objects", in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, 2007.

[14] E. Shechtman and M. Irani, "Matching local self-similarities across images and videos", in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pp. 1-8, 2007.

[15] L. Wolf, T. Hassner, and Y. Taigman, "Descriptor based methods in the wild", in Faces in Real-Life Images Workshop at ECCV, 2008.

[16] H. Ling, S. Soatto, N. Ramanathan, and D. Jacobs, "Face verification across age progression using discriminative methods", IEEE Transactions on Information Forensics and Security, vol. 5, no. 1, pp. 82-91, 2010.

[17] B. Moghaddam, W. Wahid, and A. Pentland, "Beyond eigenfaces: Probabilistic matching for face recognition", in Proc. IEEE Int. Conf. on Automatic Face and Gesture Recognition, pp. 30-35, 1998.

[18] K. Jonsson, J. Kittler, Y. Li, and J. Matas, "Support vector machines for face authentication", Image and Vision Computing, vol. 20, no. 5-6, pp. 369-375, 2002.

[19] P. J. Phillips, "Support vector machines applied to face recognition", in Advances in Neural Information Processing Systems, pp. 803-809, 1999.

[20] Y. Freund and R. E. Schapire, "Game theory, on-line prediction and boosting", in Proc. 9th Annual Conference on Computational Learning Theory, pp. 324-332, 1996.

[21] P. J. Phillips, J. R. Beveridge, B. Draper, G. Givens, A. O'Toole, D. Bolme, J. Dunlop, Y. M. Lui, H. Sahibzada, and S. Weimer, "An introduction to the good, the bad, and the ugly face recognition challenge problem", Image and Vision Computing, vol. 30, no. 3, pp. 206-216, 2012.

[22] T. Ojala, M. Pietikainen, and T. Maenpaa, "Multiresolution gray-scale and rotation invariant texture classification with local binary patterns", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 971-987, 2002.

[23] The FG-NET Aging Database, http://www.fgnet.rsunit.com/, http://www-prima.inrialpes.fr/FGnet/, 2010.

[24] K. Ricanek Jr. and T. Tesafaye, "MORPH: A longitudinal image database of normal adult age-progression", in Proc. 7th IEEE Int. Conf. on Automatic Face and Gesture Recognition, pp. 341-345, 2006.

[25] Y. Freund and R. E. Schapire, "A short introduction to boosting", Journal of Japanese Society for Artificial Intelligence, vol. 14, no. 5, pp. 771-780, 1999.

[26] T. Maenpaa and M. Pietikainen, "Multi-scale binary patterns for texture analysis", in Proc. 13th Scandinavian Conference on Image Analysis, pp. 885-892, 2003.