
Integrated Multilevel Image Fusion and Match Score Fusion of Visible and Infrared Face Images for Robust Face Recognition

Richa Singh, Mayank Vatsa, and Afzel Noore

Lane Department of Computer Science and Electrical Engineering, West Virginia University, USA

Abstract

This paper presents an integrated image fusion and match score fusion of multispectral face images. The fusion of visible and long wave infrared face images is performed using 2ν-Granular SVM, which uses multiple SVMs to learn both the local and global properties of the multispectral face images at different granularity levels and resolutions. The 2ν-GSVM performs accurate classification, which is subsequently used to dynamically compute the weights of the visible and infrared images for generating a fused face image. 2D log polar Gabor transform and local binary pattern feature extraction algorithms are applied to the fused face image to extract global and local facial features respectively. The corresponding match scores are fused using the Dezert Smarandache theory of fusion, which is based on plausible and paradoxical reasoning. The efficacy of the proposed algorithm is validated using the Notre Dame and Equinox databases and is compared with existing statistical, learning, and evidence theory based fusion algorithms.

Key words: Face recognition, Image fusion, Match score fusion, Granular computing, Support Vector Machine, Dezert Smarandache Theory

1 Introduction

Current face recognition systems capture faces of cooperative individuals in a controlled environment as part of the face recognition process. It is therefore

Email address: {richas, mayankv, noore}@csee.wvu.edu (Richa Singh, Mayank Vatsa, and Afzel Noore).

Preprint submitted to Elsevier Science 6 July 2007


possible to control the lighting, pose, background, and quality of images. Under these conditions, the performance of face recognition algorithms is greatly enhanced. However, there is still a need for more robust and efficient face recognition algorithms to address challenges such as changes in illumination, variations in pose and expression, variations in facial features due to aging, and altered appearances due to disguise [1].

Face recognition algorithms generally use visible spectrum images for recognition because they provide a clear representation of facial features and face texture to differentiate between two individuals. However, visible spectrum images also possess several properties which affect the performance of recognition algorithms. For example, changes in lighting affect the representation of visible spectrum images and can influence feature extraction. Other variations in face images, such as facial hair, wrinkles, and expression, are also evident in visible spectrum images, and these variations increase the false rejection rate of face recognition algorithms. To address the challenges posed by visible spectrum images, researchers have used infrared images for face recognition [2], [3], [4], [5]. Among the infrared spectrum images, long wave infrared (LWIR) images possess several properties that are complementary to visible images. Long wave infrared or thermal images are captured in the range of 8-12 µm. These images represent the heat pattern of the object and are invariant to illumination and expression. Face images captured in the long wave infrared spectrum have less intra-class variation and help to reduce the false rejection rate of recognition algorithms. These properties of long wave infrared and visible images can be combined to improve the performance of face recognition algorithms.

In the literature, researchers have compared the performance of visible and thermal face recognition using several face recognition algorithms. These results show that under variations in expression and illumination, thermal images provide better recognition performance than visible images [2], [6], [7]. Further, several fusion algorithms have been proposed to fuse the information extracted from visible and LWIR face images at the image level [8], [9], [10], [11], feature level [10], [11], [12], match score level [12], and decision level [12]. Information fusion of multispectral images provides better performance than either visible or infrared spectrum images alone. However, research in multispectral information fusion is relatively new, and intelligent techniques such as granular computing, Support Vector Machines (SVM), and the theory of evidence have not been explored. These intelligent techniques can enhance recognition performance by providing better generalization capabilities to handle imprecise information.

In this paper, we propose algorithms to fuse long wave infrared and visible face images at the image level and match score level. We first propose the formulation of the 2ν-Granular Support Vector Machine (2ν-GSVM) for pattern classification,


which is used in the proposed image fusion algorithm. The proposed image fusion algorithm learns the properties of the multispectral face images at different resolution and granularity levels to determine the optimal information and combines them to generate a fused image. We then apply the 2D log polar Gabor [13] and local binary pattern [14] face recognition algorithms to extract global and local features from the fused image. The match scores obtained by matching these features are fused using the proposed Dezert Smarandache (DSm) fusion algorithm [15], [16]. The proposed DSm match score fusion algorithm is based on evidence theory and performs fusion depending on the evidence and belief of the match scores. On the Notre Dame [17], [18] and Equinox [19] face databases, this integrated hierarchical scheme yields a verification accuracy of more than 99.5%.

The rest of the paper is organized as follows. Section 2 describes the proposed formulation of the 2ν-Granular SVM. In Section 3, we describe the proposed visible and infrared face image fusion algorithm, which dynamically and locally computes the fusion weights using 2ν-GSVM to generate the fused face image. We then present an overview of DSm theory and the proposed DSm match score fusion algorithm in Section 4. In Section 5, we combine the two levels of fusion to present the integrated multilevel image fusion and match score fusion algorithm. Section 6 describes the databases and existing algorithms used for validation of the proposed algorithms. The experimental results are summarized in Section 7.

2 2ν-Granular Support Vector Machine

The support vector machine is widely used in classification problems because it is designed to circumvent the overfitting problem [20], [21], [22] and to classify new, unseen data optimally. In the literature, different variants of the support vector machine have been proposed, such as SVM, ν-SVM, and dual ν-SVM (2ν-SVM). These variants are designed to improve classification accuracy and address other challenges such as reducing time complexity and classifying with a disparate number of training samples per class. In our previous research [12], we used 2ν-SVM for feature fusion, match score fusion, and expert fusion. We observed that 2ν-SVM provides better classification accuracy than classical SVM and is computationally more efficient. Recently, Tang et al. [23], [24], [25], [26] applied the concept of granular computing [27], [28], [29], [30] to SVM and proposed the Granular SVM (GSVM), which is more adaptive to the data distribution than SVM. Tang et al. have also shown that for several classification applications, GSVM outperforms SVM both in classification accuracy and computational time. In this paper, we extend the formulation to the 2ν-Granular Support Vector Machine (2ν-GSVM), which embodies the properties of both


GSVM and 2ν-SVM. We first describe the formulation of 2ν-SVM [22], followed by the granular modeling of 2ν-SVM.

Let {xi, yi} be a set of N data vectors with xi ∈ ℝ^d, yi ∈ {+1, −1}, and i = 1, ..., N. Here xi is the ith data vector belonging to the binary class yi. According to Chew et al. [22], the objective of training a 2ν-SVM is to find the hyperplane that separates the two classes with the widest margin, i.e.,

wϕ(x) + b = 0 (1)

subject to

yi (wϕ(xi) + b) ≥ ρ − ψi,  ρ, ψi ≥ 0 (2)

so as to minimize

(1/2) ‖w‖² − Σi Ci (νρ − ψi) (3)

where ρ is the position of the margin and ν is the error parameter. ϕ(x) is the mapping function used to map the data space to the feature space, providing generalization for decision functions that may not be linear in the training data. Ci(νρ − ψi) is the cost of errors, w is the normal vector, b is the bias, and ψi is the slack variable for classification errors; slack variables are introduced to handle classes which cannot be separated by a hyperplane. Let ν+ and ν− be the error parameters for training the positive and negative classes respectively. Using these, the error parameter ν is calculated as

ν = 2ν+ν− / (ν+ + ν−),  0 < ν+ < 1 and 0 < ν− < 1. (4)

The error penalty Ci is defined as

Ci = C+ if yi = +1, C− if yi = −1 (5)

where

C+ = [n+ (1 + ν+/ν−)]⁻¹ (6)

C− = [n− (1 + ν−/ν+)]⁻¹ (7)
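As a concrete check of Equations 4, 6, and 7, the class-wise error parameter and penalties reduce to direct arithmetic. This is an illustrative sketch, not the authors' code; the function name `error_params` is ours.

```python
def error_params(nu_plus, nu_minus, n_plus, n_minus):
    """Compute the combined error parameter and class penalties.

    nu_plus, nu_minus: per-class error parameters in (0, 1).
    n_plus, n_minus:   number of positive / negative training points.
    """
    nu = 2 * nu_plus * nu_minus / (nu_plus + nu_minus)    # Eq. (4)
    c_plus = 1.0 / (n_plus * (1 + nu_plus / nu_minus))    # Eq. (6)
    c_minus = 1.0 / (n_minus * (1 + nu_minus / nu_plus))  # Eq. (7)
    return nu, c_plus, c_minus
```

With balanced classes (ν+ = ν− and n+ = n−) the two penalties coincide, recovering the symmetric ν-SVM setting.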


and n+ and n− are the number of training points in the positive and negative classes respectively. Further, the Wolfe dual formulation of the 2ν-SVM can be written as

L = Σi αi − (1/2) Σi,j αi αj yi yj K(xi, xj) (8)

where i, j ∈ {1, ..., N}, αi and αj are the Lagrange multipliers, and K(·) is the kernel function. Finally, an iterative decomposition training based optimization algorithm [22] is used to train the 2ν-Granular SVM.

In this paper, we extend the formulation of 2ν-SVM using a granular computing approach similar to [25]. Granular computing is a knowledge-oriented divide and conquer approach to problem solving [27], [28], [29], [30]. In granular computing, information is divided into subproblems called granules, and these subproblems are solved individually at different granularity levels. Using this concept, 2ν-GSVM is formulated as follows:

Let the complete feature space be divided into k subspaces with one 2ν-SVM operating on each subspace. The ith 2ν-SVM is represented by 2νSVMi, where i = 1, 2, ..., k. From each subspace, we obtain the corresponding Li using Equation 8. We then compute the compound margin width W using all the Li values:

W = | Σi=1..k (ti/t) (2νSVMi :→ Li) − L0 | (9)

where ti is the number of training data in the ith subspace and t = Σi=1..k ti. 2νSVMi :→ Li represents the SVM operating on the ith subspace. 2ν-SVM learning yields Li at the local level, and L0 is obtained by learning another 2ν-SVM on the complete feature space at the global level. This equation provides the margin width associated with a hyperplane. However, there are different ways to divide the feature space, and hence a different hyperplane is obtained for each granule generation method. We compute the classification accuracy of all the hyperplanes on the training data and then select the hyperplane that optimally classifies the training data. In contrast to a single SVM, which deals with a large parameter space and incurs a long training time, 2ν-Granular SVM uses multiple SVMs to learn both the local and global properties of the training data at different granularity levels. This 2ν-GSVM is then used for fusing multispectral face images.
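Once the per-granule dual values Li and the global value L0 are available from training, Equation 9 is simple arithmetic. A minimal sketch (the function name is ours, and the Li and L0 values are assumed to come from already-trained 2ν-SVMs):

```python
def compound_margin_width(L_local, t_local, L0):
    """Eq. (9): size-weighted combination of per-granule duals minus
    the global dual L0 learned on the complete feature space.

    L_local: list of dual objective values L_i, one per subspace.
    t_local: list of training-set sizes t_i, one per subspace.
    L0:      dual objective of the global 2nu-SVM.
    """
    t = sum(t_local)
    weighted = sum(ti * Li for ti, Li in zip(t_local, L_local)) / t
    return abs(weighted - L0)
```

In the hyperplane-selection step described above, this value would be computed once per candidate partition of the feature space.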


[Figure 1: the visible and LWIR face images each pass through a 3-level DWT and a 2ν-GSVM; the fused DWT coefficients are inverted (IDWT) to produce the fused face image.]

Fig. 1. Schematic diagram of the proposed 2ν-GSVM image fusion algorithm.

3 Multispectral Face Image Fusion using 2ν-GSVM

In multispectral face recognition, visible images provide the reflectance property and long wave infrared images provide the thermal property. In 2ν-GSVM based image fusion, we combine these properties to generate a fused image which possesses both and can be used to improve recognition performance. Although there are several multispectral face image fusion algorithms in the literature, they have limitations which affect face recognition performance. The genetic algorithm based fusion proposed by Bebis et al. [31] depends on a good choice of fitness function. The fusion algorithm proposed by Kong et al. [4] relies on empirically chosen constant weights assigned to the wavelet coefficients of the visible and LWIR images. In real world applications, weights should be assigned dynamically and locally for optimal multispectral information fusion. In this section, we propose a multispectral face image fusion algorithm which dynamically and locally computes the fusion weights using 2ν-GSVM. Fig. 1 illustrates the steps involved in the proposed image fusion algorithm. The algorithm is divided into two steps: image registration and image fusion.


3.1 Mutual Information based Multispectral Face Image Fusion

Visible and infrared images captured at different time instances can have variations due to camera angle, expression, and geometric deformations. To optimally fuse two multispectral images, we first need to minimize the linear and non-linear differences between them. In this section, we propose the use of a mutual information based registration algorithm for registering visible and thermal face images. Mutual information is a concept from information theory in which statistical dependence is measured between two random variables. Researchers in medical imaging have used mutual information based registration algorithms to effectively fuse images from different modalities such as CT and MRI [32], [33]. Registration of multispectral face images proceeds as follows:

Let V and I be the input visible and infrared face images for registration. The mutual information between the two face images can be represented as

M(V, I) = H(V) + H(I) − H(V, I) (10)

where H(·) is the entropy of an image and H(V, I) is the joint entropy. Registering V with respect to I requires maximization of the mutual information between I and V, thus maximizing the entropies H(V) and H(I) and minimizing the joint entropy H(V, I). Mutual information based registration algorithms are sensitive to changes that occur in the distributions as a result of differences in the overlap regions. To address this issue, Studholme et al. [34] proposed normalized mutual information, which can be represented as

NM(V, I) = (H(V) + H(I)) / H(V, I) (11)
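Equation 11 can be estimated from a joint intensity histogram of the two images. The following numpy sketch is ours (the bin count and helper names are our choices, not from the paper):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a (possibly unnormalized-free) pmf."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def normalized_mutual_information(v, i, bins=32):
    """Eq. (11): (H(V) + H(I)) / H(V, I) from a joint histogram."""
    joint, _, _ = np.histogram2d(v.ravel(), i.ravel(), bins=bins)
    joint /= joint.sum()                 # joint pmf
    pv = joint.sum(axis=1)               # marginal of V
    pi = joint.sum(axis=0)               # marginal of I
    return (entropy(pv) + entropy(pi)) / entropy(joint.ravel())
```

For a perfectly registered pair of identical images the measure attains its maximum of 2, which is why the search in Equation 13 maximizes it over candidate transformations.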

The registration is performed over a transformation space, T, such that

T = [ a b 0 ; c d 0 ; e f 1 ] (12)

where a, b, c, d are the shear, scale, and rotation parameters, and e, f are the translation parameters. Using the normalized mutual information, we define a search strategy to find the transformation parameters, T*, by exploring the


Fig. 2. Example of visible and infrared image registration on the Notre Dame face database [18]. The visible image is registered with respect to the LWIR image. The size of the detected visible face image is 855 × 1024 and that of the infrared face image is 115 × 156.

search space T:

T* = arg max{T} {NM(I, T(V))} (13)

Multispectral face images V and I are thus registered using the transformation parameters T*. This registration algorithm is linear in nature. To accommodate non-linear variations in face images, we apply a multiresolution image pyramid scheme in which we first build Gaussian pyramids of both the visible and thermal face images. Registration parameters are estimated at the coarsest level and used to warp the face image at the next level of the pyramid. The process is repeated iteratively through each level of the pyramid, and the final transformed visible face image is obtained at the finest pyramid level. In this manner, we handle large global variations at the coarsest resolution level and local non-linear variations at the finest resolution level. Fig. 2 shows an example of the registration algorithm in which the visible face image is registered with respect to the LWIR face image.

3.2 Proposed 2ν-GSVM Image Fusion Algorithm

In the proposed face image fusion algorithm, registered visible and infrared face images are fused using 2ν-GSVM and the Discrete Wavelet Transform (DWT) [35]. The fusion algorithm uses the activity level, a, of a face image, defined as

a = √( (1/(XY)) [ Σi=0..X−1 Σj=1..Y−1 {V(i, j) − V(i, j−1)}² + Σj=0..Y−1 Σi=1..X−1 {V(i, j) − V(i−1, j)}² ] ) (14)

where V is the visible face image, and X and Y are the numbers of rows and columns of the face image respectively. The proposed fusion algorithm is divided into two parts: (1) training and (2) classification and fusion.
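Equation 14 amounts to the root mean square of the image's horizontal and vertical first differences; a numpy sketch (function name ours):

```python
import numpy as np

def activity_level(v):
    """Eq. (14): RMS of horizontal and vertical first differences
    of an image (or of an 8 x 8 subband window)."""
    X, Y = v.shape
    dh = np.diff(v, axis=1)   # V(i, j) - V(i, j-1)
    dv = np.diff(v, axis=0)   # V(i, j) - V(i-1, j)
    return np.sqrt((np.sum(dh ** 2) + np.sum(dv ** 2)) / (X * Y))
```

A perfectly flat window has activity level 0, while textured windows score higher, which is what lets the 2ν-GSVM separate informative (Good) from uninformative (Bad) windows in the steps below.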

Training 2ν-GSVM: We train the 2ν-GSVM for image fusion using the activity levels of labeled visible and infrared training face images. The training algorithm is described as follows:

Step 1: Visible and infrared training face images are decomposed using DWT to obtain 3-level approximation, horizontal, vertical, and diagonal subbands.

Step 2: Let VLLj, VLHj, VHLj, and VHHj be the subbands of the visible face image, where j = 1, 2, 3 represents the decomposition level. Similarly, let ILLj, ILHj, IHLj, and IHHj be the subbands of the infrared face image corresponding to each decomposition level j. Each subband of both the visible and infrared face images is divided into windows of size 8 × 8, and the activity level of each window is computed using Equation 14.

Step 3: The activity levels of all labeled training face images are used as input to the 2ν-GSVM. In training, two 2ν-GSVMs are learned: one for visible face images and another for infrared face images.

Step 4: The 2ν-GSVM trained for visible images classifies the activity levels of visible spectrum face images as Good (+1) or Bad (−1). Similarly, the 2ν-GSVM trained for infrared face images classifies the activity levels of infrared face images into the Good or Bad class.

Classification and Fusion: We classify the properties of visible and infrared face images using the trained 2ν-GSVMs. This classification is used to dynamically compute the weights of the visible and infrared face images in multispectral image fusion.

Step 1: The visible and infrared face images of an individual are provided as input. As in Steps 1 and 2 of the training algorithm, both input face images are decomposed with a 3-level DWT and the activity levels of 8 × 8 windows are computed. Let aV and aI be the activity levels computed from the visible and infrared face images respectively.

Step 2: The 2ν-GSVM classifier is used to classify the activity levels of the different subbands of the visible face image as Good or Bad. A binary decision matrix, dV, is generated which contains the value 1 if the activity level is Good and 0 if it is Bad.

Step 3: Similarly to Step 2, the activity levels of the infrared face image are classified and a binary decision matrix, dI, is generated.


Step 4: Weight matrices ωV and ωI are computed using the binary decision matrices (dV and dI) and the following three conditions:

(1) If dV(i) = dI(i) = 1, then ωV(i) = ωI(i) = 0.5.

(2) If dV(i) = 1 and dI(i) = 0, then ωV(i) > ωI(i), with

ωV(i) = |aV(i) + 2aI(i) − aImedian| / (aV(i) + aI(i)) (15)

ωI(i) = |aImedian − aI(i)| / (aV(i) + aI(i)) (16)

(3) If dV(i) = 0 and dI(i) = 1, then ωV(i) < ωI(i), with

ωV(i) = |aVmedian − aV(i)| / (aV(i) + aI(i)) (17)

ωI(i) = |aI(i) + 2aV(i) − aVmedian| / (aV(i) + aI(i)) (18)

where i is the window count, and aVmedian and aImedian are the median values of the aV and aI matrices respectively. Further, in all three cases, ωV(i) + ωI(i) = 1.

In condition 1, the activity levels of both the visible and infrared image windows are classified as Good and hence equal weights are assigned. Condition 2 states that if the activity level of a window of the visible face image is classified as Good and that of the corresponding window of the infrared face image is classified as Bad, then a higher weight is assigned to the visible face image window. In condition 3, a higher weight is assigned to the infrared face image window because the 2ν-GSVM classifies the activity level of the visible face image window as Bad and that of the infrared face image window as Good.
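The three weighting conditions of Step 4 can be sketched as follows, assuming the per-window activity levels and binary Good/Bad decisions arrive as flat numpy arrays indexed by window count (the names and array layout are our assumptions):

```python
import numpy as np

def fusion_weights(a_v, a_i, d_v, d_i):
    """Eqs. (15)-(18): per-window weights for visible / infrared fusion.

    a_v, a_i: activity levels per window; d_v, d_i: 1 = Good, 0 = Bad.
    """
    a_v_med, a_i_med = np.median(a_v), np.median(a_i)
    denom = a_v + a_i
    # Condition 1 (both Good) is the default: equal weights.
    w_v = np.full_like(a_v, 0.5, dtype=float)
    w_i = np.full_like(a_i, 0.5, dtype=float)
    # Condition 2: visible Good, infrared Bad -> favour the visible window.
    m = (d_v == 1) & (d_i == 0)
    w_v[m] = np.abs(a_v[m] + 2 * a_i[m] - a_i_med) / denom[m]
    w_i[m] = np.abs(a_i_med - a_i[m]) / denom[m]
    # Condition 3: visible Bad, infrared Good -> favour the infrared window.
    m = (d_v == 0) & (d_i == 1)
    w_v[m] = np.abs(a_v_med - a_v[m]) / denom[m]
    w_i[m] = np.abs(a_i[m] + 2 * a_v[m] - a_v_med) / denom[m]
    return w_v, w_i
```

Note that under conditions 2 and 3 the two numerators sum to the denominator whenever the absolute values resolve with the expected signs, which is what makes ωV(i) + ωI(i) = 1 hold.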

Step 5: The visible and infrared face images are then fused using Equation 19:

Fj,8×8(i) = ωV(i) Vj,8×8(i) + ωI(i) Ij,8×8(i) (19)

where Fj is the fused subband, j represents the approximation, vertical, horizontal, and diagonal subbands, the subscript 8 × 8 denotes that the fusion is performed at the window level with windows of size 8 × 8, and i represents the window count.

Step 6: Finally, the inverse DWT is applied on the fused subbands to generate the fused multispectral face image, F. Fig. 3 shows an example of the visible, infrared, and fused face images of an individual.
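For a single subband, the window-level blend of Equation 19 might look like the following sketch, with weights indexed by window count in row-major order (an assumption on our part; the DWT/IDWT steps are omitted):

```python
import numpy as np

def fuse_subband(v_band, i_band, w_v, w_i, win=8):
    """Eq. (19): blend non-overlapping win x win windows of two
    corresponding subbands using their per-window weights."""
    fused = np.empty_like(v_band, dtype=float)
    k = 0  # window count i in Eq. (19)
    for r in range(0, v_band.shape[0], win):
        for c in range(0, v_band.shape[1], win):
            fused[r:r + win, c:c + win] = (
                w_v[k] * v_band[r:r + win, c:c + win]
                + w_i[k] * i_band[r:r + win, c:c + win])
            k += 1
    return fused
```

Repeating this over all approximation and detail subbands and applying the inverse DWT would yield the fused face image F of Step 6.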


Fig. 3. Sample result of the proposed image fusion algorithm (registered visible, LWIR, and fused face images).

4 Match Score Fusion using DSm Theory

In multimodal biometrics, researchers have proposed several match score fusion algorithms such as the AND/OR rules [36], the Sum rule [36], and SVM fusion [37]. Another mathematical paradigm of information fusion is based on the theory of evidence. The Dempster Shafer theory (DST) [38] based fusion algorithm is one example of this paradigm, in which uncertain and fuzzy information is efficiently fused. In multimodal biometrics, it has been shown that DS theory based fusion algorithms perform better than existing fusion algorithms [39]. However, DS theory has some limitations, as reported by Zadeh [40], [41], [42], Dubois and Prade [43], and Voorbraak [44]. Researchers have shown that DST results are not trustworthy when the conflict between different sources is large. Other limitations of the Dempster rule of combination are reported in [15]. Recently, a Dezert Smarandache theory based fusion algorithm [15], [16] has been proposed to circumvent the limitations of other evidence theory based fusion algorithms. DSm theory is a powerful mathematical model for information fusion which includes both Bayes theory and DS theory as special cases. In this section, we first present a brief overview of DSm theory and the hybrid DSm rule of combination [16], followed by the proposed match score fusion algorithm.

4.1 Overview of Dezert Smarandache Theory

Dezert Smarandache (DSm) theory is a powerful tool for representing and fusing uncertain or conflicting knowledge. It can solve complex static or dynamic fusion problems using plausible and paradoxical reasoning [15], [16]. Since identity verification is a two class problem, with the classes being genuine and impostor, we explain DSm theory for a two class problem.

Let Θ = {θ1, θ2} be the frame of discernment, which consists of a finite set of exhaustive and mutually exclusive hypotheses. The hyperpower set of the frame of discernment is defined as DΘ = {∅, θ1, θ2, θ1 ∪ θ2, θ1 ∩ θ2}. A mapping m(·) on Θ is defined as m(·) : DΘ → [0, 1] such that

ΣA∈DΘ m(A) = 1 (20)

m(A) is called the generalized basic belief assignment (gbba) of A, and m(∅) = 0. Further, a generalized belief function Bel is a mapping Bel : DΘ → [0, 1] such that

Bel(A) = ΣX⊆A, X∈DΘ m(X) (21)

More specifically,

Bel(A) ≡ Bel{y,t}^{Θ,DΘ}[Ey,t](w0 ∈ A) = x (22)

This equation denotes the degree of belief x of classifier y at time t that w0 belongs to A, where A ∈ DΘ. The belief is based on the evidential corpus Ey,t held by y at time t. To simplify notation, the generalized belief function can also be written as Bel(A). Further, the generalized belief function Bel uniquely corresponds to the generalized basic belief assignment m and vice versa.

For fusing two information sources, X and Y, the DSm rule of combination [16] is defined as

mM(Θ)(A) = ψ(A) [S1(A) + S2(A) + S3(A)] (23)

where M(Θ) is the model over which DSm theory operates and ψ(A) is the characteristic non-emptiness function of A, which is 1 if A ∉ ∅ and 0 otherwise. S1(A), S2(A), and S3(A) are defined as

S1(A) = Σ{X,Y ∈ DΘ, X∩Y = A} m1(X) m2(Y)

S2(A) = Σ{X,Y ∈ Φ, [υ = A] ∨ [(υ ∈ Φ) ∧ (A = It)]} m1(X) m2(Y)

S3(A) = Σ{X,Y ∈ DΘ, X∪Y = A, X∩Y ∈ Φ} m1(X) m2(Y) (24)

where It is the total ignorance and is the union of all θi (i = 1, 2), i.e., It = θ1 ∪ θ2. Φ = {Φ, φ} is the set of all elements of DΘ which are empty under the constraints of the specific problem, and φ is the empty set. υ = u(X) ∪ u(Y), where u(X) is the union of all singletons θi that compose X and Y. Here, S1(A) corresponds to the classical DSm rule on the free DSm model [15], S2(A) represents the mass of all relatively and absolutely empty sets, which is


Fig. 4. Block diagram of the proposed DSm match score fusion algorithm, in which the match scores obtained from two classifiers are fused.

transferred to the total or relative ignorance, and S3(A) transfers the sum of relative empty sets to the non-empty sets. An excellent description of DSm theory is presented in [16].

Probability based approaches have limitations because they deal with basic probability assignments m(·) ∈ [0, 1] and m(θ1) + m(θ2) = 1. Further, Dempster Shafer theory deals with basic belief assignments m(·) ∈ [0, 1] such that m(θ1) + m(θ2) + m(θ1 ∪ θ2) = 1. In contrast, DSm theory is more general and deals with belief functions associated with the generalized belief assignment such that m(θ1) + m(θ2) + m(θ1 ∪ θ2) + m(θ1 ∩ θ2) = 1.
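For the two-class hyperpower set, the S1 term of Equation 23 can be sketched as a table-driven combination. The string labels and the restriction to the free DSm model (no emptiness constraints, so the S2 and S3 terms vanish) are our simplifications for illustration:

```python
# Elements of the hyperpower set D_Theta for the two-class problem.
G, I, U, N = "g", "i", "g|i", "g&i"   # genuine, impostor, union, intersection

def intersect(x, y):
    """Intersection of two hyperpower-set elements (free DSm model,
    where g & i is a legitimate non-empty element)."""
    if x == y:
        return x
    if N in (x, y):
        return N          # anything intersected with (g & i) is (g & i)
    if x == U:
        return y          # (g | i) absorbs the other operand
    if y == U:
        return x
    return N              # g with i: the paradoxical element (g & i)

def dsm_combine(m1, m2):
    """Classical DSm rule (the S1 term of Eq. 23): the product mass of
    every pair of focal elements accumulates on their intersection."""
    fused = {G: 0.0, I: 0.0, U: 0.0, N: 0.0}
    for x, mx in m1.items():
        for y, my in m2.items():
            fused[intersect(x, y)] += mx * my
    return fused
```

Unlike Dempster's rule, no mass is discarded or renormalized away: conflicting evidence accumulates on the paradoxical element g & i instead.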

4.2 Proposed Multimodal Match Score Fusion Algorithm

In this section, we propose a novel match score fusion algorithm using DSm theory. Fig. 4 shows the steps involved in the proposed match score fusion algorithm for a single image. Two match scores are computed by matching the global features and local features extracted from the probe and gallery face images. These two match scores are fused using DSm theory. We first define:

• Frame of discernment: Θ = {θgenuine, θimpostor}.
• Dedekind lattice: DΘ = {θgenuine, θimpostor, θgenuine ∪ θimpostor, θgenuine ∩ θimpostor}.

Let s1 and s2 be the two match scores computed from the two face recognition algorithms. Let us assume that the distribution of match scores for an element of DΘ is Gaussian:

p(si, µij, σij) = (1/(σij √(2π))) exp[ −(1/2) {(si − µij)/σij}² ] (25)

where µij and σij are the mean and standard deviation of the ith classifier corresponding to the jth element of DΘ. We use this Gaussian distribution to compute the generalized basic belief assignment,


mi(·) : DΘ \ {θgenuine ∪ θimpostor} → [0, 1].

Since, in biometrics, a match score can only belong to the genuine (θgenuine), impostor (θimpostor), or conflicting (θgenuine ∩ θimpostor) region, we set mi(θgenuine ∪ θimpostor) = 0.001 and compute the remaining gbbas as follows:

mi(j) = p(si, µij, σij) βij / Σj=1..|DΘ|−1 p(si, µij, σij) βij (26)

where βij is the prior of classifier i corresponding to the jth element of DΘ \ {θgenuine ∪ θimpostor}. We use βij as the verification accuracy computed on the training database; its value lies in the range (0, 1). The generalized basic belief assignments of the two classifiers, m1(·) and m2(·), are computed and fused using Equation 27.

mfused = m1 ⊕m2 (27)

where ⊕ represents the hybrid DSm rule of combination defined in Equation 23 of Section 4.1. Finally, a threshold T is used to classify the decision as accept or reject:

Decision = accept if mfused ≥ T, reject otherwise (28)
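Equations 25 and 26 can be sketched as follows. The dictionary keys, parameter values, and the handling of the pinned union mass are our reading of the text (the paper normalizes over the three non-union elements and sets the union mass to 0.001 separately):

```python
import math

def gauss(s, mu, sigma):
    """Eq. (25): Gaussian density of score s under element j's model."""
    return math.exp(-0.5 * ((s - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def gbba(s, params, beta):
    """Eq. (26): gbba of one classifier's score s.

    params: {element: (mu, sigma)} for genuine, impostor, conflict.
    beta:   {element: prior}, taken as training verification accuracy.
    """
    raw = {j: gauss(s, *params[j]) * beta[j] for j in params}
    z = sum(raw.values())
    m = {j: v / z for j, v in raw.items()}
    m["union"] = 0.001      # mi(genuine | impostor) pinned per the text
    return m
```

A score near the genuine-class mean yields a gbba dominated by the genuine element; fusing the two classifiers' gbbas with the DSm rule and thresholding the fused genuine mass then gives the accept/reject decision of Equation 28.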

To extend this algorithm to multispectral face images, we apply the feature extraction algorithms separately to the visible face image and the infrared face image. As shown in Fig. 5, we extract the global facial features using the 2D log polar Gabor transform [13] and the local facial features using the local binary pattern [14], and then match these features to compute the corresponding match scores. The two match scores for the visible face image are fused using the proposed DSm fusion algorithm. Similarly, the two match scores for the infrared face image are fused to generate its fused match score. Finally, the fused match scores of the visible face image and the infrared face image are fused to compute a composite match score for the multispectral face images. A decision of accept or reject is made using the composite match score.

5 Integration of Image Fusion and Match Score Fusion

The proposed image and match score level fusion algorithms are integrated to further improve the verification performance. Fig. 6 shows the block diagram


Fig. 5. Steps involved in the proposed DSm match score fusion algorithm. In this case, match scores obtained by applying the two classifiers on multispectral face images are fused.

Fig. 6. Steps involved in the proposed multilevel image fusion and match score fusion algorithm.

of the integrated multilevel fusion algorithm. Visible and infrared face images are first fused using the proposed 2ν-GSVM image fusion algorithm described in Section 3. The 2D log polar Gabor transform [13] and local binary pattern [14] feature extraction algorithms are then applied on the fused multispectral face image to extract global and local facial features. Match scores computed by matching these features are fused using the proposed DSm match score fusion algorithm described in Section 4.


Table 1
Number of visible and infrared image pairs in the training, gallery, and probe databases.

Face database | Training database | Gallery database | Probe database
Notre Dame    | 477               | 159              | 1815
Equinox       | 285               | 95               | 18715

6 Databases and Existing Algorithms used for Validation

To validate the proposed fusion algorithms, we used the Notre Dame [17], [18] and Equinox/NIST [19] multispectral face databases. In addition, we used the 2D log polar Gabor transform [13] and local binary pattern [14] algorithms for face verification. For comparing the performance of the proposed algorithms, we used several existing image fusion and match score fusion algorithms. In this section, we briefly describe the databases and algorithms used in our experiments.

6.1 Databases used for Validation

• Notre Dame Face Database: The Notre Dame face database [17], [18] contains LWIR and visible images from 159 classes with variations in expression, lighting, and time lapse. We chose three visible and LWIR face images with neutral expression for the training database, one neutral visible and LWIR image for the gallery database, and the remaining images comprise the probe dataset. Table 1 shows the details of the training, gallery, and probe images used in the experiments.

• Equinox Face Database: The Equinox face database [19] contains long wave infrared, medium wave infrared, short wave infrared, and visible face images pertaining to 95 individuals. Long wave infrared images are captured at 8-12 µm, medium wave infrared images at 3-5 µm, and short wave infrared images at 0.9-1.7 µm. The images are captured under different illumination conditions and contain variations in expression and glasses. In our research, we used only the long wave infrared and visible images. The number of LWIR and visible images per class varies from 43 to 516. We chose three visible and LWIR images with neutral expression and without glasses for training, one LWIR and visible image with neutral expression, uniform illumination, and no glasses for the gallery, and the remaining images as probes.


6.2 Face Recognition Algorithms

We first detect the face region from the input images. Visible face images are detected using the triangle based face detection algorithm [45], whereas a thresholding based face detection algorithm [4] is applied to detect LWIR face images. Global and local facial features are extracted from these detected face images using the face recognition algorithms described below.

• 2D Log Polar Gabor Transform: In the 2D log polar Gabor transform based face recognition algorithm, the face image is transformed into polar coordinates and textural features are extracted using the 2D log polar Gabor transform [13]. These features are matched using the Hamming distance to generate match scores.

• Local Binary Pattern: In this algorithm, a face image is divided into several regions and weighted LBP features are extracted to generate a feature vector [14]. Matching of two LBP feature vectors is performed using the weighted Chi-square distance measure.
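The two distance measures above can be sketched as follows; a minimal sketch assuming binary templates for the Gabor features and region-wise LBP histograms, with illustrative uniform region weights rather than the weighting scheme of [14].

```python
import numpy as np

def hamming_distance(code_a, code_b):
    """Fraction of disagreeing bits between two binary feature
    templates, as used to match 2D log polar Gabor features."""
    a = np.asarray(code_a, dtype=bool)
    b = np.asarray(code_b, dtype=bool)
    return np.count_nonzero(a ^ b) / a.size

def weighted_chi_square(hist_a, hist_b, weights):
    """Weighted Chi-square distance between region-wise LBP histograms.

    hist_a, hist_b: (n_regions, n_bins) arrays of histogram counts;
    weights: per-region importance weights (illustrative here)."""
    a = np.asarray(hist_a, dtype=float)
    b = np.asarray(hist_b, dtype=float)
    num = (a - b) ** 2
    den = a + b
    # Per-bin terms; empty bins (den == 0) contribute nothing.
    terms = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
    return float(np.dot(np.asarray(weights, dtype=float), terms.sum(axis=1)))
```

For both measures, lower values indicate a better match; such distances are then converted into the match scores that enter the fusion stage.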

6.3 Existing Fusion Algorithms

To compare the performance of the proposed 2ν-GSVM based image fusion algorithm, we chose the image fusion algorithms proposed by Kong et al. [4] and Singh et al. [10]. Both algorithms use the discrete wavelet transform to fuse infrared and visible face images. To compare the performance of the proposed DSm based match score fusion algorithm, we chose four existing fusion algorithms: the Product rule [36], the Sum rule [36], SVM fusion [37], and DST fusion [39]. The Product rule and Sum rule are statistical fusion rules, SVM fusion is a learning based algorithm, and DST fusion is based on evidence theory.

7 Experimental Validation of the Proposed Algorithms

In this section, we perform experiments to validate the proposed image fusion and match score fusion algorithms. Using the training images, we train both the 2ν-GSVM image fusion algorithm and the DSm match score fusion algorithm. For 2ν-GSVM learning and classification, we used the Radial Basis Function (RBF) kernel with the kernel parameter set to 4. The performance is evaluated in terms of verification accuracy at 0.01% False Accept Rate (FAR). The experimental validation is divided into four parts:


(1) Validation of the proposed 2ν-GSVM image fusion algorithm.
(2) Validation of the proposed DSm match score fusion algorithm.
(3) Validation of the integrated multilevel image fusion and match score fusion algorithm.
(4) Statistical evaluation of the proposed fusion algorithms using the Half Total Error Rate (HTER).
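All four experiments report verification accuracy at a fixed 0.01% FAR. The sketch below shows one common way to compute such an operating point from sets of genuine and impostor similarity scores; the accuracy convention (genuine accept rate at the threshold) is an assumption, and the score distributions are synthetic, not from the paper.

```python
import numpy as np

def verification_accuracy_at_far(genuine, impostor, target_far=0.0001):
    """Genuine accept rate (%) at a fixed False Accept Rate.

    One common reading of "verification accuracy at 0.01% FAR";
    assumes similarity scores where higher means a better match.
    """
    imp = np.sort(np.asarray(impostor, dtype=float))
    # Smallest threshold that accepts at most target_far of impostors.
    k = int(np.ceil((1.0 - target_far) * imp.size))
    threshold = imp[min(k, imp.size - 1)]
    gar = float(np.mean(np.asarray(genuine, dtype=float) >= threshold))
    return 100.0 * gar, threshold

# Synthetic score distributions for illustration only.
rng = np.random.default_rng(0)
genuine_scores = rng.normal(0.8, 0.1, 2_000)
impostor_scores = rng.normal(0.3, 0.1, 50_000)
accuracy, thr = verification_accuracy_at_far(genuine_scores, impostor_scores)
```

Note that estimating a 0.01% FAR point reliably requires a large impostor set, which is why the probe databases in Table 1 contain thousands of image pairs.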

7.1 Validation of the Proposed 2ν-GSVM Image Fusion Algorithm

The performance of the proposed image fusion algorithm is evaluated using the Notre Dame and Equinox face databases. We compared the performance with two existing multispectral face image fusion algorithms, referred to as Kong image fusion [4] and Singh image fusion [10]. For evaluation, we separately computed the verification accuracies of the visible face image and the long wave infrared face image using both the 2D log polar Gabor and local binary pattern face verification algorithms. The third and fourth columns of Table 2 summarize the verification performance of the visible face image and the long wave infrared face image respectively, using both verification algorithms. These results establish the baseline for evaluating and comparing the performance of the fusion algorithms. We then computed the verification accuracies with the proposed 2ν-GSVM multispectral face image fusion algorithm and the existing image fusion algorithms. The results summarized in Table 2 show that the proposed image fusion algorithm outperforms both existing fusion algorithms by at least 3.9% on the Notre Dame database and 4.9% on the Equinox database. The ROC plots in Figs. 7 and 8 show the results for the Notre Dame and Equinox face databases respectively.

The proposed 2ν-GSVM image fusion algorithm performs correct classification of multispectral face information at different levels of granularity, which is subsequently used for computing the dynamic weights of the visible and LWIR face images. This granular learning results in better generalization and fusion of high entropy visible and LWIR face features. Further, as shown in Fig. 9, fused face images generated by the proposed image fusion algorithm are more invariant to illumination than the visible images. The fused images also provide more distinguishing information than the LWIR face images. These properties of the proposed 2ν-GSVM image fusion algorithm lead to improved face verification performance.

7.2 Validation of the Proposed DSm Match Score Fusion Algorithm

To validate the performance of the proposed DSm match score fusion algorithm, we compute the composite match score as described in Section 4. First,


Table 2
Verification performance of the proposed 2ν-GSVM and existing image fusion algorithms at 0.01% FAR. All entries are verification accuracies (%).

Face database | Recognition algorithm | Visible image | LWIR image | Kong image fusion [4] | Singh image fusion [10] | Proposed image fusion
Notre Dame | 2D log polar Gabor | 89.36 | 88.09 | 86.74 | 91.88 | 95.85
Notre Dame | Local binary pattern | 88.20 | 87.44 | 85.87 | 91.79 | 94.80
Equinox | 2D log polar Gabor | 78.91 | 82.75 | 80.83 | 90.06 | 94.98
Equinox | Local binary pattern | 76.80 | 81.52 | 80.69 | 89.93 | 94.71

the match scores are obtained from the visible and the LWIR face images. Note that these images are not fused; as shown in Fig. 5, the fusion occurs at the match score level. Next, the fused match scores obtained from each image are combined using the proposed DSm algorithm to generate the composite match score. The verification accuracies are summarized in Table 3 and Fig. 10. On both databases, the proposed DSm match score fusion algorithm yields more than 98% accuracy. On the Notre Dame face database, the proposed algorithm performs at least 1.24% better than the second best, the Dempster Shafer theory (DST) based match score fusion algorithm [39]. On the Equinox database, the proposed algorithm provides 1.57% and 3.03% better verification performance than the DST and SVM fusion algorithms respectively.

The performance of existing match score fusion algorithms decreases when the two face recognition classifiers yield conflicting decisions; for example, the classifier using global features may decide to accept while the classifier using local features decides to reject. In such cases, the proposed DSm match score fusion algorithm operates on the intersection region, (θ_genuine ∩ θ_impostor), to make an optimal decision using the prior information of the classifiers. Existing statistical and learning based fusion algorithms, including the DST fusion algorithm, do not account for this conflicting region. Hence, the plausible and paradoxical reasoning of DSm theory provides better verification performance.


Fig. 7. ROC plots of the proposed 2ν-GSVM and existing image fusion algorithms on the Notre Dame face database. Results are computed using (a) 2D log polar Gabor and (b) local binary pattern based verification algorithms.

7.3 Validation of the Integrated Multilevel Image and Match Score Fusion Algorithm

In the previous experiments, we established that image fusion using 2ν-GSVM improves the verification performance. Also, at the match score level, fusion using DSm theory improves the verification accuracy even when the


Fig. 8. ROC plots of the proposed 2ν-GSVM and existing image fusion algorithms on the Equinox face database. Results are computed using (a) 2D log polar Gabor and (b) local binary pattern based verification algorithms.

multispectral images are not fused. In this section, we integrate both the image fusion and match score fusion algorithms, as described in Section 5, to evaluate the face verification performance. The validation results of the integrated multilevel fusion algorithm are summarized in Table 4. The integrated fusion algorithm yields 99.91% verification accuracy on the Notre Dame face database and 99.54% on the Equinox face database. The results also show that the integration of the fusion algorithms further improves the verification performance


Fig. 9. Results of the proposed 2ν-GSVM image fusion algorithm on the Equinox face database [19]: visible image, LWIR image, and fused image.


Table 3
Verification performance of the proposed DSm and existing match score fusion algorithms at 0.01% FAR. All entries are verification accuracies (%).

Face database | Product rule fusion [36] | Sum rule fusion [36] | SVM fusion [37] | DST fusion [39] | Proposed DSm fusion
Notre Dame | 95.07 | 97.12 | 97.46 | 97.58 | 98.82
Equinox | 93.20 | 94.33 | 95.05 | 96.51 | 98.08

Table 4
Verification performance of the proposed integration of image fusion and match score fusion algorithms at 0.01% FAR. All entries are verification accuracies (%).

Face database | Proposed 2ν-GSVM image fusion (2D log polar Gabor) | Proposed 2ν-GSVM image fusion (local binary pattern) | Proposed DSm match score fusion | Integrated image and match score fusion
Notre Dame | 95.85 | 94.80 | 98.82 | 99.91
Equinox | 94.98 | 94.71 | 98.08 | 99.54

by at least 1.08% compared to image fusion alone or match score fusion alone. The high verification accuracies (> 99.5%) on the Notre Dame and Equinox face databases show the robustness of the proposed integrated fusion algorithm to variations in illumination, expression, and occlusion due to glasses. The computational time of the proposed integrated image and match score fusion algorithm, including image registration, feature extraction, and matching, is 4.3 seconds on a P-IV 3.2 GHz computer in the MATLAB environment.

7.4 Statistical Evaluation of the Proposed Image Fusion and Match Score Fusion Algorithms

The performance of a biometric system greatly depends on the database size and the images present in the database [46], and cannot be represented completely by ROC plots and verification accuracy alone. To systematically evaluate the performance, Bengio and Mariethoz [47] proposed a statistical test using the half total error rate and confidence intervals. In this section, we statistically evaluate the proposed image fusion and match score fusion algorithms using the HTER. The half total error rate is defined as,

HTER = (FAR + FRR) / 2        (29)


Fig. 10. ROC plots of the proposed DSm match score fusion and existing match score fusion algorithms on the (a) Notre Dame face database [18] and (b) Equinox face database [19].

Confidence intervals are computed around the HTER as HTER ± σ · Z_α/2, where σ and Z_α/2 are computed using Equations 30 and 31 [47].

σ = √( FAR(1 − FAR)/(4·NI) + FRR(1 − FRR)/(4·NG) )        (30)


Table 5
Confidence interval around the half total error rate of the proposed 2ν-GSVM image fusion, DSm match score fusion, and integrated multilevel fusion algorithms.

Face database | Fusion algorithm | HTER (%) | CI (%) at 90% | CI (%) at 95% | CI (%) at 99%
Notre Dame | Image fusion with 2D log polar Gabor | 2.08 | 0.68 | 0.81 | 1.07
Notre Dame | Image fusion with local binary pattern | 2.61 | 0.76 | 0.91 | 1.19
Notre Dame | Match score fusion | 0.59 | 0.37 | 0.44 | 0.58
Notre Dame | Integrated image fusion and match score fusion | 0.05 | 0.10 | 0.12 | 0.16
Equinox | Image fusion with 2D log polar Gabor | 2.52 | 0.26 | 0.31 | 0.41
Equinox | Image fusion with local binary pattern | 2.65 | 0.27 | 0.32 | 0.42
Equinox | Match score fusion | 0.97 | 0.17 | 0.20 | 0.26
Equinox | Integrated image fusion and match score fusion | 0.24 | 0.08 | 0.10 | 0.13

Z_α/2 = 1.645 for 90% CI; 1.960 for 95% CI; 2.576 for 99% CI        (31)

NG is the total number of genuine scores and NI is the total number of impostor scores.
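Equations 29-31 translate directly into a small helper; a sketch with illustrative error rates and comparison counts, not values from the paper:

```python
import math

Z = {90: 1.645, 95: 1.960, 99: 2.576}  # Equation 31

def hter_confidence(far, frr, n_impostor, n_genuine, confidence=95):
    """Half Total Error Rate (Equation 29) and the half-width of its
    confidence interval, HTER ± sigma * Z_alpha/2 (Equation 30).

    far and frr are error rates in [0, 1]; n_impostor and n_genuine
    are the numbers of impostor (NI) and genuine (NG) scores.
    """
    hter = (far + frr) / 2.0
    sigma = math.sqrt(far * (1.0 - far) / (4.0 * n_impostor)
                      + frr * (1.0 - frr) / (4.0 * n_genuine))
    return hter, sigma * Z[confidence]

# Illustrative values: FAR = 0.01%, FRR = 1%, with hypothetical
# numbers of impostor and genuine comparisons.
hter, margin95 = hter_confidence(0.0001, 0.01, 100_000, 2_000)
```

The interval shrinks with the square root of the number of comparisons, which is why the large probe sets of Table 1 give the tight bounds reported in Table 5.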

Table 5 summarizes the results of this statistical evaluation using the false accept and false reject rates computed at 0.01% FAR. The statistical test shows that, on a database similar to the Notre Dame database with any number of classes, the HTER of the proposed integrated fusion algorithm will lie within 0.05 ± 0.06 with 95% confidence. Similar results are obtained with the Equinox face database.


8 Conclusion

Visible and long wave infrared images provide complementary properties which can be combined to improve the performance of face recognition. In this paper, we proposed image fusion and match score fusion algorithms to fuse information obtained from multispectral face images. We first apply a mutual information based registration algorithm to register the multispectral face images and then fuse the images using the proposed 2ν-Granular Support Vector Machine. The fused image contains the properties of both the visible and long wave infrared images and can efficiently be used for face recognition. We next proposed the DSm match score fusion algorithm to fuse match scores generated from multiple classifiers. These match scores can be generated by applying multiple classifiers on one image or by applying the classifiers on multispectral face images. The proposed match score fusion algorithm is based on the theory of evidence and performs efficiently even when the visible and long wave infrared images provide conflicting decisions. The proposed image fusion and match score fusion algorithms are then integrated to further improve the face recognition performance. We validated the performance of the proposed image fusion algorithm, match score fusion algorithm, and integrated image fusion and match score fusion algorithm using the Notre Dame and Equinox face databases. Experimental results show that the proposed image fusion and match score fusion algorithms outperform existing fusion algorithms. Results and statistical evaluation further show that the proposed integrated image and match score fusion algorithm yields the best performance among all the proposed and existing fusion algorithms.

9 Acknowledgment

The authors would like to thank the CVRL at the University of Notre Dame and Equinox Corporation for providing the face databases used in this research.

References

[1] W. Zhao, R. Chellappa, A. Rosenfeld, and P.J. Phillips, Face recognition: a literature survey, ACM Computing Surveys 35(4) 2003 399-458.

[2] X. Chen, P. Flynn, and K. Bowyer, IR and visible light face recognition, Computer Vision and Image Understanding 99(3) 2005 332-358.

[3] S. Kong, J. Heo, B. Abidi, J. Paik, and M. Abidi, Recent advances in visual and infrared face recognition - a review, Journal of Computer Vision and Image Understanding 97(1) 2005 103-135.

[4] S.G. Kong, J. Heo, F. Boughorbel, Y. Zheng, B.R. Abidi, A. Koschan, M. Yi, and M.A. Abidi, Multiscale fusion of visible and thermal IR images for illumination-invariant face recognition, International Journal of Computer Vision 71(2) 2007 215-233.

[5] F. Prokoski, History, current status, and future of infrared identification, Proceedings of IEEE Workshop on Computer Vision beyond the Visible Spectrum: Methods and Applications 2000 5-14.

[6] D. Socolinsky, A. Selinger, and J. Neuheisel, Face recognition with visible and thermal infrared imagery, Journal of Computer Vision and Image Understanding 91 2003 72-114.

[7] J. Wilder, J. Phillips, C. Jiang, and S. Wiener, Comparison of visible and infrared imagery for face recognition, Proceedings of International Conference on Automatic Face and Gesture Recognition 1996 182-187.

[8] A. Gyaourova, G. Bebis, and I. Pavlidis, Fusion of infrared and visible images for face recognition, Lecture Notes in Computer Science 3024 2004 456-468.

[9] J. Heo, S. Kong, B. Abidi, and M. Abidi, Fusion of visual and thermal signatures with eyeglass removal for robust face recognition, Proceedings of IEEE Workshop on Object Tracking and Classification Beyond the Visible Spectrum in conjunction with CVPR 2004 94-99.

[10] R. Singh, M. Vatsa, and A. Noore, Hierarchical fusion of multi-spectral face images for improved recognition performance, Information Fusion 2007 (In Press).

[11] S. Singh, A. Gyaourova, G. Bebis, and I. Pavlidis, Infrared and visible image fusion for face recognition, Proceedings of SPIE Defense and Security Symposium (Biometric Technology for Human Identification) 5404 2004 585-596.

[12] R. Singh, M. Vatsa, and A. Noore, Intelligent biometric information fusion using support vector machine, Soft Computing in Image Processing: Recent Advances, Springer, edited by M. Nachtegael, D. Van der Weken, E.E. Kerre, and W. Philips, Chapter 12, 2006 327-350.

[13] R. Singh, M. Vatsa, and A. Noore, Face recognition with disguise and single gallery images, Image and Vision Computing 2007 (In Press).

[14] T. Ahonen, A. Hadid, and M. Pietikäinen, Face description with local binary patterns: application to face recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence 28(12) 2006 2037-2041.

[15] J. Dezert, Foundations for a new theory of plausible and paradoxical reasoning, Information and Security Journal 9 2002 13-57.

[16] F. Smarandache and J. Dezert, Advances and applications of DSmT for information fusion, American Research Press 2004.

[17] X. Chen, P.J. Flynn, and K.W. Bowyer, Visible-light and infrared face recognition, Proceedings of ACM Workshop on Multimodal User Authentication 2003 48-55.

[18] P.J. Flynn, K.W. Bowyer, and P.J. Phillips, Assessment of time dependency in face recognition: an initial study, Proceedings of International Conference on Audio- and Video-Based Biometric Person Authentication 2003 44-51.

[19] Equinox face database: http://www.equinoxsensors.com/products/HID.html.

[20] V.N. Vapnik, The nature of statistical learning theory, Springer 1995.

[21] P.-H. Chen, C.-J. Lin, and B. Schölkopf, A tutorial on ν-support vector machines, Applied Stochastic Models in Business and Industry 21 2005 111-136.

[22] H.G. Chew, C.C. Lim, and R.E. Bogner, An implementation of training dual-ν support vector machines, Optimization and Control with Applications, Kluwer, edited by Qi, Teo, and Yang 2004.

[23] Y.C. Tang, B. Jin, and Y.-Q. Zhang, Granular support vector machines with association rules mining for protein homology prediction, Special Issue on Computational Intelligence Techniques in Bioinformatics, Artificial Intelligence in Medicine 35(1-2) 2005 121-134.

[24] Y.C. Tang and Y.-Q. Zhang, Granular support vector machines with data cleaning for fast and accurate biomedical binary classification, Proceedings of International Conference on Granular Computing 2005 262-265.

[25] Y.C. Tang, B. Jin, Y.-Q. Zhang, H. Fang, and B. Wang, Granular support vector machines using linear decision hyperplanes for fast medical binary classification, Proceedings of International Conference on Fuzzy Systems 2005 138-142.

[26] Y.C. Tang and Y.-Q. Zhang, Granular SVM with repetitive undersampling for highly imbalanced protein homology prediction, Proceedings of International Conference on Granular Computing 2006 457-461.

[27] A. Bargiela and W. Pedrycz, Granular computing: an introduction, Kluwer International Series in Engineering and Computer Science 2002.

[28] A. Bargiela and W. Pedrycz, The roots of granular computing, Proceedings of IEEE International Conference on Granular Computing 2006 806-809.

[29] Y.H. Chen and Y.Y. Yao, Multiview intelligent data analysis based on granular computing, Proceedings of IEEE International Conference on Granular Computing 2006 281-286.

[30] Y.Y. Yao, Perspectives of granular computing, Proceedings of IEEE International Conference on Granular Computing 1 2005 85-90.

[31] G. Bebis, A. Gyaourova, S. Singh, and I. Pavlidis, Face recognition by fusing thermal infrared and visible imagery, Image and Vision Computing 24(7) 2006 727-742.

[32] F. Maes, A. Collignon, D. Vandermeulen, G. Marchal, and P. Suetens, Multimodality image registration by maximization of mutual information, IEEE Transactions on Medical Imaging 16(2) 1997 187-198.

[33] J.P. Pluim, J.B.A. Maintz, and M.A. Viergever, Mutual information-based registration of medical images: a survey, IEEE Transactions on Medical Imaging 22(8) 2003 986-1004.

[34] D. Hill, C. Studholme, and D. Hawkes, Voxel similarity measures for automated image registration, Proceedings of the Third SPIE Conference on Visualization in Biomedical Computing 1994 205-216.

[35] M. Antonini, M. Barlaud, P. Mathieu, and I. Daubechies, Image coding using the wavelet transform, IEEE Transactions on Image Processing 1(2) 1992 205-220.

[36] A. Ross and A.K. Jain, Information fusion in biometrics, Pattern Recognition Letters 24(13) 2003 2115-2125.

[37] J.F. Aguilar, J.O. Garcia, J.G. Rodriguez, and J. Bigun, Kernel-based multimodal biometric verification using quality signals, Proceedings of SPIE Biometric Technology for Human Identification 5404 2004 544-554.

[38] G. Shafer, A mathematical theory of evidence, Princeton University Press 1976.

[39] R. Singh, M. Vatsa, A. Noore, and S.K. Singh, Dempster-Shafer theory based classifier fusion for improved fingerprint verification performance, Proceedings of Indian Conference on Computer Vision, Graphics and Image Processing, LNCS 4338 2006 941-949.

[40] L. Zadeh, On the validity of Dempster's rule of combination, Memo M 79/24, University of California, Berkeley, 1979.

[41] L. Zadeh, Review of mathematical theory of evidence by Glenn Shafer, AI Magazine 5(3) 1984 81-83.

[42] L. Zadeh, A simple view of the Dempster-Shafer theory of evidence and its implication for the rule of combination, AI Magazine 7(2) 1986 85-90.

[43] D. Dubois and H. Prade, On the unicity of Dempster's rule of combination, International Journal of Intelligent Systems 1 1986 133-142.

[44] F. Voorbraak, On the justification of Dempster's rule of combination, Artificial Intelligence 48 1991 171-197.

[45] S.K. Singh, D.S. Chauhan, M. Vatsa, and R. Singh, A robust skin color based face detection algorithm, Tamkang Journal of Science and Engineering 6(4) 2003 227-234.

[46] R.M. Bolle, N.K. Ratha, and S. Pankanti, Performance evaluation in 1:1 biometric engines, Proceedings of Sinobiometrics 2004 27-46.

[47] S. Bengio and J. Mariethoz, A statistical significance test for person authentication, Proceedings of Odyssey: The Speaker and Language Recognition Workshop 2004 237-244.
