CHAPTER 3
PROPOSED PALMPRINT RECOGNITION SYSTEM
This chapter describes the algorithm used for personal identification based on features extracted from the palmprint. First, Local Gabor XOR Pattern (LGXP) features are extracted from the palmprint using a Gabor filter with a single orientation. The algorithm is then modified so that features are extracted with different orientations of the Gabor filter, called the Multiple Orientation LGXP (MOLGXP) features. Next, PCA features are extracted, and the minimum matching scores from the individual MOLGXP and PCA matchers are fused using the sum rule. The performance of the proposed algorithms is tested on the PolyU database provided by the Hong Kong Polytechnic University. The block diagram of the proposed palmprint recognition system is shown in Figure 3.1, and the different blocks are explained in the following subsections.
Figure 3.1 Block diagram of the proposed Palmprint Recognition system
3.1 PREPROCESSING
The first step in a palmprint based identification system is preprocessing. During image capture, the position, direction and degree of stretching of the palm may vary, so palmprints from the same palm may be subject to slight rotation and translation. The aim of this step is therefore to align the different palmprints, extract the central palm area for feature extraction and eliminate the variations caused by rotation and translation (Zhang & Wang 2003). In the PolyU database, the positioning of the hand on the scanner bed is guided by pegs, and hence the acquired palmprint is invariant to translation and rotation (Zhang 2004). Thus it is sufficient to define a coordinate system for the extraction of the central palm area. Before extracting the desired palm area, each palmprint image in the database is filtered using a median filter. The median filter is useful for reducing speckle noise and salt-and-pepper noise. The median value is actually one of the pixel values in the neighbourhood, so no new pixel values are created. This property makes the median filter particularly useful for preserving edges and hence serves to enhance the palmprint images. A palmprint image from the PolyU database and the corresponding filtered image are shown in Figure 3.2.
Figure 3.2 a) Original images from PolyU database b) Filtered images
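As an illustrative sketch only, and not the exact preprocessing code used in this work, the median filtering step can be expressed as follows; the file name and the 3×3 window size are assumptions.

import numpy as np
from PIL import Image
from scipy.ndimage import median_filter

# Load one PolyU palmprint image (file name is only an example) and apply a
# 3x3 median filter to suppress speckle and salt-and-pepper noise.
palm = np.asarray(Image.open("PolyU_001_F_01.bmp").convert("L"), dtype=np.uint8)
filtered = median_filter(palm, size=3)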
After enhancing the palmprint image, the desired area, called the Region of Interest (ROI), is obtained. The ROI extraction must be done carefully to avoid introducing intra-class variations. The valleys between the fingers, being stable, are used to establish a coordinate system from which the ROI is extracted. The reference points are determined, and the line passing through them forms the Y-axis. The horizontal line perpendicular to the Y-axis, represented by the black line, is the X-axis. The point of intersection of the X-axis and Y-axis is the midpoint of the reference points A and B. After the reference points are located on the rotated palm images, the next step is to extract the central palm area, which is the square region shown in Figure 3.3.
Figure 3.3 Palmprint ROI extraction
Let the reference point A be denoted by $(x_a, y_a)$ and point B by $(x_b, y_b)$, and let $(x_0, y_0)$ be the midpoint of the reference points A and B. It is given by

$x_0 = \frac{x_a + x_b}{2}$ (3.1)

$y_0 = \frac{y_a + y_b}{2}$ (3.2)

The extracted square palm area has a side length of $l$ pixels along the horizontal and vertical directions. The perpendicular distance between the Y-axis and the nearer vertical side of the square region is $d$ pixels. Let $S_1(x_1, y_1)$ denote the coordinates of the upper left corner of the square region and, similarly, $S_2(x_2, y_2)$ the coordinates of the lower left corner of the square region,

$x_1 = x_0 + d$ (3.3)

$y_1 = y_0 + l/2$ (3.4)

$x_2 = x_0 + d$ (3.5)

$y_2 = y_0 - l/2$ (3.6)

Next, the upper right corner $(x_3, y_3)$ and the lower right corner $(x_4, y_4)$ coordinates may be determined as follows:

$x_3 = x_1 + l$ (3.7)

$y_3 = y_1$ (3.8)

$x_4 = x_2 + l$ (3.9)

$y_4 = y_2$ (3.10)
Thus the coordinates of the ROI have been determined and the desired central palm area is extracted. The original palm image and the ROI are shown in Figure 3.4. The size of the palmprint images available in the PolyU database is 384×284 pixels and the extracted ROI is of size 120×120 pixels.
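The corner computation of Equations (3.1)-(3.10) can be sketched as below; the offset d, the side length l and the coordinate convention are illustrative assumptions rather than the exact values used in this work.

def roi_corners(A, B, d=30, l=120):
    """Corner coordinates of the square ROI from the valley reference
    points A and B, following Eqs. (3.1)-(3.10); d and l are example values."""
    (xa, ya), (xb, yb) = A, B
    x0, y0 = (xa + xb) / 2.0, (ya + yb) / 2.0   # midpoint, Eqs. (3.1)-(3.2)
    x1, y1 = x0 + d, y0 + l / 2.0               # upper left corner,  Eqs. (3.3)-(3.4)
    x2, y2 = x0 + d, y0 - l / 2.0               # lower left corner,  Eqs. (3.5)-(3.6)
    x3, y3 = x1 + l, y1                         # upper right corner, Eqs. (3.7)-(3.8)
    x4, y4 = x2 + l, y2                         # lower right corner, Eqs. (3.9)-(3.10)
    return (x1, y1), (x2, y2), (x3, y3), (x4, y4)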
Figure 3.4 Central palm area extraction on PolyU database images a) Original images b) Extracted ROI
3.2 FEATURE EXTRACTION
Feature extraction plays an important role in image identification and verification. A palmprint, which may be defined as the skin pattern of a palm, consists of physical characteristics such as lines, points and texture. Features like principal lines, wrinkles and texture can be extracted from low resolution images, but delta points and minutiae features can be extracted only from high resolution images. Palmprint capturing devices with high resolution are costly, and this being a disadvantage, low resolution images are used in the proposed system. Many algorithms have been developed by researchers to extract the principal lines, but such systems do not provide high accuracy because different individuals may possess similar line features (Zhang et al 2003). Also, extracting the wrinkles exactly is a difficult task, and hence 2D Gabor filtering is used to extract the texture features from low resolution palmprint images.
3.3 TEXTURE FEATURE
Texture is one of the important attributes used by the human visual system in the identification of objects. Texture has been used in many image processing and computer vision applications like segmentation, classification and shape from texture. Texture patterns are easily identified by humans but are very difficult to define exactly. No specific definition is found in the literature; instead, different definitions are provided based on applications or visual perception. Clark et al (1987) defined texture as "a spatial arrangement of local (gray-level) intensity attributes which are correlated in some way within areas of the visual scene corresponding to surface regions". According to Tamura et al (1978), texture is a repetitive pattern in which elements or primitives are arranged according to a placement rule. This visual repetitiveness makes it easy for the human visual system to identify texture patterns.
Various approaches have been used by researchers for texture analysis, but psychophysiological studies show that the brain performs a multichannel frequency and orientation analysis of the visual image formed on the retina. Campbell & Robson (1968), based on experimental studies, showed that the human visual system decomposes an image into filtered images of different frequencies and orientations. Researchers have thus been motivated to use multichannel filtering for texture analysis.
3.3.1 Gabor Filter
Gabor filters have been widely used by researchers in texture analysis (Vyas & Rege 2006, Clausi & Jernigan 2000), face recognition (Sharif et al 2011, Jin & Ruan 2009), iris recognition (Avila & Reill 2005, Tsai et al 2009) and fingerprint recognition (Lee & Wang 1999, Yang et al 2003) systems. The one dimensional Gabor filter was first introduced by Dennis Gabor (1946) and was later extended to two dimensional signals by Daugman (1980). Gabor filters provide several advantages: 1) they capture local information governed by the uncertainty principle; 2) they are robust against variations in image brightness and contrast; 3) they can be used to model the receptive fields of the mammalian simple cells in the primary visual cortex. According to Daugman (1993), the circular 2D Gabor filter can be effectively used to extract texture information from images and is represented as
$G(x, y, \theta, u, \sigma) = \frac{1}{2\pi\sigma^{2}} \exp\left\{-\frac{x^{2}+y^{2}}{2\sigma^{2}}\right\} \exp\{2\pi i (ux\cos\theta + uy\sin\theta)\}$ (3.11)

where $i = \sqrt{-1}$, $u$ is the frequency of the sinusoidal wave, $\theta$ is the orientation of the function and $\sigma$ is the standard deviation of the Gaussian envelope. Such Gabor filters have been widely used in various applications like fingerprint recognition, face recognition and texture analysis.
The Gabor filter $G(x, y, \theta, u, \sigma)$ is a complex valued function. Decomposing $G(x, y, \theta, u, \sigma)$ into real and imaginary parts gives

$G(x, y, \theta, u, \sigma) = G_{R}(x, y, \theta, u, \sigma) + iG_{I}(x, y, \theta, u, \sigma)$ (3.12)

where $G_{R}(x, y, \theta, u, \sigma)$ and $G_{I}(x, y, \theta, u, \sigma)$ represent the real and imaginary parts of the Gabor filter. In order to provide more robustness to brightness variations, a zero mean Gabor filter is necessary. The mean value of the imaginary part of the Gabor filter is automatically zero because of the odd symmetry of the sine function, but the mean of the real part is not zero because of the even symmetry of the cosine function. A zero mean Gabor filter is obtained using the formula given below

$\tilde{G}_{R}(x, y, \theta, u, \sigma) = G_{R}(x, y, \theta, u, \sigma) - \frac{\sum_{i=-n}^{n}\sum_{j=-n}^{n} G_{R}(i, j, \theta, u, \sigma)}{(2n+1)^{2}}$ (3.13)

where $(2n+1)^{2}$ represents the size of the filter.
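A minimal sketch of Equations (3.11) and (3.13) is given below; the default parameter values and the filter size are placeholders for illustration, not the settings quoted later in Section 3.6.

import numpy as np

def circular_gabor(n=17, u=0.1, theta=np.pi / 6, sigma=5.5):
    """Circular 2D Gabor filter of Eq. (3.11) on a (2n+1)x(2n+1) grid,
    with the real part made zero-mean as in Eq. (3.13)."""
    y, x = np.mgrid[-n:n + 1, -n:n + 1]
    gauss = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
    carrier = 2 * np.pi * u * (x * np.cos(theta) + y * np.sin(theta))
    g_real = gauss * np.cos(carrier)
    g_imag = gauss * np.sin(carrier)           # already zero-mean (odd symmetry)
    g_real -= g_real.sum() / (2 * n + 1) ** 2  # subtract the mean of the real part
    return g_real, g_imag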
The filtered image provides two types of information, which can be used separately or combined to form the feature. They are the magnitude $M(x, y)$ and the phase $\phi(x, y)$, given by the equations below (Kong et al 2006).

$M(x, y) = \sqrt{(G * I)(x, y)\,\overline{(G * I)(x, y)}}$ (3.14)

$\phi(x, y) = \tan^{-1}\left(\frac{\mathrm{Im}[(G * I)(x, y)]}{\mathrm{Re}[(G * I)(x, y)]}\right)$ (3.15)

where "$*$" represents the convolution operation, the overline represents the complex conjugate and $I(x, y)$ is the extracted palmprint image.
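Assuming a 120×120 ROI array roi and the kernels g_real and g_imag from the sketch above (assumed names), the responses of Equations (3.14)-(3.15) could be computed as in the following sketch.

import numpy as np
from scipy.signal import convolve2d

def gabor_magnitude_phase(roi, g_real, g_imag):
    """Magnitude and phase of the complex Gabor response, Eqs. (3.14)-(3.15);
    the phase is mapped to [0, 2*pi) for the quantization of Section 3.3.2."""
    resp_r = convolve2d(roi, g_real, mode="same", boundary="symm")
    resp_i = convolve2d(roi, g_imag, mode="same", boundary="symm")
    magnitude = np.hypot(resp_r, resp_i)
    phase = np.mod(np.arctan2(resp_i, resp_r), 2 * np.pi)
    return magnitude, phase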
3.3.2 Local Gabor Exclusive OR Patterns (LGXP)
To determine the LGXP features, the phase value given by the above equation is computed for each pixel of the filtered image. Next, the image is divided into 3×3 sub-images and the phase values are quantized based on their range. For each sub-image, the quantized phase of the central pixel is compared with that of each of its neighbouring pixels and the XOR operation is applied. If the quantized phases of the central pixel and a neighbouring pixel are different, the neighbouring pixel is replaced by a binary 1, and if they are the same it is replaced by a binary 0. Finally, the resulting binary labels are concatenated together as the Local XOR Pattern (LXP) of the central pixel. The steps involved in determining the LGXP pattern are explained below.
As a first step, the phase values in a sub-image of size 3×3 are quantized and coded based on the following rule:

$q(x, y) = 0 \quad \text{if } 0 \le \phi(x, y) < \pi/2$ (3.16)

$q(x, y) = 1 \quad \text{if } \pi/2 \le \phi(x, y) < \pi$ (3.17)

$q(x, y) = 2 \quad \text{if } \pi \le \phi(x, y) < 3\pi/2$ (3.18)

$q(x, y) = 3 \quad \text{if } 3\pi/2 \le \phi(x, y) < 2\pi$ (3.19)
Next, the LGXP pattern in binary and decimal form is defined as follows

$LGXP(z_c) = [LGXP^{S}, LGXP^{S-1}, \ldots, LGXP^{1}]_{binary}$ (3.20)

$LGXP(z_c) = \left[\sum_{j=1}^{S} 2^{\,j-1} \cdot LGXP^{j}\right]_{decimal}$ (3.21)

where $z_c$ denotes the central pixel position in the Gabor phase map, $S$ is the size of the neighbourhood and $LGXP^{j}$ $(j = 1, 2, \ldots, S)$ denotes the pattern calculated between $z_c$ and its neighbour $z_j$, which is computed as follows

$LGXP^{j} = q(\phi(z_c)) \otimes q(\phi(z_j)), \quad j = 1, 2, \ldots, S$ (3.22)

where $q(\cdot)$ denotes the coded phase value of the pixel, which is equal to 0, 1, 2 or 3, and $\otimes$ denotes the LXP operator, which is based on the XOR operator as defined in the equation given below

$a \otimes b = \begin{cases} 0, & \text{if } a = b \\ 1, & \text{otherwise} \end{cases}$ (3.23)
The pattern map described above for a 3×3 sub-image is calculated for the whole filtered image as

$F = [LGXP_{1}, LGXP_{2}, \ldots, LGXP_{n}]$ (3.24)

where $i = 1, 2, \ldots, n$ indexes the sub-images of the filtered image and $n$ is their number. The encoding process is shown in Figure 3.5.
Figure 3.5 Encoding method of LGXP
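A compact sketch of the encoding is given below. It computes the eight-bit LGXP code at every 3×3 neighbourhood of the phase map; restricting it to non-overlapping 3×3 sub-images, as described above, corresponds to sampling this map on a three-pixel grid. The neighbour ordering is an assumption.

import numpy as np

def lgxp_codes(phase):
    """Quantize the Gabor phase into four levels (Eqs. 3.16-3.19), XOR the
    centre of each 3x3 neighbourhood with its eight neighbours
    (Eqs. 3.22-3.23) and pack the bits into a decimal code (Eqs. 3.20-3.21)."""
    q = np.floor(phase / (np.pi / 2)).astype(np.int32) % 4
    h, w = q.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.int32)
    centre = q[1:h - 1, 1:w - 1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]        # clockwise around the centre
    for bit, (dy, dx) in enumerate(offsets):
        neigh = q[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes += (neigh != centre).astype(np.int32) << bit   # LXP: 1 iff levels differ
    return codes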
3.4 PRINCIPAL COMPONENT ANALYSIS (PCA)
Principal Component Analysis (PCA), also known as the Karhunen-Loeve transform, is a classical statistical technique used in biometric systems like face recognition (Chan et al 2010, Turk & Pentland 1991), iris recognition (Cui et al 2004, Patil et al 2012) and character recognition (Mane & Ragha 2009, Zuo et al 2002). The main aim of PCA is to reduce the dimensionality of the data so that the extracted features can be represented in a reduced dimensional space. In PCA the data is projected onto an orthogonal subspace so that a number of correlated components are transformed into a smaller number of uncorrelated components. The first principal component captures the direction of maximum variance in the data and the remaining principal components capture the remaining variability.
3.4.1 PCA Based Feature Extraction
In this section the feature extraction from the preprocessed image is carried out using principal component analysis. Each preprocessed palmprint image of size N×N is represented as a 1×N² dimensional vector, where the rows of pixels are concatenated to form a one dimensional vector. Let the training samples be represented as $(\Gamma_1, \Gamma_2, \ldots, \Gamma_K)$, where K is the total number of training samples. The mean vector of the training samples and the deviations from the mean vector are computed using the following relations

$\Psi = \frac{1}{K}\sum_{i=1}^{K} \Gamma_i$ (3.25)

$\Phi_i = \Gamma_i - \Psi$ (3.26)

The covariance matrix is next computed using the relation (3.27)

$C = \frac{1}{K}\sum_{i=1}^{K} \Phi_i \Phi_i^{T} = AA^{T}$ (3.27)

where the matrix $A = \{\Phi_1, \Phi_2, \ldots, \Phi_K\}$. Next, the eigenvectors $v_i$ of C are computed and the m largest eigenvalues are selected

$Cv_i = \lambda_i v_i$ (3.28)

The eigenvectors corresponding to the m largest eigenvalues are normalized and given by the following relation

$u_i = \frac{v_i}{\lVert v_i \rVert}, \quad i = 1, 2, \ldots, m$ (3.29)

The set of eigenvectors composes the optimal linear transformation matrix $W = [u_1, u_2, \ldots, u_m]$. The preprocessed palmprint image $\Gamma$ is then transformed into the feature space by the following relation

$\Omega = W^{T}(\Gamma - \Psi)$ (3.30)
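A direct transcription of Equations (3.25)-(3.30) is sketched below; for N²-dimensional palmprint vectors the equivalent K×K eigen-decomposition trick would normally be used in practice, so this is illustrative only.

import numpy as np

def pca_train(samples, m):
    """samples is a K x N^2 matrix with one vectorised palmprint per row;
    returns the mean vector and the m leading eigenvectors (columns of W)."""
    mean = samples.mean(axis=0)                   # Eq. (3.25)
    phi = samples - mean                          # Eq. (3.26)
    cov = (phi.T @ phi) / samples.shape[0]        # Eq. (3.27)
    eigvals, eigvecs = np.linalg.eigh(cov)        # Eq. (3.28); eigh returns unit-norm vectors
    order = np.argsort(eigvals)[::-1][:m]         # keep the m largest eigenvalues
    W = eigvecs[:, order]                         # Eq. (3.29)
    return mean, W

def pca_project(image_vec, mean, W):
    return W.T @ (image_vec - mean)               # Eq. (3.30)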
3.5 RECOGNITION PROCESS
In this phase, a test palmprint image is taken and its feature vector is computed by applying the steps described above. The matching score is then calculated by comparing the feature vector of the test palm image with the feature vectors of the palm images available in the database using an appropriate distance metric. Here, the Euclidean distance measure is used to compute the matching score. The distance can be calculated using the following equation

$d = \frac{1}{m}\sqrt{\sum_{i=1}^{m}(x_i - y_i)^{2}}$ (3.31)

where $X = (x_1, x_2, \ldots, x_m)$ represents the feature vector of the test image and $Y = (y_1, y_2, \ldots, y_m)$ the feature vector of an image in the database. After the distance is computed, the query palmprint image can be recognized using the thresholding technique described in the following pseudocode.
Thresh = t            // t is the average distance value
If min(dist) < Thresh Then
    ismatch = True
Else
    ismatch = False
End If
If the ismatch flag is true, the image is recognized; otherwise it is not recognized. The Thresh value is based on the matching scores generated when the test samples are compared with the database samples.
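The matching and thresholding steps can be sketched as follows; templates is assumed to be the list of enrolled feature vectors (numpy arrays), and the normalisation by the vector length follows Equation (3.31).

import numpy as np

def identify(test_feat, templates, thresh):
    """Return (ismatch, index of closest template, distance) using the
    normalised Euclidean distance of Eq. (3.31) and the threshold rule above."""
    dists = [np.linalg.norm(test_feat - tpl) / test_feat.size for tpl in templates]
    best = int(np.argmin(dists))
    return dists[best] < thresh, best, dists[best]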
3.6 EXPERIMENTAL RESULTS
Three experiments were conducted and the identification results are compared. The test hand images are taken from the Hong Kong Polytechnic University (PolyU) palmprint database. The PolyU palmprint database contains 7752 grayscale images in BMP format corresponding to 386 different palms. Around twenty samples were collected from each of these palms in two sessions, with around 10 samples captured in the first and the second sessions, respectively. The palmprint images in the database are labeled as "PolyU_xxx_L_NN.bmp", where "xxx" is the unique palm identifier (ranging from 001 to 386), "L" is the session index ('F' indicates the first session while 'S' indicates the second session) and "NN" is the sample index for each palm (ranging from 1 to 10).
Experiment Results for LGXP Feature: In this phase the LGXP feature is computed with the parameters frequency u = 0.90, space constant σ = 5.5 and orientation θ = 30°. The selection of the parameters is based on the results provided by Kumar & Zhang (2005) and Zhang et al (2003). A total of twelve samples is taken for each person from the hand images captured during the first and second sessions in the PolyU database. Of these, four samples are used during the training phase and the remaining samples are used in the testing phase. The LGXP features are computed for the training samples and stored in the database. The total number of training samples is 600, with four samples trained for each of 150 persons. During the testing phase, the LGXP feature is computed for each test sample and compared with the LGXP feature templates stored in the database using the Euclidean distance measure. If this distance measure is less than the threshold, the test image is considered to be genuine; otherwise it is an impostor. The threshold value is varied and for each value the False Acceptance Rate (FAR), False Rejection Rate (FRR) and Genuine Acceptance Rate (GAR) are calculated using equations (2.1), (2.2) and (2.3). The outputs obtained for the LGXP feature are shown in Figure 3.6.
Figure 3.6 (a) Original PolyU palmprint image (b) Cropped image (c) Real part (d) Imaginary part (e) Magnitude part (f) Phase part at θ = 30°
Table 3.1 shows the experimental values obtained. The threshold value is varied in steps of 0.1 from 0 to 1. The matching score generated by the matcher is a genuine score if the test sample and the database template belong to the same person; otherwise it is an impostor score. The FAR and FRR values are computed at each threshold value based on the number of genuine and impostor scores generated. From Table 3.1, it is observed that as the threshold value is increased the number of genuine persons rejected decreases, whereas the number of impostors accepted by the system increases. Hence the false rejection rate decreases and the false acceptance rate increases as the threshold is increased.
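The way FAR, FRR and GAR respond to the threshold can be sketched as follows, assuming arrays of genuine (same-palm) and impostor (different-palm) matching distances; this follows the usual definitions and is not the thesis code for equations (2.1)-(2.3).

import numpy as np

def error_rates(genuine, impostor, thresholds):
    """FRR, FAR and GAR (in %) at each threshold; a comparison is accepted
    when its matching distance falls below the threshold."""
    rows = []
    for t in thresholds:
        far = 100.0 * np.mean(impostor < t)    # impostors wrongly accepted
        frr = 100.0 * np.mean(genuine >= t)    # genuine palms wrongly rejected
        rows.append((t, frr, far, 100.0 - frr))
    return rows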
Table 3.1 Error rates and Recognition rate of LGXP based Palmprint Recognition system
Threshold value    FRR%     FAR%      GAR%
0.00               100      0          0.00
0.20               100      0          0.00
0.30                50      0         50.00
0.40                20      0         80.00
0.47               7.12     0         92.88
0.48               5.86     0.0065    94.14
0.49               4.62     0.0092    95.38
0.50               3.16     0.0380    96.84
0.51               2.06     0.2000    97.94
0.52               1.04     1.2200    98.96
0.53               0.33     2.1300    99.67
0.54               0.06     3.2800   100.00
0.55               0        4.6900   100.00
0.60               0       20.000    100.00
0.70               0      100.00     100.00
It is also observed that for lower threshold values the rejection rate is very high and the system cannot be used as a suitable recognition system. For threshold values between 0.47 and 0.55, the False Rejection Rate and False Acceptance Rate are in a tolerable range. A plot of FRR and FAR against threshold values between 0 and 0.8 is shown in Figure 3.7(a), and Figure 3.7(b) shows the plot of FRR and FAR against threshold values between 0.47 and 0.55.
Figure 3.7 Plot of FRR%, FAR% against threshold of LGXP based Palmprint Recognition system (a) For threshold values between 0 and 0.8 (b) For threshold values between 0.47 and 0.8
Experiment Results for MOLGXP Feature: In this phase the LGXP features are obtained for six different orientations, namely θ = 30°, 60°, 90°, 120°, 150° and 180°. The features obtained for the various values of θ are concatenated to form the total feature vector. The real and imaginary parts obtained for the different orientations are shown in Figure 3.8, and the amplitude and phase parts in Figure 3.9.
Figure 3.8 (a) Real and (b) imaginary parts for different orientations
Figure 3.9 (a) Amplitude and (b) Phase for different orientations
The values of FAR, FRR and GAR are shown in Table 3.2 and a plot of FRR and FAR against threshold is shown in Figure 3.10.
(a)
(b)
93
Table 3.2 Error rates and Recognition rate of MOLGXP based Palmprint Recognition system
Threshold value   FRR%   FAR%      GAR%
0.45 4.27 0 95.73
0.46 3.50 0.00022 96.50
0.47 2.62 0.00125 97.38
0.48 1.65 0.0090 98.35
0.49 1.14 0.01400 98.86
0.50 0.67 0.0800 99.33
0.51 0.17 0.1900 99.83
0.52 0.04 0.4020 99.96
0.53 0 0.7190 100.00
Figure 3.10 Plot of FRR%, FAR% against threshold of MOLGXP based Palmprint Recognition system
Experiment Results for MOLGXP+PCA Feature: In this phase the Gabor filter with six orientations is used and the MOLGXP features are extracted. In addition to the MOLGXP features, the PCA features are also extracted, and the Euclidean distance is used to match the PCA features. For a given test image the two features are extracted and the minimum matching score is determined for each of them. The matching scores are then combined using the sum rule. Let $MS_{M}$ represent the matching score from the MOLGXP matcher and $MS_{P}$ that from the PCA matcher. The combined average score using the sum rule (Ross 2006) is given by

$MS = \frac{1}{2}(MS_{M} + MS_{P})$ (3.32)
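As a small illustration of Equation (3.32), the fused score for a test image could be computed as below; the argument names are assumptions for the per-template distance lists of the two matchers, and any score normalisation is assumed to have been applied beforehand.

def fuse_min_scores(molgxp_dists, pca_dists):
    """Sum-rule fusion of the minimum matching scores of the two matchers."""
    ms_m = min(molgxp_dists)          # minimum score from the MOLGXP matcher
    ms_p = min(pca_dists)             # minimum score from the PCA matcher
    return 0.5 * (ms_m + ms_p)        # average sum rule, Eq. (3.32)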
The values of FAR, FRR and GAR are shown in Table 3.3 and a plot of FRR and FAR against threshold is shown in Figure 3.11.
Table 3.3 Error rates and Recognition rate of MOLGXP+ PCA based Palmprint Recognition system
Threshold value   FRR%   FAR%      GAR%
0.45 2.83 0 97.17
0.46 1.75 0 98.25
0.47 1.02 0.00012 98.98
0.48 0.62 0.00800 99.38
0.49 0.42 0.04300 99.58
0.50 0.29 0.11400 99.71
0.51 0.12 0.13800 99.88
0.52 0.02 0.25000 99.98
0.53 0 0.35800 100.00
0.54 0 0.45600 100.00
Figure 3.11 Plot of FRR%, FAR% against threshold of MOLGXP+ PCA based Palmprint Recognition system
Figure 3.12 shows the Error Trade-off Curves, a plot of the False Rejection Rate against the False Acceptance Rate obtained at different threshold values for the LGXP, MOLGXP and MOLGXP+PCA feature extraction based Palmprint Recognition systems explained above. This graph is useful in comparing the performance of different biometric systems. From the graph it is observed that the error rates are higher for the LGXP feature than for the MOLGXP feature. The error rates are further reduced for the fusion method, which combines the scores from the MOLGXP and PCA matchers using the sum rule for identification.
Figure 3.12 Error Trade off Curves for Palmprint Recognition systems
3.7 RESEARCH CONTRIBUTIONS
In this chapter the palmprint recognition system is implemented
and the research contributions are as follows
In palmprint recognition, the preprocessing stage involves
segmenting a specific portion from the central palm area that
is invariant to rotation. The stable keypoints are used for
extracting the ROI.
Most of the existing works that make use of Gabor filters for feature extraction varied factors like orientation, scale and frequency. In this work, the Gabor filter is designed for different orientations only, keeping the other factors constant. This reduces the number of computations while still effectively extracting the texture features.
The existing works make use of the magnitude response of the Gabor filter for feature encoding, which is dependent on the contrast values of the image pixels. This deteriorates the performance of the biometric system. In the proposed work only the phase values, which are independent of contrast, are used for encoding the feature.
The encoding technique is performed in two steps, quantization and then coding. The quantization step serves to make the encoding process more robust to phase changes introduced by rotation of the palm during image acquisition. Also, the phase values in a 3×3 sub-image are coded as an eight bit code, whereas in the existing works each phase value is encoded as a two bit binary code, which increases the storage requirements.
The global features are extracted using Principal Component Analysis (PCA). This serves to provide more discriminant information than Fourier descriptors, which make use of fixed basis functions.
Extraction of both local and global features and fusion using the sum rule improves the performance. Also, the matching is performed based on a distance measure, which does not require the expensive training needed when making use of classifiers.
Advantages and Disadvantages: The advantages and disadvantages of the proposed palmprint recognition system are given below.
The use of the Gabor filter has the advantage of being more robust to varying brightness and contrast of the images.
The recognition rate is improved by using both local and
global information for feature representation.
The encoding method is more robust to rotational changes and uses a smaller number of bits for storing the feature vector.
The performance of the proposed Palmprint Recognition
system is better in terms of recognition rate and error rates.
The disadvantage is the added quantization step used in the encoding process, but this too serves to capture stable features.
3.8 COMPARATIVE ANALYSIS
In this section the performance of the proposed work is compared with recent existing works that have made use of the Gabor filter and PCA, in terms of recognition rate, as most of the existing works have used this measure to evaluate the performance of their biometric systems.
From Table 3.4 it is found that Lu et al (2009) made use of Gabor wavelets with eight orientations and five scales, giving a total of 40 (8×5) filtered images. Both the global and the local covariance matrices of the Gabor magnitude and Gabor phase are stored as the feature vector, and the sum rule is used for fusion. A recognition rate of 98% is achieved.
Ribaric & Marcetic (2012) used a Gabor filter to capture three sets of feature vectors by varying parameters such as the orientation, the frequency of the sinusoid and the standard deviation of the Gaussian envelope. Four different orientations were used, and the output of the Gabor filter consists of 12 (4×3) images for the three spectral components. Both the real and imaginary parts of the outputs are encoded to represent the feature vector, and the weighted sum rule is used for fusion.
Similarly, Xu et al (2012) made use of multispectral palmprint images at four different wavelengths: red, blue, green and NIR. DWT coefficients and PCA are used for feature representation and the weighted sum rule for fusion.
From the above discussion it is observed that the existing methods require more computations and more memory, and extract more features, than the proposed method. Also, the recognition rates of these techniques are lower than that of the proposed method. Thus the proposed palmprint recognition method is efficient.
Table 3.4 Comparison of proposed Palmprint Recognition system with Recent works
Author                      Feature Extracted                                     Fusion Rule         Classifier                   Recognition Rate %
Lu et al (2009)             Gabor magnitude and Gabor phase                       Sum rule            Eigenvalue based distance    98.00
Ribaric & Marcetic (2012)   Gabor features of three spectral components (R,G,B)   Weighted sum rule   Hamming distance             98.71
Xu et al (2012)             QPCA and QDWT                                         Weighted sum rule   Euclidean distance           98.83
Proposed method             MOLGXP + PCA                                          Sum rule            Euclidean distance           98.98
From Table 3.4 it is observed that the proposed method provides a higher recognition rate in comparison with the existing methods discussed, which use more features and involve more computations. The improved performance is attributed to i) the improved preprocessing technique that uses stable keypoints for ROI extraction and ii) the additional quantization step used in the encoding process, which makes the generated feature vector code robust to the changes in phase values caused by slight variations in palm movement.
3.9 SUMMARY
In this work, a Palmprint Recognition System based on LGXP and Principal Component features is proposed. Different experiments have been conducted. In the first case the LGXP feature for a single orientation alone is considered and a recognition rate of 94.14% is achieved; this improves to 96.5% when the MOLGXP feature is considered for six different orientations with the features concatenated. In the third case principal component features are extracted and the minimum matching scores from the MOLGXP matcher and the PCA matcher are fused using the sum rule, which further improves the recognition rate to 98.98%. Finally, a comparative analysis is presented in which the proposed method is found to provide improved performance in comparison with the existing techniques.