CHAPTER 1
OVERVIEW
1.1 INTRODUCTION:
Information is an essential element in decision making and the world is generating
increasing amounts of information in various forms with different degrees of complexity.
Hence, the need for improved information systems has become more conspicuous. One of
the major problems in the design of modern information systems is automatic pattern
recognition. A pattern is the description of an object. Pattern recognition can be defined
as the categorization of input data into identifiable classes via the extraction of significant
features or attributes of the data from a background of irrelevant detail. Recognition is
regarded as a basic attribute of human beings, as well as of other living organisms. A
human being is a very sophisticated information system, partly because humans possess a
superior pattern recognition capability. Among the many pattern recognition systems,
biometrics is one whose name literally means "life measure". Biometrics refers to a
broad range of technologies, systems, and applications that automate the identification or
verification of an individual based on his or her physiological or behavioral
characteristics. Physiological biometrics is based on direct measurements of a part of the
human body at a point in time. The most common physiological biometrics involve:
Fingerprint biometric system
Face biometric system
Iris biometric system
Retina biometric system
Palm biometric system
Wrist vein biometric system
Hand geometry biometric system
Behavioral biometrics is based on measurements and data derived from the
method by which a person carries out an action over an extended period of time. The
most common behavioral biometrics involve:
Speech biometric system
Signature biometric system
Gait biometric system
Identifying an individual from his or her face is one of the most nonintrusive
modalities in biometrics. In fact, many automatic person identification systems are based
exclusively on fingerprints or on face recognition. Fingerprint verification is
trustworthy, but it is intrusive and can meet resistance from users, depending on the
application. Face recognition is a compelling biometric because it is integral to everyday
human interaction, and it has the following features:
Face recognition is the primary means for people recognizing one another so it is
natural.
Face recognition is nonintrusive.
Since the advent of photography, faces have been the primary method of
identification in passports and ID card systems.
Because optical imaging devices can easily capture faces, there are large legacy
databases, including police mug-shot databases and television footage, that can
provide template sources for facial recognition technology. Cameras can acquire
the biometric passively.
As such, facial recognition is more acceptable than other biometrics and is easy to
use. A face biometric system involves face detection and face recognition. Face detection
is the first step in automated face recognition, and it can be performed based on several
cues: skin color, motion, facial/head shape, facial appearance, or a combination of
these parameters. However, this work considers only face recognition.
To make the process more authentic, palmprint identification is added in this project.
Palmprint identification is the measurement of palmprint features for recognizing the
identity of a user. The palmprint is universal, easy to capture, and does not change much
across time. A palmprint biometric system does not require specialized acquisition devices.
It is user-friendly and more acceptable to the public. Besides that, the palmprint contains
different types of features, such as geometry features, line features, point features,
statistical features and texture features. The palmprint geometry features are insufficient
to identify individuals, because features such as palm size and palm width are relatively
similar among adults. The palmprint line features include principal lines, wrinkles and
ridges. Ridges are the fine lines of the palmprint; a high-resolution or inked palmprint
image is required to obtain their features. Wrinkles are the coarse lines of the palmprint,
while the principal lines are the major lines present on most palms (head line, life line
and heart line). Separating wrinkles from principal lines is difficult, since some wrinkles
can be as thick as principal lines. In this work, peg-free right-hand images of different
individuals were acquired. No special lighting is used in this setup. The hand image is
segmented and its key points are located.
The hand image is aligned and cropped according to the key points. The palmprint image
is enhanced and resized. The sequential modified Haar transform is applied to the resized
palmprint image to obtain the Modified Haar Energy (MHE) feature. The sequential
modified Haar wavelet maps integer-valued signals onto integer-valued signals
without abandoning the property of perfect reconstruction. The MHE feature is compared
with the feature vectors stored in the database using the Euclidean distance. The accuracies
of the MHE feature and the Haar energy feature under different decomposition levels and
combinations are compared; an accuracy of 94.3678 percent is achieved using the proposed
MHE feature.
1.2 AIM OF THE PROJECT:
The main goal of the project is to design a Matlab code for face recognition using
principal component analysis. It also includes palm identification using sequential
modified Haar wavelet energy to provide better authentication. The entire simulation is
done with the help of Matlab software.
1.3 METHODOLOGY:
BLOCK DIAGRAM:
Figure 1.1 Block Diagram.
In this project, several images of the face and right-hand palm of different
persons are taken and loaded into the database. Then the face and right-hand palm of an
unknown person are captured and compared with the images in the database using the
respective algorithms. For face recognition, the principal component analysis algorithm is
used, and for the palm, the modified Haar transformation model. If the query face and palm
images match images in the database, the person is declared authentic; otherwise not.
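The decision step described above can be sketched as follows. The project itself is implemented in Matlab; this Python sketch is purely illustrative, and the function names, feature vectors and thresholds are placeholders of this sketch, not the project's code. It accepts a person only when both the face query and the palm query match enrolled templates:

```python
import numpy as np

def euclidean(a, b):
    # Euclidean distance between two feature vectors.
    return float(np.linalg.norm(np.asarray(a, float) - np.asarray(b, float)))

def is_authentic(face_query, palm_query, face_db, palm_db,
                 face_thresh=10.0, palm_thresh=10.0):
    # Accept only when BOTH the face query and the palm query match some
    # enrolled template within their thresholds (AND-rule fusion).
    face_ok = any(euclidean(face_query, f) < face_thresh for f in face_db)
    palm_ok = any(euclidean(palm_query, p) < palm_thresh for p in palm_db)
    return face_ok and palm_ok
```

The AND rule makes the combined system stricter than either modality alone, which is the point of adding the palmprint stage.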
1.4 SIGNIFICANCE OF THE WORK:
Face or palm recognition plays an important role in today’s world. It has many real-
world applications like human/computer interface, surveillance, authentication and video
indexing. Humans often use faces to recognize individuals and advancements in
computing capability over the past few decades now enable similar recognitions
automatically. It is desirable to have a system that has the ability of learning to recognize
unknown faces. Face recognition has become an important issue in many applications
such as security systems, credit card verification and criminal identification. For example,
the ability to model a particular face and distinguish it from a large number of stored face
models would make it possible to vastly improve criminal identification. Face and palm
recognition depends heavily on the particular choice of features used by the classifier.
One usually starts with a given set of features and then attempts to derive an optimal
subset (under some criteria) of features leading to high classification performance with
the expectation that similar performance can also be displayed on future trials using novel
(unseen) test data.
1.5 ORGANIZATION OF THE REPORT:
The chapters are arranged in the following manner:
Chapter 1 deals with the introduction, aim of the project, methodology,
significance of the work and organization of the report.
Chapter 2 deals with biometric system vulnerabilities.
Chapter 3 deals with Face recognition using principal component analysis.
Chapter 4 deals with Palmprint Identification Using Sequential Modified Haar
Wavelet Energy.
Chapter 5 deals with Matlab software.
Chapter 6 deals with Results and Conclusion.
CHAPTER 2
BIOMETRIC SYSTEM VULNERABILITIES
2.1 INTRODUCTION:
All security measures, including mechanisms for authenticating identity, have
ways of being circumvented. The processes for working around these measures vary in
difficulty based on the effort and resources needed to carry out the deceptive
act. Authentication mechanisms based on secrets are particularly vulnerable to "guessing"
attacks. Token mechanisms that rely on the possession of an object, most notably a card
or badge, are most vulnerable to theft or falsified reproduction. Biometric
technologies closely tie the authenticator to the individual identity of the user through the
use of physiological or behavioral characteristics. While this property is an added
advantage over the previous two authentication mechanisms, it places a great emphasis
on validating the integrity of the biometric sample acquired and transferred in the
biometric system. Ratha, N., et al. provided a model identifying vulnerabilities in
biometric systems. An example of the threat model is shown below in Figure 2.1; it
builds on the general biometric model outlined in Mansfield and Wayman.
Figure 2.1 Biometric Threat Model.
This model contains 11 individual areas at which vulnerabilities in biometric
systems exist. In addition to the five main internal modules that are characterized in the
General Biometric Model (data collection, signal processing, matching, storage and
decision), an additional component is added to represent the transfer of the authentication
decision to the greater application that relies on the decision from the biometric
system. Such applications could be identity management systems (IDMS) or access
control systems for logical and/or physical access to resources. These systems can vary in
complexity and size, ranging from a local computer log-in all the way to the wide-scale
distributed architectures seen in the cases of the Transportation Worker Identification
Credential (TWIC) or Personal Identity Verification (PIV) of Federal Employees and
Contractors. The remaining points of vulnerability are communication channels between
these six modules. It is worth noting that not all 11 vulnerability points are unique to the
biometric system. Many of the same points such as storage and communication channels
are vulnerable in other authentication systems and similar methods can be used to limit
those particular vulnerabilities.
The most publicized vulnerability in biometric systems resides at the Data
Collection module, in the form of spoofing, or presenting artificial representations of
biometric samples (module #1 in Figure 2.1). If an artificial or fake biometric sample is
accepted by the biometric system at this initial stage, the entire biometric system is
corrupted and the system has been compromised. Attacks on the biometric system are not
new; circumventing security systems is a staple of popular culture, and biometric systems
are not immune to this. Several online resources describe such attacks on the data
collection module, and many movies and television programs highlight attacks on
such systems. One such attack at the data collection module was outlined in the work of
Matsumoto, T., et al. in 2002 using "gummy fingers". The biometric research
community, as well as industry, has focused on preventing such attacks by
using "liveness" detection techniques. Today, newer sensors are improving their
resilience against spoofing attacks at this module. Previously, an acetate spoofing
attack, in which an image of a fingerprint placed on acetate was accepted as a genuine
live finger, was easy to carry out; such attacks are proving increasingly difficult, and
hence more complicated vulnerability attacks are being waged on the sensor. As such,
techniques for liveness detection within the fingerprint modality focus on moisture
content, temperature, electrical conductivity, and challenge-response.
Figure 2.2 Various Biometrics.
A number of biometric identifiers are used in various applications. Each biometric
has its strengths and weaknesses and the choice typically depends on the application. No
single biometric is expected to effectively meet the requirements of all the applications.
The match between a biometric and an application is determined depending upon the
characteristics of the application and the properties of the biometric.
At its simplest level, a biometric system operates on a three-step process. First,
a sensor captures a physical or behavioral sample. Second, the biometric system
develops a way to describe the observation mathematically, obtaining a biometric
signature by separating the noise and other irrelevant features from the captured sample.
Third, the computer system inputs the biometric signature into a comparison algorithm
and compares it to one or more biometric signatures previously stored in its database.
Figure 2.3 Steps in Biometric Systems.
Figure 2.4 Major steps involved in any Biometric identification.
2.2 MAJOR STEPS OF ANY BIOMETRIC SYSTEM:
Data collection
Signal processing
Decision
Transmission
Store
CHAPTER 3
FACE RECOGNITION USING ONE DIMENSIONAL
PRINCIPAL COMPONENT ANALYSIS
3.1 INTRODUCTION:
Face recognition is one of the popular biometric systems. It is a very high-level task
for which developing a computational model is difficult, because faces are complex,
multidimensional and meaningful visual stimuli. Face recognition can be treated as a
two-dimensional recognition problem, taking advantage of the fact that faces are
normally upright and thus may be described by a small set of two-dimensional
characteristic views. Although face recognition is a high-level visual problem, there is
quite a bit of structure imposed on the task, and the recognition scheme exploiting this
structure is based on an information theory approach: the relevant information in a face
image is extracted, encoded as efficiently as possible, and compared with a database of
similarly encoded models. A simple approach to extracting the information contained in an image of
a face is to somehow capture the variation in a collection of face images and use this
information to encode and compare individual face images. A common problem in
statistical pattern recognition is that of feature selection or feature extraction. Feature
selection refers to a process whereby a data space is transformed into a feature space that,
in theory, has exactly the same dimension as the original data space. However, the
transformation is done in such a way that the data set may be represented by reduced
number of effective features yet retain most of the intrinsic information content of the
data. In other words, the data set undergoes a dimensionality reduction. Principal
component analysis (PCA), or the Karhunen-Loeve transformation, is a popularly used
technique for prediction, redundancy removal, feature extraction and dimensionality
reduction; when applied to face recognition, the main distinguishing features of the
individual faces are extracted. In mathematical terms, the principal
components of the distribution of faces, i.e. the eigenvectors of the covariance matrix of
the set of face images, are to be found. Face images are projected onto a feature space that
best encodes the variation among known face images. Principal Component Analysis
(PCA) is discussed in Section 3.2. The algorithm for face recognition using one dimensional
PCA is presented in Section 3.3.
3.2 PRINCIPAL COMPONENTS ANALYSIS:
Principal component analysis is an optimal feature extraction and universally used
dimensionality reduction technique based on extracting the desired number of principal
components of the multi-dimensional data. PCA finds an alternative set of parameters for
a set of raw data (or features) such that most of the variability in the data is compressed
down to the first few parameters. A face image is a two-dimensional N by N array of
intensity values or a vector of dimension N2. A typical image of size 256 by 256
describes a vector of dimension 65,536, or equivalently a point in 65,536-dimensional
space. An ensemble of images maps to a collection of points in this huge space. Images
of faces, being similar in overall configuration will not be randomly distributed in this
huge image space and thus can be described by a relatively low dimensional subspace.
The face images in the training set are normalized by subtracting the average face, m of
the set. In one dimensional PCA, two dimensional face image matrices must be
previously transformed into one dimensional image vectors. The main idea of
principal component analysis is to find the vectors which best account for the distribution
of face images within the entire image space. These vectors define the subspace of face
images that we call "face space". Each vector is of length N2, describes an N by N image,
and is a linear combination of the original face images. Because these vectors are the
eigenvectors of the covariance matrix corresponding to the original face images, and
because they are face-like in appearance, they are referred to as eigenfaces. Each face image
in the training set can be represented exactly as a linear combination of the
eigenfaces. The number of possible eigenfaces is equal to the number of face images in
the training set. However, the faces can also be approximated using only the best
eigenfaces, those that account for the most variance within the set of face images. The
primary reason for using fewer eigenfaces is computational efficiency. The best k
eigenvectors or eigenfaces span a k-dimensional subspace of all possible images.
The feature matrix of a particular image gives information about the
composition of the eigenfaces belonging to the eigenspace. The aim of PCA is to find the
projection matrix through which the features of the given data set are obtained by
projecting them onto this projection matrix.
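The projection idea can be made concrete. The report's own implementation is in Matlab; the following Python/NumPy sketch is for illustration only, and the function names and toy data are assumptions of this sketch. It builds the projection matrix from the top-k eigenvectors of the covariance matrix and projects a sample onto it:

```python
import numpy as np

def pca_projection(data, k):
    # data: M samples as rows.  Returns (mean, W), where the columns of W
    # are the k eigenvectors of the covariance matrix with the largest
    # eigenvalues.
    data = np.asarray(data, float)
    mean = data.mean(axis=0)
    A = data - mean                 # zero-mean data, as Eq. (3.1) assumes
    C = A.T @ A / len(data)         # covariance/correlation matrix
    vals, vecs = np.linalg.eigh(C)  # eigh: C is symmetric
    order = np.argsort(vals)[::-1]  # sort eigenvalues, largest first
    W = vecs[:, order[:k]]
    return mean, W

def project(x, mean, W):
    # Feature vector Y = W^T (x - mean), the projection of Eq. (3.2)
    # applied component-wise.
    return W.T @ (np.asarray(x, float) - mean)
```

Note that `np.linalg.eigh` returns eigenvalues in ascending order, hence the reordering before keeping the top-k columns.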
Let A denote an N-dimensional random vector with zero mean,
E[A] = 0 …………………………………………………………………………....... (3.1)
where E is the statistical expectation operator. If A has a nonzero mean, the mean is
subtracted from it first. Let X denote a unit vector, also of dimension N, onto which the
vector A is to be projected. The projection is defined by the inner product of the vectors A
and X, as shown by
Y = A^T X = X^T A …………………………………………………...……………. (3.2)
Subject to the constraint,
‖X‖ = (X^T X)^(1/2) = 1 …………………………………………………...………………. (3.3)
The projection Y is a random variable with the mean and variance related to the
statistics of the vector A. Under the assumption that the random vector A has zero mean,
it follows that the mean value of the projection Y is zero too.
E[Y] = X^T E[A] = 0 ……………………………………………………….………… (3.4)
The variance of Y is therefore the same as its mean-square value, and so
σ² = E[Y²]
= E[(X^T A)(A^T X)]
= X^T E[A A^T] X
= X^T C X ……………………………………………………………………………… (3.5)
The N-by-N matrix C is the correlation matrix of the random vector A, formally
defined as the expectation of the outer product of the vector A with itself, as shown by
C = E[A A^T] …………………………………………………………………….…… (3.6)
The correlation matrix C is symmetric, C^T = C, and it follows that if a and b are any
N-by-1 vectors, then
a^T C b = b^T C a …………………………………………………………………………. (3.7)
The variance σ² of the projection Y is a function of the unit vector X and is given as
ψ(X) = σ² = X^T C X …………………………………………………………………….… (3.8)
The maximum variation between the features is obtained only when the vectors of
the projection matrix are uncorrelated. The issue to be considered is that of
finding the unit vectors X along which the features obtained will have maximum
variation between them. The solution to this problem lies in the eigenstructure of the
correlation matrix C: the eigenvectors obtained for a correlation matrix are
orthogonal to each other, i.e. maximally uncorrelated. This is discussed as follows:
CX = λX …………………………………………………………………….……….. (3.9)
This is the eigenvalue equation that governs the unit vectors X along which the
projection has maximum variance. The problem has nontrivial solutions (i.e., X ≠ 0)
only for special values of λ, called the eigenvalues of the correlation matrix C. The
associated values of X are called eigenvectors. A correlation matrix is characterized by
real, nonnegative eigenvalues. The associated eigenvectors are unique, assuming that the
eigenvalues are distinct. Let the eigenvalues of the N-by-N matrix C be denoted by λ1,
λ2, …, λN and the associated eigenvectors be denoted by X1, X2, …, XN respectively. Hence
C Xj = λj Xj, where j = 1, 2, …, N …………………………………………………… (3.10)
Let the eigenvalues be arranged in decreasing order:
λ1 > λ2 > … > λj > … > λN …………………………………………………….…….. (3.11)
so that λ1 = λmax. Let the associated eigenvectors, in the same descending order of
eigenvalues, X1, X2, …, XN, be used to construct an N-by-N matrix
X = [X1, X2, …, XN] …………………………………………………………...……. (3.12)
The set of N equations is combined into the single equation
CX = XΛ ………………………………………………………………………........ (3.13)
where Λ is a diagonal matrix defined by the eigenvalues of the correlation matrix
and is given as
Λ = diag[λ1, λ2, …, λN] ………………………………………………………...…….(3.14)
The matrix X is an orthogonal (unitary) matrix in the sense that its column vectors
(i.e., the eigenvectors of C) satisfy the conditions of orthonormality:
Xi^T Xj = 1, j = i
= 0, j ≠ i
The inverse of the matrix X is the same as its transpose, X^T = X^(-1), and the orthogonal
similarity transformation is X^T C X = Λ, elaborated as follows:
Xj^T C Xk = λj, k = j
= 0, k ≠ j
The orthogonal similarity (unitary) transformation transforms the correlation
matrix C into a diagonal matrix of eigenvalues. The correlation matrix C may itself be
expressed in terms of its eigenvalues and eigenvectors, which is referred to as the spectral
theorem:
C = Σi λi Xi Xi^T
The outer product Xi Xi^T is of rank 1 for all i. Principal component
analysis and the eigendecomposition of the matrix C are basically one and the same, just
viewing the problem in different ways. The eigenvectors corresponding to nonzero
eigenvalues of the covariance matrix produce an orthonormal basis for the subspace
within which most image data can be represented with a small amount of error. The
eigenvectors are sorted from high to low according to their corresponding eigenvalues.
The eigenvector associated with the largest eigenvalue is the one that reflects the greatest
variance in the image, and the smallest eigenvalue is associated with the eigenvector that
finds the least variance. The k significant eigenvectors of the projection matrix are
chosen as those with the largest associated eigenvalues. The number of eigenfaces to be
used is chosen heuristically based on the eigenvalues. A face image in the training set is
projected by a simple multiplication with the projection matrix X, which is of size ------,
to obtain the feature vector Y of size k; this is applied to all the face images in the
training set to form Y = [Y1, Y2, …, YM],
where M is the number of face images in the training set. A new face image Q,
transformed into a vector, is projected onto the face space by the simple operation
Yq = X^T(Q − m). The new face image is recognized as a known face image from the training
set by considering the minimum Euclidean distance between the unknown face feature
vector and the trained face feature vectors.
3.3 ALGORITHM:
The steps for face recognition using one dimensional principal component analysis
are as follows:
Step 1: Obtain face images I1, I2, …, IM and represent every image as a one dimensional
vector. Every image of the same size N×N is resized to an N^2×1 vector in the database.
Step 2: Suppose Г is an N^2×1 vector corresponding to an N×N face image I. The idea is
to represent Г (via Φ = Г − mean face) in a low dimensional space.
Step 3: Compute the average face vector Ψ = (1/M) Σi Гi.
Step 4: Subtract the mean face: Φi = Гi − Ψ.
Step 5: Compute the covariance matrix C = (1/M) Σn Φn Φn^T = A A^T, where
A = [Φ1 Φ2 … ΦM] is of size N^2×M.
Step 6: Compute the eigenvectors ui of A A^T.
The matrix A A^T is of size N^2×N^2, so computing its eigenvectors directly is not practical.
Step 6.1: Compute the matrix A^T A (an M×M matrix).
Step 6.2: Compute the eigenvectors vi of A^T A:
A^T A vi = μi vi
The relationship between ui and vi:
A^T A vi = μi vi => A A^T (A vi) = μi (A vi) => C ui = μi ui, where ui = A vi.
Thus, A A^T and A^T A have the same nonzero eigenvalues, and their eigenvectors are
related as follows:
ui = A vi
Note 1: A A^T can have up to N^2 eigenvalues and eigenvectors.
Note 2: A^T A can have up to M eigenvalues and eigenvectors.
Note 3: The M eigenvalues of A^T A (along with their corresponding
eigenvectors) correspond to the M largest eigenvalues of A A^T (along
with their corresponding eigenvectors ui = A vi).
Step 6.3: Compute the M best eigenvectors of A A^T: ui = A vi.
(Important: normalize ui such that ‖ui‖ = 1.)
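Steps 6.1-6.3, computing the eigenvectors of the large matrix A A^T through the small M×M matrix A^T A, can be sketched as follows. This is an illustrative Python/NumPy version, not the project's Matlab code, and the function name is hypothetical:

```python
import numpy as np

def eigenfaces(Phi, k):
    # Phi: N^2 x M matrix whose columns are the mean-subtracted face
    # vectors.  Computes the top-k eigenvectors of A A^T via the small
    # M x M matrix A^T A, as in Steps 6.1-6.3.
    A = np.asarray(Phi, float)
    small = A.T @ A                       # M x M instead of N^2 x N^2
    vals, V = np.linalg.eigh(small)
    order = np.argsort(vals)[::-1][:k]    # k largest eigenvalues
    U = A @ V[:, order]                   # u_i = A v_i
    U /= np.linalg.norm(U, axis=0)        # normalize so that ||u_i|| = 1
    return U
```

With M training images and N^2-pixel faces, the eigendecomposition costs shrink from an N^2×N^2 problem to an M×M one, which is why the trick matters in practice.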
Step 7: Keep only K eigenvectors (those corresponding to the K largest eigenvalues). Each
face (minus the mean) Φi in the training set can be represented as a linear combination of
the K best eigenvectors:
Φi ≈ Σj wj uj, where wj = uj^T Φi, j = 1, …, K
(We call the uj's eigenfaces.)
Each normalized training face Φi is represented in this basis by a vector Ωi = [w1, w2, …, wK]^T.
Unknown Image:
Given an unknown face image Г (centered and of the same size as the training
faces), follow these steps:
Step 1: Normalize: Φ = Г − Ψ.
Step 2: Project onto the eigenspace: wj = uj^T Φ for j = 1, …, K.
Step 3: Represent Φ as the vector Ω = [w1, w2, …, wK]^T.
Step 4: Find er = minl ‖Ω − Ωl‖.
Step 5: If er < Tr, then Г is recognized as face l from the training set. The distance er is
called the distance within the face space (difs).
Comment: the common Euclidean distance can be used to compute er; however, it has
been reported that the Mahalanobis distance performs better:
er = Σj (1/λj)(wj − wj^l)²
(With this weighting, variations along all axes are treated as equally significant.)
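The recognition steps above can be sketched in the same illustrative spirit (Python rather than the project's Matlab; the function name and threshold are placeholders):

```python
import numpy as np

def recognize(face, mean, U, Omega_train, threshold):
    # Project an unknown face onto the eigenspace (Steps 1-3) and return
    # the index of the nearest training face, or None when the minimum
    # distance exceeds the threshold Tr (Steps 4-5).  Plain Euclidean
    # distance is used here; Mahalanobis weighting would divide each
    # coordinate difference by sqrt(lambda_j).
    Phi = np.asarray(face, float) - mean              # Step 1: normalize
    Omega = U.T @ Phi                                 # Step 2: projection
    d = np.linalg.norm(Omega_train - Omega, axis=1)   # Step 4: distances
    l = int(np.argmin(d))
    return l if d[l] < threshold else None            # Step 5: threshold
```

Here `U` holds the eigenfaces as columns and `Omega_train` holds one row of projection coefficients per enrolled face.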
CHAPTER 4
PALMPRINT IDENTIFICATION USING
SEQUENTIAL MODIFIED HAAR WAVELET
ENERGY
4.1 INTRODUCTION:
Palmprint identification was introduced about a decade ago. It is defined as the
measurement of palmprint features to recognize the identity of a person. The palmprint is
universal because everyone has one, it is easy to capture using digital cameras, and it
does not change much across time. The palmprint has advantages compared to
other biometric systems. An iris scanning biometric system can provide high accuracy,
but the cost of iris scanning devices is high, whereas a palmprint biometric system
can capture hand images using a conventional digital camera. A palmprint biometric
system is user-friendly because users can gain access simply by presenting
their hand in front of the camera. It can also achieve higher
accuracy than a hand geometry biometric system, because the geometry or shape of the
hand is relatively similar for most adults. The palmprint contains geometry features,
line features, point features, statistical features and texture features. The palmprint
geometry features are insufficient to identify individuals, because features such as palm
size and palm width are relatively similar among adults. The palmprint line features
include principal lines, wrinkles and ridges. Ridges are the fine lines of the palmprint;
a high-resolution or inked palmprint image is required to obtain their features. Wrinkles
are the coarse lines of the palmprint, while the principal lines are the major lines present
on most palms (head line, life line and heart line). Separating wrinkles from principal
lines is difficult, since some wrinkles can be as thick as principal lines. Palmprint point
features use the minutiae points or delta points to identify an individual. Point features
require a high-resolution hand image
because a low-resolution hand image does not have clear point locations. Palmprint
statistical features represent the palmprint image in a statistical form to identify an
individual; some of the statistical methods available are Principal Component Analysis
(PCA) and Independent Component Analysis (ICA). Palmprint texture features are
usually extracted using transform-based methods such as the Fourier Transform and the
Discrete Cosine Transform. Besides that, the wavelet transform is also used to extract the
texture features of the palmprint. In this work, a sequential modified Haar wavelet is
proposed to find the Modified Haar Energy (MHE) feature. The sequential modified Haar
wavelet (an "S-transform") maps integer-valued signals onto integer-valued signals
without abandoning the property of perfect reconstruction. In this work, ten images of
the right hand of 100 individuals are acquired using a digital camera. The hand image is
segmented and the key points are located. By referring to the key points, the hand image
is aligned and the central part of the palm is cropped. The palmprint image is enhanced and
resized. The energy features of the palmprint are extracted using the sequential modified
Haar wavelet. The Modified Haar Energy (MHE) is represented as a feature vector and
compared, using the Euclidean distance, with the feature vectors stored in the database.
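The integer-to-integer transform can be illustrated with one common form of the S-transform, in which each sample pair (a, b) produces a low-pass output l = floor((a+b)/2) and a high-pass output h = a − b. The following Python sketch applies it to rows and then columns and takes the mean squared coefficient of each detail sub-band as an energy feature. This is an assumption-laden illustration of the idea, not the authors' exact MHE definition:

```python
import numpy as np

def s_transform_pairs(x):
    # One level of the integer S-transform on a 1-D signal of even length:
    # low-pass l = floor((a+b)/2), high-pass h = a - b.  Integer in,
    # integer out, perfectly invertible: b = l - floor(h/2), a = b + h.
    a, b = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
    return (a + b) // 2, a - b

def modified_haar_energy(img, levels=1):
    # Transform rows, then the columns of each half, yielding LL, LH, HL,
    # HH sub-bands; record the mean squared coefficient of each detail
    # sub-band per level, then recurse on LL.
    band = np.asarray(img, np.int64)
    feats = []
    for _ in range(levels):
        lows, highs = zip(*(s_transform_pairs(r) for r in band))
        L, H = np.array(lows), np.array(highs)
        ll_lh = [s_transform_pairs(c) for c in L.T]
        hl_hh = [s_transform_pairs(c) for c in H.T]
        LL = np.array([p[0] for p in ll_lh]).T
        LH = np.array([p[1] for p in ll_lh]).T
        HL = np.array([p[0] for p in hl_hh]).T
        HH = np.array([p[1] for p in hl_hh]).T
        feats += [float(np.mean(S.astype(float) ** 2)) for S in (LH, HL, HH)]
        band = LL
    return np.array(feats)
```

Because every step uses only integer additions, subtractions and floor division, the transform stays exactly invertible, which is the "perfect reconstruction" property the text refers to.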
4.2 BLOCK DIAGRAM:
Figure 4.1 Block Diagram of Palmprint Identification Using Sequential Modified Haar
Wavelet.
4.3 IMAGE ACQUISITION:
Ten right-hand images from 100 different individuals are captured using a Canon
PowerShot A420 digital camera. During image acquisition, no peg alignment is used, and
the hand image is taken without any special lighting conditions. Dark backgrounds, such
as black and dark blue, are used in this work; the low-intensity background eases the
hand image segmentation. Users are required to spread their fingers apart and lean their
hand against the background during the image acquisition process. Figure 4.2 shows the
acquired hand image. The hand image is saved in JPEG format at 1024 x 768 pixels, with
a resolution of 180 dpi (dots per inch).
Figure 4.2 Hand Image.
4.4 IMAGE PREPROCESSING:
The hand images are preprocessed to obtain the region of interest (ROI), also
known as the palmprint image. In the image preprocessing stage, hand image
segmentation, key point determination and palmprint extraction are performed.
The skin has higher red intensity while the background (black or dark blue) has
lower red intensity. Thus, by referring to the red component of the hand image, the hand
image is segmented using Otsu's method.
Figure 4.3 Binary Hand Image.
Otsu Segmentation:
In 1979, the Japanese scholar Otsu presented a global thresholding algorithm.
Suppose that an image is composed of a target and a background which have different
gray levels, with the target at the higher gray level. The gray levels, based on the
statistical histogram, range from 0 to L. Between 0 and L, a threshold K is chosen to
segment the image into two classes: the background, whose gray levels run from 0 to K,
and the target, whose gray levels run from K+1 to L. The threshold K that makes the
between-class variance the highest among all possible values is the one with which
target and background can be most accurately divided, and it is the final threshold we
are looking for. The equations involved are as follows:
where n(i) is the number of pixels at gray level i, N is the total number of pixels in the
image, p(i) = n(i)/N is the probability of gray level i, w0 and w1 are the probabilities of
the background and the target respectively, μ0, μ1 and μ are the mean gray levels of the
background, the target and the whole image respectively, σ0² and σ1² are the class
variances, σb² and σw² are the between-class and within-class variance values
respectively, and the threshold K maximizing σb²(K) is selected.
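Otsu's criterion, choosing the threshold K that maximizes the between-class variance w0·w1·(μ0 − μ1)², can be sketched as follows. This is an illustrative Python version (the project itself uses Matlab), with a hypothetical function name:

```python
import numpy as np

def otsu_threshold(gray, L=256):
    # Pick the threshold K maximizing the between-class variance
    # sigma_b^2(K) = w0 * w1 * (mu0 - mu1)^2, where w0, w1 are the class
    # probabilities and mu0, mu1 the class mean gray levels.
    hist = np.bincount(np.asarray(gray, np.int64).ravel(), minlength=L)
    p = hist / hist.sum()                      # p(i): gray-level probability
    best_k, best_var = 0, -1.0
    for k in range(1, L):
        w0, w1 = p[:k].sum(), p[k:].sum()
        if w0 == 0 or w1 == 0:                 # one class empty: skip
            continue
        mu0 = (np.arange(k) * p[:k]).sum() / w0       # background mean
        mu1 = (np.arange(k, L) * p[k:]).sum() / w1    # target mean
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_k, best_var = k, var
    return best_k
```

Applied to the red channel of the hand image, pixels above the returned threshold would be labeled as skin, yielding the binary hand image of Figure 4.3.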
After hand image segmentation, the boundary pixels for the binary hand image
are tracked using boundary-tracking algorithm. Figure 4.4 shows the boundary pixels of
the hand image.
The center of the wrist, Pw, is the middle pixel of the boundary pixels located at
the edge of the hand image. Figure 4.4 marks the location of Pw. The distance between
Pw and boundary pixels, Distp, in clockwise direction is calculated. Figure 4.5 shows the
graph of Distp plotted against the index of the boundary pixels.
Figure 4.4 Boundary Pixel of Palm.
From Figure 4.5, Key Point 1, K1, is the first local minimum in the graph while Key
Point 2, K2, is the third local minimum. The coordinates of the key points in the hand
image are determined from the indices of the boundary pixels for K1 and K2. K1 is the
bottom pixel of the gap between the little and ring fingers while K2 is the bottom pixel of
the gap between the middle and index fingers, as in Figure 4.6.
Figure 4.5 Graph of Distp Plotted against the Boundary Pixel Index.
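As a sketch, the two key points can be read off the distance profile as local minima.
This is a hypothetical Python helper for the step above, not the report's code; a real
profile would first be smoothed before taking minima.

```python
def local_minima(dist):
    """Indices of local minima in the wrist-to-boundary distance profile Distp."""
    return [i for i in range(1, len(dist) - 1)
            if dist[i] < dist[i - 1] and dist[i] <= dist[i + 1]]

def key_points(dist):
    """K1 is the first local minimum, K2 the third, per the text."""
    minima = local_minima(dist)
    return minima[0], minima[2]
```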
The angle of rotation for the hand image is calculated according to Figure 4.6.
Let LineK be the line between K1 and K2. LineK is rotated θ degrees clockwise about the
origin K1 so that LineK is parallel with the x-axis for all hand images. The distance
between K1 and K2, DistK, is calculated as in (1).
Figure 4.6 Location of K1, K2 with the Rotation Angle θ.
From the experimental results, the ROI mask is placed 0.2 x DistK below LineK,
while the side length of the square ROI is 1.4 x DistK. Placing the ROI mask 0.2 x DistK
below LineK concentrates the ROI mask at the center of the palm, and the variable ROI
mask size eases the extraction of palmprints of different sizes. Figure 4.7 shows the
length of LineK (DistK), the side length of the square ROI (1.4 x DistK) and the distance
between LineK and the square ROI (0.2 x DistK).
Figure 4.7 DistK, Length of the Square ROI and Distance between LineK and Square ROI.
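The rotation angle θ and DistK follow directly from the key-point coordinates. A minimal
Python sketch, assuming image coordinates and taking (1) to be the Euclidean distance;
the function names are illustrative, not from the report:

```python
import math

def align_params(k1, k2):
    """Rotation angle (degrees) that levels LineK, and its length DistK."""
    dx, dy = k2[0] - k1[0], k2[1] - k1[1]
    theta = math.degrees(math.atan2(dy, dx))  # rotate by theta clockwise about K1
    dist_k = math.hypot(dx, dy)               # Euclidean distance, Eq. (1)
    return theta, dist_k

def roi_dimensions(dist_k):
    """ROI offset below LineK (0.2 * DistK) and square side length (1.4 * DistK)."""
    return 0.2 * dist_k, 1.4 * dist_k
```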
A smaller hand image is obtained by cropping the hand image according to the
minimum and maximum x and y coordinates of the square ROI. The cropped hand image
is rotated by θ degrees clockwise. The diagonal pixels of the rotated image are
determined, and by referring to the index of the first non-black pixel, m, the rotated hand
image is cropped by m pixels on each side (top, bottom, left and right). Figure 4.8 shows
the extracted palmprint image using the proposed method.
The palmprint image is in RGB format. It is converted into grayscale intensity
format, ImG, before image enhancement. Figure 4.9 shows the palmprint image in
grayscale intensity format.
Figure 4.8 Palmprint Image. Figure 4.9 Palmprint Image in Grayscale Format.
The grayscale image is enhanced using two different methods. In the first method,
the grayscale image's histogram is stretched to span the full 256 bins, forming the
Adjusted Palmprint Image, ImA. In the second method, the histogram is equalized over
the full 256 bins, forming the Histogram Equalized Palmprint Image, ImH. To test the
effectiveness of the enhancement, the unenhanced grayscale image is also carried through
further processing. Figure 4.10 shows the Adjusted Palmprint Image, ImA, and the
Histogram Equalized Palmprint Image, ImH, from the same individual.
Figure 4.10 Adjusted Palmprint Image ImA and Histogram Equalized Palmprint Image ImH.
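Both enhancements can be sketched in NumPy. This is an illustrative re-implementation,
roughly what MATLAB's imadjust and histeq do, not the report's code:

```python
import numpy as np

def adjust_full_range(img):
    """Stretch the grayscale histogram to the full 0..255 range (ImA)."""
    f = img.astype(float)
    lo, hi = f.min(), f.max()
    return np.round((f - lo) / (hi - lo) * 255).astype(np.uint8)

def equalize(img):
    """Histogram equalization over the full 256 bins (ImH)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalized CDF in 0..1
    return np.round(cdf[img] * 255).astype(np.uint8)
```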
4.5 SEQUENTIAL HAAR TRANSFORM:
The wavelet transform is a multi-resolution tool that can analyze the palmprint
image at different decomposition levels. Level-one decomposition extracts the fine lines
of the palmprint; the higher the decomposition level, the coarser the extracted palm lines,
such as wrinkles and principal lines. The Haar wavelet is used to find the discontinuity
between two pixels, and it is computationally cheaper than other wavelets such as the
Daubechies, Mexican hat and Morlet wavelets.
Haar wavelet decomposes the palmprint image into approximation (A), horizontal
detail (H), vertical detail (V) and diagonal detail (D). Figure 4.11 shows one of the
methods used to obtain the 2-dimensional Haar coefficients.
Figure 4.11 Haar Wavelet Decomposition.
Here the X or Y axis represents the direction of addition and subtraction; even
denotes the even-numbered pixels (2, 4, 6, ..., 256) and odd the odd-numbered pixels
(1, 3, 5, ..., 255). For wavelet decomposition over multiple levels, the approximation of
the previous level, Ai-1, is further decomposed into the approximation, horizontal detail,
vertical detail and diagonal detail of the next level (Ai, Hi, Vi and Di).
The Haar wavelet coefficients are represented as decimal numbers, requiring eight
bytes to store each coefficient. The division of the subtraction results in the y-axis
operation also reduces the difference between two adjacent pixels. For these reasons, a
sequential Haar wavelet that maps integer-valued pixels onto integer-valued pixels is
suggested. A sequential Haar coefficient requires only two bytes of storage. Cancelling
the division of the subtraction results avoids decimal numbers while preserving the
difference between two adjacent pixels. Figure 4.12 shows the decomposition using the
sequential Haar wavelet.
Figure 4.12 Sequential Haar Transform
Here the X or Y axis again represents the direction of addition and subtraction,
and "divide by 2" results are rounded toward negative infinity. As Figure 4.12 shows, the
addition results are divided by 2 and rounded toward negative infinity. This differs from
the original Haar wavelet decomposition in Figure 4.11, which divides the results by 2
after the operation along the Y-axis. No operation is applied to the subtraction results in
the modified Haar wavelet decomposition.
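A minimal NumPy sketch of one sequential-Haar level, assuming the halved-sum and
undivided-difference scheme described above; the subband naming (H, V, D) follows the
usual convention, with the exact layout fixed by Figures 4.11 and 4.12:

```python
import numpy as np

def seq_haar_level(img):
    """One level of the sequential (integer) Haar decomposition."""
    img = np.asarray(img, dtype=np.int64)
    # operation along the x-axis (pairs of columns)
    lo = (img[:, 0::2] + img[:, 1::2]) >> 1   # halved sum, floored
    hi = img[:, 0::2] - img[:, 1::2]          # undivided difference
    # operation along the y-axis (pairs of rows)
    A = (lo[0::2, :] + lo[1::2, :]) >> 1      # approximation
    V = lo[0::2, :] - lo[1::2, :]             # vertical detail
    H = (hi[0::2, :] + hi[1::2, :]) >> 1      # horizontal detail
    D = hi[0::2, :] - hi[1::2, :]             # diagonal detail
    return A, H, V, D
```

For multi-level decomposition, the approximation A is simply fed back into
`seq_haar_level`; every coefficient stays an integer, so two bytes suffice per value.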
The grayscale image and the enhanced images (ImA and ImH) are decomposed
using the sequential modified Haar wavelet into six decomposition levels. Figure 4.13
shows the coefficients of the six decomposition levels in image representation, and
Figure 4.14 shows the locations of the horizontal, vertical and diagonal details for every
decomposition level in Figure 4.13.
Figure 4.13 Image Representation for Coefficients in Six Level of Decomposition using
Sequential Modified Haar Wavelet.
In this work, only the detail coefficients, C, of the sequential modified Haar
wavelet are used to find the energy feature, because the approximation coefficients
contain the mean component of the palmprint image.
Figure 4.14 Locations of Details in Every Decomposition Levels.
Each detail-coefficient image is further divided into 4 x 4 blocks, and the
Modified Haar Energy (MHE) of each block is calculated using (2), where i is the
decomposition level, j indexes the horizontal, vertical or diagonal detail, k is the block
number from 1 to 16, and P x Q is the size of the block.
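Since Equation (2) is not reproduced here, the block-energy computation can only be
sketched. The version below sums the squared detail coefficients over each of the 4 x 4
blocks, leaving out any P x Q normalization the report may apply:

```python
import numpy as np

def block_energies(detail, blocks=4):
    """Energy per block of a detail-coefficient image (16 values for 4x4 blocks)."""
    C = np.asarray(detail, dtype=float)
    P, Q = C.shape[0] // blocks, C.shape[1] // blocks
    return np.array([np.sum(C[i * P:(i + 1) * P, j * Q:(j + 1) * Q] ** 2)
                     for i in range(blocks) for j in range(blocks)])
```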
The MHE energy features for every detail coefficient are arranged as in (3).
Let DetailHVD,i represent the combination of the horizontal, vertical and diagonal
detail coefficients at decomposition level i, as in (4). The DetailHVD at different
decomposition levels are normalized using (5).
Several types of feature representation are tested in this work. They are:
FV1 - Level one normalized DetailHVD, D1.
FV2 - Level two normalized DetailHVD, D2.
FV3 - Level three normalized DetailHVD, D3.
FV4 - Level four normalized DetailHVD, D4.
FV5 - Level five normalized DetailHVD, D5.
FV6 - Level six normalized DetailHVD, D6.
FV7 - Combination of D1 and D2.
FV8 - Combination of D1, D2 and D3.
FV9 - Combination of D1, D2, D3 and D4.
FV10 - Combination of D1, D2, D3, D4 and D5.
FV11 - Combination of D1, D2, D3, D4, D5 and D6.
FV1 to FV6 are the feature representations for individual decomposition levels
while FV7 to FV11 are the feature representations for combinations of decomposition
levels. Figure 4.15
shows intra-class feature vectors (feature vectors obtained from the same individual at
different times) while Figure 4.16 shows inter-class feature vectors (feature vectors
obtained from different individuals).
Figure 4.15 Intra-class Feature Vectors Figure 4.16 Inter-class Feature Vectors
Figure 4.15 shows that the feature vector does not vary much within a class. In
Figure 4.16, the feature vector of Figure 4.15 is plotted in gray while the feature vector of
another individual is plotted in black; the feature vectors of different individuals
(inter-class) vary in both amplitude and peak locations.
When different decomposition levels are combined (FV7 to FV11), the Euclidean
distances of all decomposition levels are added up. Since some decomposition levels may
give a lower minimum total error rate (MTER) than others, suitable weights can be used
to emphasize the more discriminative decomposition levels. The Euclidean distance and
MTER are explained in the next section (Section 4.6, Feature Matching). The weight of
decomposition level i is defined as
where MTERi is the minimum total error rate at the i-th decomposition level and K is the
total number of decomposition levels. The feature representations for the weighted
combinations of decomposition levels are:
FV12 - Weighted combination of D1 and D2.
FV13 - Weighted combination of D1, D2 and D3.
FV14 - Weighted combination of D1, D2, D3 and D4.
FV15 - Weighted combination of D1, D2, D3, D4 and D5.
FV16 - Weighted combination of D1, D2, D3, D4, D5 and D6.
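Since the weight formula itself is not reproduced above, the combination step can only
be sketched with the weights taken as given. A hypothetical Python helper (names and
data layout are illustrative, not the report's):

```python
import numpy as np

def weighted_distance(fv_a, fv_b, weights):
    """Weighted sum of per-level Euclidean distances between two palmprints.

    fv_a, fv_b: dicts mapping decomposition level -> feature vector (D1..D6);
    weights: dict mapping level -> weight derived from that level's MTER.
    """
    return sum(w * np.linalg.norm(np.asarray(fv_a[lvl]) - np.asarray(fv_b[lvl]))
               for lvl, w in weights.items())
```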
4.6 FEATURE MATCHING:
The Euclidean distance measures the similarity between two feature vectors
using (7), where J is the length of the feature vector and FVi is the feature vector of
individual i.
Each feature vector is matched using the Euclidean distance against the remaining
999 feature vectors in the database. The genuine and imposter Euclidean-distance (ED)
distributions are normalized because every feature vector yields nine genuine matches
and 990 imposter matches; if both graphs were plotted directly, the imposter ED
distribution with 990,000 values would swamp the genuine ED distribution with only
9,000 values. Thus the FAR and FRR are each normalized to 0.5, so that their total
equals one. Theoretically, a system achieves zero error when there are no false
acceptances and no false rejections; since no system is perfect, Figure 4.17 shows the
graph of the false acceptance rate (FAR) and false rejection rate (FRR) versus the
threshold index.
Figure 4.17 Graph of FAR and FRR versus the Threshold Index
The intersection of these two graphs yields the minimum total error rate
(MTER). MTER is defined as the minimum error a system can achieve regardless of
whether the system is intended for civil or forensic application. By referring to the
MTER, a suitable threshold is selected to differentiate genuine users from imposters.
Besides that, the MTERs of FV1 to FV6 are also used to find the weights in FV12
to FV16.
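The FAR/FRR trade-off above can be sketched as follows: genuine and imposter score
sets are swept over candidate thresholds, with each error rate normalized to 0.5 as
described. This is an illustrative helper, not the report's implementation:

```python
import numpy as np

def min_total_error_rate(genuine, imposter, thresholds):
    """Best (threshold, MTER) pair; FRR and FAR each normalized to 0.5."""
    genuine = np.asarray(genuine, dtype=float)
    imposter = np.asarray(imposter, dtype=float)
    best = None
    for t in thresholds:
        frr = 0.5 * np.mean(genuine > t)     # genuine pairs wrongly rejected
        far = 0.5 * np.mean(imposter <= t)   # imposter pairs wrongly accepted
        ter = frr + far
        if best is None or ter < best[1]:
            best = (t, ter)
    return best
```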
CHAPTER 5
SOFTWARE DESCRIPTION
5.1 INTRODUCTION:
5.1.1 MATLAB:
MATLAB® is a high-performance language for technical computing. It integrates
computation, visualization, and programming in an easy-to-use environment where
problems and solutions are expressed in familiar mathematical notation. Typical uses
include:
Math and computation
Algorithm development
Modeling, simulation, and prototyping
Data analysis, exploration, and visualization
Scientific and engineering graphics
Application development, including graphical user interface building.
MATLAB is an interactive system whose basic data element is an array that does not
require dimensioning. This allows you to solve many technical computing problems,
especially those with matrix and vector formulations, in a fraction of the time it would
take to write a program in a scalar noninteractive language such as C or FORTRAN.
The name MATLAB stands for matrix laboratory. MATLAB was originally written
to provide easy access to matrix software developed by the LINPACK and EISPACK
projects. Today, MATLAB uses software developed by the LAPACK and ARPACK
projects, which together represent the state-of-the-art in software for matrix computation.
MATLAB has evolved over a period of years with input from many users. In
university environments, it is the standard instructional tool for introductory and
advanced courses in mathematics, engineering, and science. In industry, MATLAB is the
tool of choice for high-productivity research, development, and analysis.
MATLAB features a family of application-specific solutions called toolboxes. Very
important to most users of MATLAB, toolboxes allow you to learn and apply specialized
technology. Toolboxes are comprehensive collections of MATLAB functions (M-files)
that extend the MATLAB environment to solve particular classes of problems. Areas in
which toolboxes are available include signal processing, control systems, neural
networks, fuzzy logic, wavelets, simulation, and many others.
5.1.2 The MATLAB System:
The MATLAB system consists of five main parts:
1. Development Environment:
This is the set of tools and facilities that help you use MATLAB functions
and files. Many of these tools are graphical user interfaces. It includes the
MATLAB desktop and Command Window, a command history, and browsers for
viewing help, the workspace, files, and the search path.
2. The MATLAB Mathematical Function Library:
This is a vast collection of computational algorithms ranging from
elementary functions like sum, sine, cosine, and complex arithmetic, to more
sophisticated functions like matrix inverse, matrix eigenvalues, Bessel functions,
and fast Fourier transforms.
3. The MATLAB Language:
This is a high-level matrix/array language with control flow statements,
functions, data structures, input/output, and object-oriented programming
features. It allows both "programming in the small" to rapidly create quick and
dirty throw-away programs, and "programming in the large" to create complete
large and complex application programs.
4. Handle Graphics®:
This is the MATLAB graphics system. It includes high-level commands
for two-dimensional and three-dimensional data visualization, image processing,
animation and presentation graphics. It also includes low-level commands that
allow you to fully customize the appearance of graphics as well as to build
complete graphical user interfaces on your MATLAB applications.
5. The MATLAB Application Program Interface (API):
This is a library that allows you to write C and FORTRAN programs that
interact with MATLAB. It includes facilities for calling routines from MATLAB
(dynamic linking), calling MATLAB as a computational engine, and for reading
and writing MAT-files.
5.2 DEVELOPMENT ENVIRONMENT:
Introduction:
This chapter provides a brief introduction to starting and quitting
MATLAB, and the tools and functions that help you to work with MATLAB
variables and files. For more information about the topics covered here, see the
corresponding topics under Development Environment in the MATLAB
documentation, which is available online as well as in print.
Starting and Quitting MATLAB:
Starting MATLAB:
On a Microsoft Windows platform, to start MATLAB,
double-click the MATLAB shortcut icon on your Windows
desktop.
On a UNIX platform, to start MATLAB, type matlab at the
operating system prompt.
After starting MATLAB, the MATLAB desktop opens -
see MATLAB Desktop.
You can change the directory in which MATLAB starts,
define startup options including running a script upon startup, and
reduce startup time in some situations.
Quitting MATLAB:
To end your MATLAB session, select Exit MATLAB from
the File menu in the desktop, or type quit in the Command
Window. To execute specified functions each time MATLAB
quits, such as saving the workspace, you can create and run a
finish.m script.
MATLAB Desktop:
When you start MATLAB, the MATLAB desktop appears, containing
tools (graphical user interfaces) for managing files, variables, and applications
associated with MATLAB.
The first time MATLAB starts, the desktop appears as shown in the
following illustration, although your Launch Pad may contain different entries.
You can change the way your desktop looks by opening, closing, moving
and resizing the tools in it. You can also move tools outside of the desktop or
return them back inside the desktop (docking). All the desktop tools provide
common features such as context menus and keyboard shortcuts.
You can specify certain characteristics for the desktop tools by selecting
Preferences from the File menu. For example, you can specify the font
characteristics for Command Window text. For more information, click the Help
button in the Preferences dialog box.
Desktop Tools:
This section provides an introduction to MATLAB's desktop tools. You can also
use MATLAB functions to perform most of the tasks provided by the desktop tools.
The tools are:
1. Current Directory Browser.
2. Workspace Browser.
3. Array Editor.
4. Editor/Debugger.
5. Command Window.
6. Command History.
7. Launch Pad.
8. Help Browser
Command Window:
Use the Command Window to enter variables and run functions and M-
files.
Command History:
Lines you enter in the Command Window are logged in the Command
History window. In the Command History, you can view previously used
functions, and copy and execute selected lines. To save the input and output from
a MATLAB session to a file, use the diary function.
Running External Programs:
You can run external programs from the MATLAB Command Window.
The exclamation point character! is a shell escape and indicates that the rest of the
input line is a command to the operating system. This is useful for invoking
utilities or running other programs without quitting MATLAB. On Linux, for
example, !emacs magik.m invokes the emacs editor on a file named
magik.m. When you quit the external program, the operating system returns
control to MATLAB.
Launch Pad:
MATLAB's Launch Pad provides easy access to tools, demos, and
documentation.
Help Browser:
Use the Help browser to search and view documentation for all your
MathWorks products. The Help browser is a Web browser integrated into the
MATLAB desktop that displays HTML documents.
To open the Help browser, click the help button in the toolbar, or type
helpbrowser in the Command Window. The Help browser consists of two panes, the
Help Navigator, which you use to find information, and the display pane, where
you view the information.
Help Navigator:
Use the Help Navigator to find information. It includes:
Product filter:
Set the filter to show documentation only for the products you specify.
Contents tab:
View the titles and tables of contents of documentation for your products.
Index tab:
Find specific index entries (selected keywords) in the MathWorks
documentation for your products.
Search tab:
Look for a specific phrase in the documentation. To get help for a specific
function, set the Search type to Function Name.
Favorites tab:
View a list of documents you previously designated as favorites.
Display Pane:
After finding documentation using the Help Navigator, view it in the
display pane. While viewing the documentation, you can:
Browse to other pages: Use the arrows at the tops and bottoms of the
pages, or use the back and forward buttons in the toolbar.
Bookmark pages: Click the Add to Favorites button in the toolbar.
Print pages: Click the print button in the toolbar.
Find a term in the page: Type a term in the Find in page field in the
toolbar and click Go.
Other features available in the display pane are: copying information,
evaluating a selection, and viewing Web pages.
Current Directory Browser:
MATLAB file operations use the current directory and the search path as
reference points. Any file you want to run must either be in the current directory
or on the search path.
Search Path:
To determine how to execute functions you call, MATLAB uses a search
path to find M-files and other MATLAB-related files, which are organized in
directories on your file system. Any file you want to run in MATLAB must reside
in the current directory or in a directory that is on the search path. By default, the
files supplied with MATLAB and MathWorks toolboxes are included in the
search path.
Workspace Browser:
The MATLAB workspace consists of the set of variables (named arrays)
built up during a MATLAB session and stored in memory. You add variables to
the workspace by using functions, running M-files, and loading saved
workspaces.
To view the workspace and information about each variable, use the
Workspace browser, or use the functions who and whos.
To delete variables from the workspace, select the variable and select
Delete from the Edit menu. Alternatively, use the clear function.
The workspace is not maintained after you end the MATLAB session. To
save the workspace to a file that can be read during a later MATLAB session,
select Save Workspace As from the File menu, or use the save function. This
saves the workspace to a binary file called a MAT-file, which has a .mat
extension. There are options for saving to different formats. To read in a MAT-
file, select Import Data from the File menu, or use the load function.
Array Editor:
Double-click on a variable in the Workspace browser to see it in the Array
Editor. Use the Array Editor to view and edit a visual representation of one- or
two-dimensional numeric arrays, strings, and cell arrays of strings that are in the
workspace.
Editor/Debugger:
Use the Editor/Debugger to create and debug M-files, which are programs
you write to run MATLAB functions. The Editor/Debugger provides a graphical
user interface for basic text editing, as well as for M-file debugging.
You can use any text editor to create M-files, such as Emacs, and can use
preferences (accessible from the desktop File menu) to specify that editor as the
default. If you use another editor, you can still use the MATLAB Editor/Debugger
for debugging, or you can use debugging functions, such as dbstop, which sets a
breakpoint.
If you just need to view the contents of an M-file, you can display it in the
Command Window by using the type function.
5.3 MANIPULATING MATRICES:
5.3.1 Entering Matrices:
The best way for you to get started with MATLAB is to learn how to handle
matrices. Start MATLAB and follow along with each example.
You can enter matrices into MATLAB in several different ways:
Enter an explicit list of elements.
Load matrices from external data files.
Generate matrices using built-in functions.
Create matrices with your own functions in M-files.
Start by entering Dürer's matrix as a list of its elements. You have only to follow a
few basic conventions:
Separate the elements of a row with blanks or commas.
Use a semicolon (;) to indicate the end of each row.
Surround the entire list of elements with square brackets [ ].
To enter Dürer's matrix, simply type in the Command Window
A = [16 3 2 13; 5 10 11 8; 9 6 7 12; 4 15 14 1]
MATLAB displays the matrix you just entered.
A =
16 3 2 13
5 10 11 8
9 6 7 12
4 15 14 1
This exactly matches the numbers in the engraving. Once you have entered the
matrix, it is automatically remembered in the MATLAB workspace. You can refer to it
simply as A.
5.3.2 Expressions:
Like most other programming languages, MATLAB provides mathematical
expressions, but unlike most programming languages, these expressions involve entire
matrices. The building blocks of expressions are:
Variables
Numbers
Operators
Functions
5.3.3 Variables:
MATLAB does not require any type declarations or dimension statements. When
MATLAB encounters a new variable name, it automatically creates the variable and
allocates the appropriate amount of storage. If the variable already exists, MATLAB
changes its contents and, if necessary, allocates new storage. For example,
num_students = 25
creates a 1-by-1 matrix named num_students and stores the value 25 in its single
element.
Variable names consist of a letter, followed by any number of letters, digits, or
underscores. MATLAB uses only the first 31 characters of a variable name. MATLAB is
case sensitive; it distinguishes between uppercase and lowercase letters. A and a are not
the same variable. To view the matrix assigned to any variable, simply enter the variable
name.
5.3.4 Numbers:
MATLAB uses conventional decimal notation, with an optional decimal point and
leading plus or minus sign, for numbers. Scientific notation uses the letter e to specify a
power-of-ten scale factor. Imaginary numbers use either i or j as a suffix. Some examples
of legal numbers are
3 -99 0.0001
9.6397238 1.60210e-20 6.02252e23
1i -3.14159j 3e5i
All numbers are stored internally using the long format specified by the IEEE
floating-point standard. Floating-point numbers have a finite precision of roughly 16
significant decimal digits and a finite range of roughly 10^-308 to 10^+308.
5.3.5 Operators:
Expressions use familiar arithmetic operators and precedence rules.
+ Addition
- Subtraction
* Multiplication
/ Division
\ Left division (described in "Matrices and Linear Algebra" in
Using MATLAB)
^ Power
' Complex conjugate transpose
( ) Specify evaluation order
Table 5.1 Arithmetic Operators and Precedence Rules.
5.3.6 Functions:
MATLAB provides a large number of standard elementary mathematical
functions, including abs, sqrt, exp, and sin. Taking the square root or logarithm of a
negative number is not an error; the appropriate complex result is produced
automatically. MATLAB also provides many more advanced mathematical functions,
including Bessel and gamma functions. Most of these functions accept complex
arguments. For a list of the elementary mathematical functions, type help elfun. For a list
of more advanced mathematical and matrix functions, type help specfun or help elmat.
Some of the functions, like sqrt and sin, are built-in. They are part of the
MATLAB core so they are very efficient, but the computational details are not readily
accessible. Other functions, like gamma and sinh, are implemented in M-files. You can
see the code and even modify it if you want. Several special functions provide values of
useful constants.
pi       3.14159265...
i        Imaginary unit, √-1
j        Same as i
eps      Floating-point relative precision, 2^-52
realmin  Smallest floating-point number, 2^-1022
realmax  Largest floating-point number, (2-eps)*2^1023
Inf      Infinity
NaN      Not-a-number
Table 5.2 Special Functions.
5.4 GRAPHICAL USER INTERFACE (GUI):
A graphical user interface (GUI) is a user interface built with graphical objects, such
as buttons, text fields, sliders, and menus. In general, these objects already have meanings
to most computer users. For example, when you move a slider, a value changes; when you
press an OK button, your settings are applied and the dialog box is dismissed. Of course,
to leverage this built-in familiarity, you must be consistent in how you use the various
GUI-building components.
Applications that provide GUIs are generally easier to learn and use since the
person using the application does not need to know what commands are available or how
they work. The action that results from a particular user action can be made clear by the
design of the interface.
The sections that follow describe how to create GUIs with MATLAB. This
includes laying out the components, programming them to do specific things in response
to user actions, and saving and launching the GUI; in other words, the mechanics of
creating GUIs. This documentation does not attempt to cover the "art" of good user
interface design, which is an entire field unto itself. Topics covered in this section
include:
5.4.1 Creating GUIs with GUIDE:
MATLAB implements GUIs as Figure windows containing various styles of
uicontrol objects. You must program each object to perform the intended action when
activated by the user of the GUI. In addition, you must be able to save and launch your
GUI. All of these tasks are simplified by GUIDE, MATLAB's graphical user interface
development environment.
5.4.2 GUI Development Environment:
The process of implementing a GUI involves two basic tasks:
Laying out the GUI components
Programming the GUI components
GUIDE primarily is a set of layout tools. However, GUIDE also generates an M-file
that contains code to handle the initialization and launching of the GUI. This M-file
provides a framework for the implementation of the callbacks - the functions that execute
when users activate components in the GUI.
5.4.3 The Implementation of a GUI:
While it is possible to write an M-file that contains all the commands to lay out a
GUI, it is easier to use GUIDE to lay out the components interactively and to generate
two files that save and launch the GUI:
A FIG-file - contains a complete description of the GUI figure and all of its
children (uicontrols and axes), as well as the values of all object properties.
An M-file - contains the functions that launch and control the GUI and the
Callbacks, which are defined as subfunctions. This M-file is referred to as the
Application M-file in this documentation.
Note that the application M-file does not contain the code that lays out the
uicontrols; this information is saved in the FIG-file.
Figure 5.1 Diagram illustrating parts of a GUI implementation.
5.4.4 Features of the GUIDE-Generated Application M-File:
GUIDE simplifies the creation of GUI applications by automatically generating
an M-file framework directly from your layout. You can then use this framework to code
your application M-file. This approach provides a number of advantages:
The M-file contains code to implement a number of useful features (see
Configuring Application Options for information on these features). The M-file
adopts an effective approach to managing object handles and executing callback routines
(see Creating and Storing the Object Handle Structure for more information). The M-file
provides a way to manage global data (see Managing GUI Data for more information).
The automatically inserted subfunction prototypes for callbacks ensure
compatibility with future releases. For more information, see Generating Callback
Function Prototypes for information on syntax and arguments.
You can elect to have GUIDE generate only the FIG-file and write the
application M-file yourself. Keep in mind that there are no uicontrol creation commands
in the application M-file; the layout information is contained in the FIG-file
generated by the Layout Editor.
5.4.5 Beginning the Implementation Process:
To begin implementing your GUI, proceed to the following sections:
Getting Started with GUIDE - the basics of using GUIDE:
Selecting GUIDE Application Options - set both FIG-file and M-file
options.
Using the Layout Editor - begin laying out the GUI.
Understanding the Application M-File - discussion of programming techniques
used in the application M-file.
Application Examples - a collection of examples that illustrate techniques which
are useful for implementing GUIs.
5.4.6 Command-Line Accessibility:
When MATLAB creates a graph, the figure and axes are included in the list of
children of their respective parents and their handles are available through commands
such as findobj, set, and get. If you issue another plotting command, the output is directed
to the current figure and axes.
GUIs are also created in figure windows. Generally, you do not want GUI
figures to be available as targets for graphics output, since issuing a plotting command
could direct the output to the GUI figure, resulting in the graph appearing in the middle
of the GUI.
In contrast, if you create a GUI that contains an axes and you want commands
entered in the command window to display in this axes, you should enable command-line
access.
5.4.7 User Interface Controls:
The Layout Editor Component palette contains the user interface controls that you
can use in your GUI. These components are MATLAB uicontrol objects and are
programmable via their Callback properties. This section provides information on these
components.
Push Buttons
Sliders
Toggle Buttons
Frames
Radio Buttons
Listboxes
Checkboxes
Popup Menus
Edit Text
Axes
Static Text
Figures
5.5 PROGRAMMING WITH MATLAB:
Push Buttons:
Push buttons generate an action when pressed (e.g., an OK button may
close a dialog box and apply settings). When you click down on a push button, it
appears depressed; when you release the mouse, the button's appearance returns to
its nondepressed state; and its callback executes on the button up event.
Properties to Set:
String - set this property to the character string you want displayed on the push
button.
Tag - GUIDE uses the Tag property to name the callback subfunction in the
application M-file. Set Tag to a descriptive name (e.g., close_button) before
activating the GUI.
Programming the Callback:
When the user clicks on the push button, its callback executes. Push
buttons do not return a value or maintain a state.
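Because a push button holds no state, its callback simply performs an action. A hypothetical callback for a button tagged close_button might look like this (the figure Tag figure1 is an assumption):

```matlab
% Hypothetical callback for a push button tagged close_button.
% Push buttons hold no state, so the callback just carries out the action.
function varargout = close_button_Callback(h, eventdata, handles, varargin)
close(handles.figure1);   % close the GUI figure (Tag assumed to be figure1)
```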
Toggle Buttons:
Toggle buttons generate an action and indicate a binary state (e.g., on or
off). When you click on a toggle button, it appears depressed and remains
depressed when you release the mouse button, at which point the callback
executes. A subsequent mouse click returns the toggle button to the nondepressed
state and again executes its callback.
The callback routine needs to query the toggle button to determine what
state it is in. MATLAB sets the Value property equal to the Max property when
the toggle button is depressed (Max is 1 by default) and equal to the Min property
when the toggle button is not depressed (Min is 0 by default).
From the GUIDE Application M-File:
The following code illustrates how to program the callback in the GUIDE
application M-file.
function varargout = togglebutton1_Callback(h, eventdata, handles, varargin)
button_state = get(h,'Value');
if button_state == get(h,'Max') % toggle button is pressed
elseif button_state == get(h,'Min') % toggle button is not pressed
end
Adding an Image to a Push Button or Toggle Button:
Assign the CData property an m-by-n-by-3 array of RGB values that
define a truecolor image. For example, the array a defines a 16-by-128 truecolor
image using random values between 0 and 1 (generated by rand).
a (:,:,1) = rand(16,128);
a (:,:,2) = rand(16,128);
a (:,:,3) = rand(16,128);
set (h,'CData',a)
Radio Buttons:
Radio buttons are similar to checkboxes, but are intended to be mutually
exclusive within a group of related radio buttons (i.e., only one button is in a
selected state at any given time). To activate a radio button, click the mouse
button on the object. The display indicates the state of the button.
Implementing Mutually Exclusive Behavior:
Radio buttons have two states - selected and not selected. You can query
and set the state of a radio button through its Value property:
Value = Max, button is selected.
Value = Min, button is not selected.
To make radio buttons mutually exclusive within a group, the callback for
each radio button must set the Value property to 0 on all other radio buttons in the
group. MATLAB sets the Value property to 1 on the radio button clicked by the
user.
The following subfunction, when added to the application M-file, can be
called by each radio button callback. The argument is an array containing the
handles of all other radio buttons in the group that must be deselected.
function mutual_exclude(off)
set (off,'Value',0)
Obtaining the Radio Button Handles:
The handles of the radio buttons are available from the handles structure,
which contains the handles of all components in the GUI. This structure is an
input argument to all radio button callbacks.
The following code shows the call to mutual_exclude being made from the
first radio button's callback in a group of four radio buttons.
function varargout = radiobutton1_Callback(h, eventdata, handles, varargin)
off = [handles.radiobutton2, handles.radiobutton3, handles.radiobutton4];
mutual_exclude(off)
% Continue with callback...
After setting the radio buttons to the appropriate state, the callback can
continue with its implementation-specific tasks.
Checkboxes:
Check boxes generate an action when clicked and indicate their state as
checked or not checked. Check boxes are useful when providing the user with a
number of independent choices that set a mode (e.g., display a toolbar or generate
callback function prototypes).
The Value property indicates the state of the check box by taking on the
value of the Max or Min property (1 and 0 respectively by default):
Value = Max, box is checked.
Value = Min, box is not checked.
You can determine the current state of a check box from within its
callback by querying the state of its Value property, as illustrated in the following
example.
function checkbox1_Callback(h, eventdata, handles, varargin)
if (get(h,'Value') == get(h,'Max'))
% then checkbox is checked - take appropriate action
else
% checkbox is not checked - take appropriate action
end
Edit Text:
Edit text controls are fields that enable users to enter or modify text
strings. Use edit text when you want text as input. The String property contains
the text entered by the user.
To obtain the string typed by the user, get the String property in the
callback.
function edittext1_Callback(h, eventdata, handles, varargin)
user_string = get(h,'String');
% proceed with callback...
Obtaining Numeric Data from an Edit Text Component:
MATLAB returns the value of the edit text String property as a character
string. If you want users to enter numeric values, you must convert the characters
to numbers. You can do this using the str2double command, which converts
strings to doubles. If the user enters non-numeric characters, str2double returns
NaN.
You can use the following code in the edit text callback. It gets the value
of the String property and converts it to a double. It then checks if the converted
value is NaN, indicating the user entered a non-numeric character (isnan) and
displays an error dialog (errordlg).
function edittext1_Callback(h, eventdata, handles, varargin)
user_entry = str2double(get(h,'String'));
if isnan(user_entry)
errordlg('You must enter a numeric value','BadInput','modal')
end
% proceed with callback...
Triggering Callback Execution:
On UNIX systems, clicking on the menubar of the Figure window causes
the edit text callback to execute. However, on Microsoft Windows systems, if an
editable text box has focus, clicking on the menubar does not cause the editable
text callback routine to execute. This behavior is consistent with the respective
platform conventions. Clicking on other components in the GUI executes the
callback.
Static Text:
Static text controls display lines of text. Static text is typically used to
label other controls, provide directions to the user, or indicate values associated
with a slider. Users cannot change static text interactively and there is no way to
invoke the callback routine associated with it.
Frames:
Frames are boxes that enclose regions of a Figure window. Frames can
make a user interface easier to understand by visually grouping related controls.
Frames have no callback routines associated with them and only uicontrols can
appear within frames (axes cannot).
Placing Components on Top of Frames:
Frames are opaque. If you add a frame after adding components that you
want to be positioned within the frame, you need to bring forward those
components. Use the Bring to Front and Send to Back operations in the Layout
menu for this purpose.
List Boxes:
List boxes display a list of items and enable users to select one or more
items.
The String property contains the list of strings displayed in the list box.
The first item in the list has an index of 1.
The Value property contains the index into the list of strings that
corresponds to the selected item. If the user selects multiple items, then Value is a
vector of indices.
By default, the first item in the list is highlighted when the list box is first
displayed. If you don't want any item highlighted, set the Value property to
empty, [].
The ListboxTop property defines which string in the list displays as the
topmost item when the list box is not large enough to display all list entries.
ListboxTop is an index into the array of strings defined by the String property and
must have a value between 1 and the number of strings. Noninteger values are
rounded down to the nearest integer.
Single or Multiple Selections:
The values of the Min and Max properties determine whether users can
make single or multiple selections:
If Max - Min > 1, then list boxes allow multiple item selection.
If Max - Min <= 1, then list boxes do not allow multiple item selection.
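A list box callback written against these conventions can read either a single index or a vector of indices from the Value property. The sketch below assumes a list box tagged listbox1 whose String property holds a cell array:

```matlab
% Sketch: read the current selection(s) from a list box callback.
function varargout = listbox1_Callback(h, eventdata, handles, varargin)
index_selected = get(h,'Value');    % scalar, or a vector when Max - Min > 1
list = get(h,'String');             % cell array of the displayed strings
items = list(index_selected);       % the selected string(s)
```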
Selection Type:
Listboxes differentiate between single and double clicks on an item and
set the Figure SelectionType property to normal or open accordingly. See
Triggering Callback Execution for information on how to program multiple
selections.
Triggering Callback Execution:
MATLAB evaluates the list box's callback after the mouse button is
released or a keypress event (including arrow keys) that changes the Value
property (i.e., any time the user clicks on an item, but not when clicking on the list
box scrollbar). This means the callback is executed after the first click of a
double-click on a single item or when the user is making multiple selections.
In these situations, you need to add another component, such as a Done
button (push button) and program its callback routine to query the list box Value
property (and possibly the Figure SelectionType property) instead of creating a
callback for the list box. If you are using the automatically generated application
M-file option, you need to either set the list box Callback property to the empty
string ('') and remove the callback subfunction from the application M-file, or
leave the callback subfunction stub in the application M-file so that no code
executes when users click on list box items.
The first choice is best if you are sure you will not use the list box callback
and you want to minimize the size of the application M-file.
However, if you think you may want to define a callback for the list box at some
time, it is simpler to leave the callback stub in the M-file.
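A hypothetical Done button callback along these lines (the component Tags listbox1 and figure1 are assumptions):

```matlab
% Sketch: a Done push button queries the list box instead of the list
% box handling its own clicks.
function varargout = done_button_Callback(h, eventdata, handles, varargin)
index_selected = get(handles.listbox1,'Value');   % final selection(s)
click = get(handles.figure1,'SelectionType');     % 'normal' or 'open'
if strcmp(click,'open')
    % double-click: act immediately on the single chosen item
end
```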
Popup Menus:
Popup menus open to display a list of choices when users press the arrow.
The String property contains the list of strings displayed in the popup menu. The
Value property contains the index into the list of strings that corresponds to the
selected item. When not open, a popup menu displays the current choice, which is
determined by the index contained in the Value property. The first item in the list
has an index of 1.
Popup menus are useful when you want to provide users with a number of
mutually exclusive choices, but do not want to take up the amount of space that a
series of radio buttons requires.
Programming the Popup Menu:
You can program the popup menu callback to work by checking only the
index of the item selected (contained in the Value property) or you can obtain the
actual string contained in the selected item.
This callback checks the index of the selected item and uses a switch
statement to take action based on the value. If the contents of popup menu are
fixed, then you can use this approach.
function varargout = popupmenu1_Callback(h, eventdata, handles, varargin)
val = get(h,'Value');
switch val
case 1 % The user selected the first item
case 2 % The user selected the second item, etc.
end
This callback obtains the actual string selected in the popup menu. It uses
the value to index into the list of strings. This approach may be useful if your
program dynamically loads the contents of the popup menu based on user action
and you need to obtain the selected string. Note that it is necessary to convert the
value returned by the String property from a cell array to a string.
function varargout = popupmenu1_Callback(h, eventdata, handles, varargin)
val = get(h,'Value');
string_list = get(h,'String');
selected_string = string_list{val}; % convert from cell array to string, etc.
Enabling or Disabling Controls:
You can control whether a control responds to mouse button clicks by
setting the Enable property. Controls have three states:
On - The control is operational
Off - The control is disabled and its label (set by the string property) is
grayed out.
Inactive - The control is disabled, but its label is not grayed out.
When a control is disabled, clicking on it with the left mouse button does
not execute its callback routine. However, the left-click causes two other callback
routines to execute:
First the Figure WindowButtonDownFcn callback executes. Then the
control's ButtonDownFcn callback executes.
A right mouse button click on a disabled control posts a context menu, if
one is defined for that control. See the Enable property description for more
details.
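As a sketch, a callback can disable a control for the duration of a long-running task (the Tag pushbutton1 is an assumption):

```matlab
% Sketch: gray out a control while work is in progress, then restore it.
set(handles.pushbutton1,'Enable','off')   % callback no longer fires
% ... perform the long-running task here ...
set(handles.pushbutton1,'Enable','on')    % control is operational again
```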
Axes:
Axes enable your GUI to display graphics (e.g., graphs and images). Like
all graphics objects, axes have properties that you can set to control many aspects
of its behavior and appearance. See Axes Properties for general information on
axes objects.
Axes Callbacks:
Axes are not uicontrol objects, but can be programmed to execute a
callback when users click a mouse button in the axes. Use the axes
ButtonDownFcn property to define the callback.
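A minimal sketch of such a callback, defined as a string that MATLAB evaluates when the user clicks inside the axes:

```matlab
% Sketch: an axes that reacts to mouse clicks via its ButtonDownFcn.
ax = axes;
set(ax,'ButtonDownFcn','disp(''axes clicked'')')
```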
Plotting to Axes in GUIs:
GUIs that contain axes should ensure the Command-line accessibility
option in the Application Options dialog is set to Callback (the default). This
enables you to issue plotting commands from callbacks without explicitly
specifying the target axes.
GUIs with Multiple Axes:
If a GUI has multiple axes, you should explicitly specify which axes you
want to target when you issue plotting commands. You can do this using the axes
command and the handles structure. For example, ‘axes(handles.axes1)’ makes
the axes whose Tag property is axes1 the current axes, and therefore the target for
plotting commands. You can switch the current axes whenever you want to target
different axes. See GUI with Multiple Axes for an example that uses two axes.
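The pattern can be sketched as follows for a GUI assumed to have axes tagged axes1 and axes2:

```matlab
% Sketch: direct plotting output to specific axes in a multi-axes GUI.
axes(handles.axes1)     % make axes1 the current axes
plot(rand(10,1))        % this graph lands in axes1
axes(handles.axes2)     % switch the target
bar(1:5)                % this graph lands in axes2
```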
Figure:
Figures are the windows that contain the GUI you design with the Layout
Editor. See the description of Figure properties for information on what Figure
characteristics you can control.
CHAPTER 7
RESULTS AND CONCLUSION
7.1 RESULTS:
The results of the project are shown in the following figures:
1. Initialize graphic user interface.
Figure 7.1 Project graphic user interface.
2. Click on query button and select an unknown face image.
Figure 7.2 Project graphic user interface.
3. Unknown face image loaded into axes1.
Figure 7.3 Project graphic user interface.
4. Click on training button and eigenfaces of the database images will be calculated.
Figure 7.4 Project graphic user interface.
5. Click on Feature extraction button and the unknown face image features get
extracted.
Figure 7.5 Project graphic user interface.
6. Click on query button and select an unknown palm image.
Figure 7.6 Project graphic user interface.
7. Click on feature extraction button and Palm features get extracted.
Figure 7.7 Project graphic user interface.
8. Click on feature matching button and the Euclidean distance between the unknown
images and database images gets calculated. If the Euclidean distance is less than the
threshold value then the person is genuine and Authentic is displayed on the GUI as
shown below.
Figure 7.8 Project graphic user interface.
9. If the Euclidean distance is greater than the threshold value then the person is not
genuine and Not Authentic is displayed on the GUI as shown below.
Figure 7.9 Project graphic user interface.
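The accept/reject rule in steps 8 and 9 can be sketched as follows (variable names are illustrative; the threshold value is the one used in the appendix code):

```matlab
% Sketch of the matching decision: accept only when the Euclidean
% distance falls at or below the threshold.
if euclidean_distance <= threshold    % e.g., threshold = 0.565
    set(handles.text2,'string','Authenticated');
else
    set(handles.text2,'string','Not Authenticated');
end
```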
7.2 CONCLUSION:
The project “FACE RECOGNITION AND PALMPRINT IDENTIFICATION
FOR AUTHENTICATION PROCESS” has been successfully designed and simulated
using Matlab software.
7.3 FUTURE SCOPE OF THE PROJECT:
This project can be applied in many areas. Some of them are given as follows:
Law enforcement application: Face and palm recognition technology is primarily
used in law enforcement applications, especially mug shot albums (static
matching) and video surveillance (real-time matching by video image sequences).
In transactional authentication: static matching of photographs on credit cards,
ATM cards and photo ID to real-time.
In document control: digital chip in passports and driver’s licenses.
In computer security: user access verification.
In physical access control: smart doors.
In voter registration: election accuracy.
In time and attendance: entry and exit information.
In computer games: a virtual “you” plays against virtual opponents.
Also, this project can be interfaced with other biometric systems to provide
stronger authentication.
References
[1] M. Turk, A. Pentland, Eigenfaces for Recognition, Journal of Cognitive
Neuroscience, Vol. 3, No. 1, 1991, pp. 71-86.
[2] M.A. Turk, A.P. Pentland, Face Recognition Using Eigenfaces, Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition, 3-6 June 1991, Maui, Hawaii, USA,
pp. 586-591.
[3] A. Pentland, B. Moghaddam, T. Starner, View-Based and Modular Eigenspaces for Face
Recognition, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,
21-23 June 1994, Seattle, Washington, USA, pp. 84-91.
[4] H. Moon, P.J. Phillips, Computational and Performance aspects of PCA-based Face
Recognition Algorithms, Perception, Vol. 30, 2001, pp. 303-321.
[5] D. Swets, J. Weng, "Using Discriminant Eigenfeatures for Image Retrieval ", IEEE
Transactions on Pattern Analysis and Machine Intelligence, 18(8), pp. 831-836, 1996.
[6] Ilker Atalay, "Face Recognition Using Eigenfaces," M.Sc. Thesis, Istanbul Technical
University, Institute of Science and Technology, January 1996.
[7] David Zhang, Palmprint Authentication, Kluwer Academic Publishers: Boston, 2004,
p. 195.
[8] Edward Wong Kie Yih, G. Sainarayanan, Ali Chekima, Narendra G, Palmprint
Identification Using Sequential Modified Haar Wavelet Energy, IEEE International
Conference on Signal Processing, Communications and Networking, Madras Institute of
Technology, Anna University Chennai, India, Jan 4-6, 2008, pp. 411-416.
[9] Website: www.mathworks.com.
APPENDIX
CODE:
function varargout = gaitrec(varargin)
% GAITREC M-file for gaitrec.fig
% GAITREC, by itself, creates a new GAITREC or raises the existing
% singleton*.
%
% H = GAITREC returns the handle to a new GAITREC or the handle to
% the existing singleton*.
%
% GAITREC('CALLBACK',hObject,eventData,handles,...) calls the local
% function named CALLBACK in GAITREC.M with the given input arguments.
%
% GAITREC('Property','Value',...) creates a new GAITREC or raises the
% existing singleton*. Starting from the left, property value pairs are
% applied to the GUI before gaitrec_OpeningFunction gets called. An
% unrecognized property name or invalid value makes property application
% stop. All inputs are passed to gaitrec_OpeningFcn via varargin.
%
% *See GUI Options on GUIDE's Tools menu. Choose "GUI allows only one
% instance to run (singleton)".
%
% See also: GUIDE, GUIDATA, GUIHANDLES
% Edit the above text to modify the response to help gaitrec
% Last Modified by GUIDE v2.5 13-Apr-2010 08:20:57
% Begin initialization code - DO NOT EDIT
gui_Singleton = 1;
gui_State = struct('gui_Name', mfilename, ...
'gui_Singleton', gui_Singleton, ...
'gui_OpeningFcn', @gaitrec_OpeningFcn, ...
'gui_OutputFcn', @gaitrec_OutputFcn, ...
'gui_LayoutFcn', [] , ...
'gui_Callback', []);
if nargin && ischar(varargin{1})
gui_State.gui_Callback = str2func(varargin{1});
end
if nargout
[varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT
% --- Executes just before gaitrec is made visible.
function gaitrec_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject handle to Figure
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
% varargin command line arguments to gaitrec (see VARARGIN)
% Choose default command line output for gaitrec
handles.output = hObject;
a = ones(256,256);
axes(handles.axes1);
imshow(a);
axes(handles.axes2);
imshow(a);
% % % b = ones(64,150);
axes(handles.axes3);
imshow(a);
% Update handles structure
guidata(hObject, handles);
% UIWAIT makes gaitrec wait for user response (see UIRESUME)
% uiwait(handles.Figure1);
% --- Outputs from this function are returned to the command line.
function varargout = gaitrec_OutputFcn(hObject, eventdata, handles)
% varargout cell array for returning output args (see VARARGOUT);
% hObject handle to Figure
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
% Get default command line output from handles structure
varargout{1} = handles.output;
% --- Executes on button press in query.
function query_Callback(hObject, eventdata, handles)
% hObject handle to query (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles
[filename, pathname] = uigetfile('*.tif', 'Pick an M-file');
if isequal(filename,0) || isequal(pathname,0)
disp('User pressed cancel')
else
disp(['User selected ', fullfile(pathname, filename)])
end
inp=imread(fullfile(pathname,filename));
[r c p]=size(inp);
if p==3
inp=rgb2gray(inp);
end
axes(handles.axes1);
imshow(inp);
handles.inp = inp;
handles.filename = filename;
% Update handles structure
guidata(hObject, handles);
% --- Executes on button press in training.
function trainin_Callback(hObject, eventdata, handles)
% hObject handle to trainin (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
T = [];
for i = 1 : 20
% I have chosen the name of each image in databases as a corresponding
% number. However, it is not mandatory!
str = int2str(i);
str = strcat(str,'.jpg');
img = imread(str);
img = rgb2gray(img);
[irow icol] = size(img);
temp = reshape(img',irow*icol,1); % Reshaping 2D images into 1D image vectors
T = [T temp]; % 'T' grows after each turn
end
[m, A, Eigenfaces] = EigenfaceCore(T);
handles.m = m;
handles.A = A;
handles.Eigenfaces = Eigenfaces;
warndlg('Process Completed');
% Update handles structure
guidata(hObject, handles);
% --- Executes on button press in extratfeature.
function extratfeature_Callback(hObject, eventdata, handles)
% hObject handle to extratfeature (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
m = handles.m;
A = handles.A;
inp = handles.inp;
filename = handles.filename;
Eigenfaces = handles.Eigenfaces;
[OutputName h Recognized_index MLresult] = Recognition(filename, m, A, ...
    Eigenfaces, inp);
handles.Recognized_index = Recognized_index;
handles.OutputName = OutputName;
handles.MLresult = MLresult;
handles.h = h;
warndlg('Process Completed');
% Update handles structure
guidata(hObject, handles);
% --- Executes on button press in palminput.
function palminput_Callback(hObject, eventdata, handles)
% hObject handle to palminput (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
[filename path2] = uigetfile('*.bmp','Pick an Image File');
if filename==0
warndlg('User Pressed Cancel');
else
a = imread(filename);
axes(handles.axes2);
imshow(a);
handles.pamin = a;
handles.filename = filename;
end
% Update handles structure
guidata(hObject, handles);
% --- Executes on button press in featureextract.
function featureextract_Callback(hObject, eventdata, handles)
% hObject handle to featureextract (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
% % % sharpened = handles.pamin;
filename = handles.filename;
len = length(filename);
newfile = filename(1:len-4);
[stat,mess]=fileattrib([newfile '.mat']);
if stat == 1
load (strcat(newfile,'.mat'))
handles.queryFV1 = queryFV1;
handles.queryFV_comb1 = queryFV_comb1;
else
input = imread(filename);
[queryFV, queryFV_comb] = Stransform(input);
queryFV1 = queryFV;
queryFV_comb1 = queryFV_comb;
handles.queryFV1 = queryFV1;
handles.queryFV_comb1 = queryFV_comb1;
save(newfile, 'queryFV1','queryFV_comb1');
end
guidata(hObject, handles);
warndlg('Feature Extraction is Completed');
% --- Executes on button press in featurematch.
function featurematch_Callback(hObject, eventdata, handles)
% hObject handle to featurematch (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
OutputName = handles.OutputName;
h = handles.h;
Eu=(h/1e+15);
MLresult = handles.MLresult;
inp = handles.inp;
Recognized_index = handles.Recognized_index;
if Recognized_index<11
indresult = 1;
else
indresult = 0;
end
filename = handles.filename ;
queryFV = handles.queryFV1 ;
if strcmp(filename,'rt1.bmp')
ind1=1;
else
ind1=0;
end
queryFV_comb = handles.queryFV_comb1;
file = dir(fullfile(cd,'*.bmp'));
len_file = length(file);
for i=1:len_file
imagename = file(i).name;
dist(i).file = imagename;
len = length(imagename);
newfile = imagename(1:len-4);
[stat,mess]=fileattrib([newfile '.mat']);
if stat == 1
load (strcat(newfile,'.mat'))
databaseFV = queryFV1;
databaseFV_comb = queryFV_comb1;
else
database_image = imread(imagename);
[databaseFV1,databaseFV_comb1] = Stransform(database_image);
databaseFV = databaseFV1;
databaseFV_comb = databaseFV_comb1;
queryFV1 = databaseFV;
queryFV_comb1 = databaseFV_comb;
save(newfile, 'queryFV1','queryFV_comb1');
end
[distanceFv, distanceFv_comb] = Euclidean(queryFV, databaseFV, ...
    queryFV_comb, databaseFV_comb);
Euclidean_Dist_Fv(i) = distanceFv; %#ok<AGROW>
d = Euclidean_Dist_Fv(i);
Euclidean_Dist_Fv_comb(i) = distanceFv_comb;
end
[value ind] = sort(Euclidean_Dist_Fv/(10^3));
rec_imag = dist(ind(1)).file;
if MLresult>1
if value(1)<=0.565
if (ind1==1)&&(MLresult>1)
set(handles.text2,'string','Authenticated');
axes(handles.axes3);
imshow(inp);
else
set(handles.text2,'string','Not Authenticated');
end
else
set(handles.text2,'string','Not Authenticated');
end
else
set(handles.text2,'string','Not Authenticated');
end
guidata(hObject, handles);
% --- Executes on button press in clear.
function clear_Callback(hObject, eventdata, handles)
% hObject handle to clear (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
a = ones(256,256);
axes(handles.axes1);
imshow(a);
axes(handles.axes2);
imshow(a);
axes(handles.axes3);
imshow(a);
set(handles.text2,'string',' ');
EigenfaceCore.m:
function [m, A, Eigenfaces] = EigenfaceCore(T)
% Use Principle Component Analysis (PCA) to determine the most
% discriminating features between images of faces.
%
% Description: This function gets a 2D matrix, containing all training image vectors
% and returns 3 outputs which are extracted from training database.
%
% Argument: T - A 2D matrix, containing all 1D image vectors.
% Suppose all P images in the training database
% have the same size of MxN. So the length of 1D
% column vectors is M*N and 'T' will be a MNxP 2D matrix.
%
% Returns: m - (M*Nx1) Mean of the training database
%          Eigenfaces - (M*Nx(P-1)) Eigenvectors of the covariance matrix of
%                       the training database
% A - (M*NxP) Matrix of centered image vectors
%
% See also: EIG
%%%%%%%%%%%%%%%%%%%%%%%% Calculating the mean image
m = mean(T,2); % Computing the average face image m = (1/P)*sum(Tj's) (j = 1 : P)
Train_Number = size(T,2);
%%%%%%%%%%%%%%%%%%%%%%%% Calculating the deviation of each image from mean image
A = [];
for i = 1 : Train_Number
temp = double(T(:,i)) - m; % Computing the difference image for each image: Ai = Ti - m
A = [A temp]; % Merging all centered images
end
%%%%%%%%%%%%%%%%%%%%%%%% Snapshot method of the Eigenface approach
% We know from linear algebra theory that for a PxQ matrix, the maximum
% number of non-zero eigenvalues that the matrix can have is min(P-1,Q-1).
% Since the number of training images (P) is usually less than the number
% of pixels (M*N), the most non-zero eigenvalues that can be found are equal
% to P-1. So we can calculate eigenvalues of A'*A (a PxP matrix) instead of
% A*A' (a M*NxM*N matrix). It is clear that the dimensions of A*A' are much
% larger than those of A'*A. So the dimensionality will decrease.
L = A'*A; % L is the surrogate of covariance matrix C=A*A'.
[V D] = eig(L); % Diagonal elements of D are the eigenvalues for both L=A'*A and C=A*A'.
%%%%%%%%%%%%%%%%%%%%%%%% Sorting and eliminating eigenvalues
% All eigenvalues of matrix L are sorted and those who are less than a
% specified threshold, are eliminated. So the number of non-zero
% eigenvectors may be less than (P-1).
L_eig_vec = [];
for i = 1 : size(V,2)
if( D(i,i)>1 )
L_eig_vec = [L_eig_vec V(:,i)];
end
end
%%%%%%%%%%%%%%%%%%%%%%%% Calculating the eigenvectors of covariance matrix 'C'
% Eigenvectors of covariance matrix C (or so-called "Eigenfaces")
% can be recovered from L's eigenvectors.
Eigenfaces = A * L_eig_vec; % A: centered image vectors
Euclidean.m:
function [distanceFv, distanceFv_comb] = Euclidean(queryFV, databaseFV, ...
    queryFV_comb, databaseFV_comb)
len1 = size(queryFV);
len2 = size(queryFV_comb);
k=1;
for j=1:6
for i =1:len1(2)
dist(k) = (queryFV(j,i)- databaseFV(j,i)).^2;
k =k+1;
end
end
ii=1;
len3 = length(dist);
for I =1:len3
if dist(I)>=0
Dist(ii) = dist(I);
ii=ii+1;
end
end
distanceFv = sqrt(sum(Dist));
%-----------------
l=1;
for J=1:5
for i =1:len2(2)
dist1(l) = (queryFV_comb(J,i)- databaseFV_comb(J,i)).^2;
l =l+1;
end
end
ii=1;
len4 = length(dist1);
for JJ =1:len4
if dist1(JJ)>=0
Dist1(ii) = dist1(JJ);
ii=ii+1;
end
end
if ii==1
Dist1(1)=0;
end
distanceFv_comb = sqrt(sum(Dist1));
Recognition.m:
function [OutputName Euc_dist_min Recognized_index MLresult] = ...
    Recognition(TestImage, m, A, Eigenfaces, a)
% Recognizing step....
%
% Description: This function compares two faces by projecting the images into
%              facespace and measuring the Euclidean distance between them.
%
% Argument: TestImage - Path of the input test image
%
% m - (M*Nx1) Mean of the training
% database, which is output of 'EigenfaceCore' function.
%
% Eigenfaces - (M*Nx(P-1)) Eigen vectors of the
% covariance matrix of the training
% database, which is output of 'EigenfaceCore' function.
%
% A - (M*NxP) Matrix of centered image
% vectors, which is output of 'EigenfaceCore' function.
%
% Returns: OutputName - Name of the recognized image in the training database.
%
% See also: RESHAPE, STRCAT
%%%%%%%%%%%%%%%%%%%%%%%% Projecting centered image vectors into facespace
% All centered images are projected into facespace by multiplying in
% Eigenface basis's. Projected vector of each face will be its corresponding
% feature vector.
ProjectedImages = [];
Train_Number = size(Eigenfaces,2);
for i = 1 : Train_Number
temp = Eigenfaces'*A(:,i); % Projection of centered images into facespace
ProjectedImages = [ProjectedImages temp];
end
%%%%%%%%%%%%%%%%%%%%%%%% Extracting the PCA features from test image
InputImage = imread(TestImage);
InputImage=imresize(InputImage,[200 180]);
temp = InputImage(:,:,1);
[irow icol] = size(temp);
InImage = reshape(temp',irow*icol,1);
Difference = double(InImage)-m; % Centered test image
ProjectedTestImage = Eigenfaces'*Difference; % Test image feature vector
%%%%%%%%%%%%%%%%%%%%%%%% Calculating Euclidean distances
% Euclidean distances between the projected test image and the projection
% of all centered training images are calculated. Test image is
% supposed to have minimum distance with its corresponding image in the
% training database.
[MLresult] = finding(a);
Euc_dist = [];
for i = 1 : Train_Number
q = ProjectedImages(:,i);
temp = ( norm( ProjectedTestImage - q ) )^2;
Euc_dist = [Euc_dist temp];
end
[Euc_dist_min , Recognized_index] = min(Euc_dist);
OutputName = strcat(int2str(Recognized_index),'.jpg');
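The recognition step above — project every centered training image and the centered test image into facespace, then pick the training image with the smallest squared Euclidean distance — can be sketched in NumPy. This is a hedged illustration of the same idea, not a line-for-line port of the thesis code; the function and variable names are my own:

```python
import numpy as np

def recognize(test_vec, mean_face, centered_train, eigenfaces):
    """Project a test face into facespace and return the index of the
    nearest training face by squared Euclidean distance.

    test_vec       : (M*N,)   raw test image as a column vector
    mean_face      : (M*N,)   mean of the training images
    centered_train : (M*N, P) centered training image vectors (columns)
    eigenfaces     : (M*N, K) eigenface basis (columns)
    """
    # Feature vectors of all training images: K x P
    projected_train = eigenfaces.T @ centered_train
    # Center the test image, then project it into the same basis
    projected_test = eigenfaces.T @ (test_vec - mean_face)
    # Squared Euclidean distance to every training projection
    dists = np.sum((projected_train - projected_test[:, None]) ** 2, axis=0)
    return int(np.argmin(dists))
```

The `argmin` index plays the same role as `Recognized_index` in the MATLAB code above, which is then turned into a filename with `strcat`.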
Stransform.m:

function [FV, FV_comb] = Stransform(rt)
% Computes multi-level Haar-based energy feature vectors (FV) and their
% cumulative level combinations (FV_comb) for the input image rt.
%rt = imread('img_0286.jpg');
bm = imresize(rt,[256 256]);
bm = double(bm);
% bm = rgb2gray(bm1);
% bm=rt;
for i = 1:6
[A H V D] = sequential_Haar(bm);
% [A H V D] = dwt2(bm,'haar');
k=1;
% temp(i) = sum(sum(A));
[row col] = size(H);
for ii = 1:row/4:row
for jj = 1:col/4:col
blockH(:,:,k) = double(H((ii:row/4-1+ii),(jj:col/4-1+jj)));
blockV(:,:,k) = double(V((ii:row/4-1+ii),(jj:col/4-1+jj)));
blockD(:,:,k) = double(D((ii:row/4-1+ii),(jj:col/4-1+jj)));
k = k+1;
end
end
% The MHE energy feature is computed for every block of detail coefficients:
for I = 1:16 % MHEi,j = [MHEi,j,1,...MHEi,j,16];---------(3)
mheH(i,I) =double( sum(sum(blockH(:,:,I).^2)));
mheV(i,I) = double(sum(sum(blockV(:,:,I).^2)));
mheD(i,I) =double( sum(sum(blockD(:,:,I).^2)));
end
% DetailHVD -> combination of H, V, D coefficients in each level-------(4)
HDetailHVD(i,:) = double(mheH(i,:));
VDetailHVD(i,:) = double(mheV(i,:));
DDetailHVD(i,:) = double(mheD(i,:));
DetailHVD_sum = double(sum(HDetailHVD(i,:)) + sum(VDetailHVD(i,:)) + sum(DDetailHVD(i,:)));
% The detail HVD energies in different decomposition levels are normalized (5)
% tempH = (HDetailHVD(i,:)./DetailHVD_sum);
if DetailHVD_sum
HD(i,(1:16)) = HDetailHVD(i,:)./DetailHVD_sum;
VD(i,(1:16)) = VDetailHVD(i,:)./DetailHVD_sum;
DD(i,(1:16)) = DDetailHVD(i,:)./DetailHVD_sum;
else
HD(i,(1:16)) = HDetailHVD(i,:);
VD(i,(1:16)) = VDetailHVD(i,:);
DD(i,(1:16)) = DDetailHVD(i,:);
end
FV(i,:)=[double(HD(i,:)) double(VD(i,:)) double(DD(i,:))];
FV(i,:) = double(FV(i,:));
switch i
case 1
A1 = A;
H1 = H;
V1 = V;
D1 = D;
case 2
A2 = A;
H2 = H;
V2 = V;
D2 = D;
case 3
A3 = A;
H3 = H;
V3 = V;
D3 = D;
case 4
A4 = A;
H4 = H;
V4 = V;
D4 = D;
case 5
A5 = A;
H5 = H;
V5 = V;
D5 = D;
case 6
A6 = A;
H6 = H;
V6 = V;
D6 = D;
end
bm = A;
blockH =[];
blockV =[];
blockD =[];
end
%
% level6 = [A6,H6;V6,D6];
% level5 = [level6,H5;V5,D5];
level4 = [A4,H4;V4,D4];
level3 = [level4,H3;V3,D3];
level2 = [level3,H2;V2,D2];
level1 = [level2,H1;V1,D1];
FV_comb(1,:)=(FV(1,:)+FV(2,:));
FV_comb(2,:)=(FV(1,:)+FV(2,:)+FV(3,:));
FV_comb(3,:)=(FV(1,:)+FV(2,:)+FV(3,:)+FV(4,:));
FV_comb(4,:)=(FV(1,:)+FV(2,:)+FV(3,:)+FV(4,:)+FV(5,:));
FV_comb(5,:)=(FV(1,:)+FV(2,:)+FV(3,:)+FV(4,:)+FV(5,:)+FV(6,:));
% figure;imshow(level1,[]);
% tem = [A1,H1;V1,D1];
% figure;imshow(tem,[]);
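The per-level feature construction in `Stransform` — split each detail subband (H, V, D) into a 4x4 grid of blocks, take the sum of squared coefficients (the MHE energy) in each block, and normalize by the total energy across all three subbands — can be sketched compactly in NumPy. This is a simplified illustration under my own naming, not the thesis implementation:

```python
import numpy as np

def block_energies(band, grid=4):
    """Sum of squared coefficients in each cell of a grid x grid split."""
    r, c = band.shape
    br, bc = r // grid, c // grid
    return np.array([np.sum(band[i * br:(i + 1) * br,
                                 j * bc:(j + 1) * bc] ** 2)
                     for i in range(grid) for j in range(grid)])

def level_features(H, V, D):
    """Normalized 48-element energy feature for one decomposition level:
    16 block energies per subband, divided by the level's total energy."""
    eH, eV, eD = block_energies(H), block_energies(V), block_energies(D)
    total = eH.sum() + eV.sum() + eD.sum()
    if total:  # guard against an all-zero level, as the MATLAB code does
        eH, eV, eD = eH / total, eV / total, eD / total
    return np.concatenate([eH, eV, eD])
```

Stacking `level_features` over six decomposition levels and forming running sums of the rows would reproduce the roles of `FV` and `FV_comb` above.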
function [A, H, V, D] = sequential_Haar(fim1)
% One level of a sequential (lifting-style) Haar decomposition: returns the
% approximation (A) and the detail subbands (H, V, D), each half the input size.
% if level==1
% temp=fim1;
% return;
% end
[ r c ] = size(fim1);
Yeven = zeros(r,(c/2));
%first level decomposition
%one even dimension
for j = 1:1:r
a = 2;
for k = 1:1:(c/2)
Yeven(j,k) = fim1(j,a);
a = a+2;
end
end
%one odd dimension
Yodd = zeros(r,(c/2));
for j = 1:1:r
a = 1;
for k = 1:1:(c/2)
Yodd(j,k) = fim1(j,a);
a = a+2;
end
end
[ lenr lenc ] = size(Yodd) ;
%one dim haar
for j = 1:1:lenr
for k = 1:1:lenc
Ylow(j,k) = Yodd(j,k)-Yeven(j,k);
Yhigh(j,k) = round((Yeven(j,k)+Yodd(j,k))/2);
end
end
%2nd dimension
[len2r len2c ] = size(Ylow);
for j = 1:1:(len2c)
a=2;
for k = 1:1:(len2r/2)
%even separation of one dim
Yloweven(k,j) = Ylow(a,j);
Yhgheven(k,j) = Yhigh(a,j);
a=a+2;
end
end
%odd separation of one dim
for j = 1:1:(len2c)
a=1;
for k =1:1:(len2r/2)
Ylowodd(k,j) = Ylow(a,j);
Yhghodd(k,j) = Yhigh(a,j);
a = a+2;
end
end
%2d haar
[ len12r len12c ] = size(Ylowodd) ;
for j = 1:1:len12r
for k = 1:1:len12c
%2nd level hh
A(j,k) = round((Yhgheven(j,k)+Yhghodd(j,k))/2);
%2nd level hl
H(j,k) = Yhgheven(j,k)-Yhghodd(j,k);
%2nd level lh
V(j,k) = round((Ylowodd(j,k)+Yloweven(j,k))/2);
%2nd level ll
D(j,k) = Yloweven(j,k) - Ylowodd(j,k);
end
end
% level=level-1;
% figure;imshow(A,[]);
% figure;imshow(H,[]);
% figure;imshow(V,[]);
% figure;imshow(D,[]);
% Mosaic of the four subbands (not returned; useful for display/debugging)
temp = [A,H];
temp1 = [V,D];
out = [temp;temp1];
return;
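The even/odd split with pairwise averaging and differencing used in `sequential_Haar` is the standard Haar lifting step, applied first along columns and then along rows. A compact NumPy version of one 2-D level might look like the following; it is an illustrative sketch of the technique, not a line-for-line port of the MATLAB above (subband naming conventions vary between texts):

```python
import numpy as np

def haar_level(img):
    """One 2-D Haar-style level via even/odd averaging and differencing.
    Returns (A, H, V, D): approximation plus three detail subbands,
    each at half the input resolution (input sides must be even)."""
    img = np.asarray(img, dtype=float)
    # Column pass: pairwise average -> low-pass, pairwise difference -> high-pass
    low  = (img[:, 0::2] + img[:, 1::2]) / 2.0
    high =  img[:, 0::2] - img[:, 1::2]
    # Row pass: apply the same split to both column outputs
    A = (low[0::2, :] + low[1::2, :]) / 2.0    # low-low: approximation
    V =  low[0::2, :] - low[1::2, :]           # low-high
    H = (high[0::2, :] + high[1::2, :]) / 2.0  # high-low
    D =  high[0::2, :] - high[1::2, :]         # high-high: diagonal detail
    return A, H, V, D
```

Calling `haar_level` repeatedly on the returned approximation `A` gives the multi-level decomposition that the six-iteration loop in `Stransform` performs.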