5/24/2018 System Requirement Specification
1/87
BioMetric Criminals Identification System
Submitted By:
Abdul Bari Malik 905-FBAS/BSSE/F09
Asif Sharif 909-FBAS/BSSE/F09
ProjectSupervisor:
Mr. Syed Muhammad Saqlain
Assistant Professor
Department of Computer Science and Software Engineering
Faculty of Basic and Applied Sciences
International Islamic University, Islamabad
2013
In the name of Allah (SWT), the Most Beneficent, the Most Merciful
International Islamic University, Islamabad
Faculty of Basic and Applied Sciences
Department of Computer Sciences and Software Engineering
FINAL APPROVAL
Dated:
It is certified that we have read the project submitted by Abdul Bari and Asif Sharif, and it
is our judgment that this project is of sufficient standard to warrant its acceptance by
International Islamic University, Islamabad for the Bachelor's degree in Software
Engineering.
COMMITTEE
External Examiner
Mr. Muhammad Nadeem
Assistant Professor,
DCS & SE, FBAS,
International Islamic University, Islamabad
Internal Examiner
Mr. Zulqarnain Hashmi
Lecturer,
DCS & SE, FBAS,
International Islamic University, Islamabad
Supervisor
Mr. Syed Muhammad Saqlain
Assistant Professor,
DCS & SE, FBAS,
International Islamic University, Islamabad
BCIS Dissertation
i
A dissertation submitted to
DEPARTMENT OF COMPUTER SCIENCES
AND
SOFTWARE ENGINEERING
INTERNATIONAL ISLAMIC UNIVERSITY ISLAMABAD,
in partial fulfillment of the requirements for the Bachelor's degree in
Computer Sciences
BCIS Dedication
ii
DEDICATION
This thesis is dedicated to our parents who have been a great source of motivation and
inspiration and supported us all the way since the beginning of our studies.
BCIS Declaration
iii
DECLARATION
We hereby declare that this project report, neither as a whole nor in part, has been
copied from any other source. If any material has been used, it is used as a concept
derived from the original material, and all references are mentioned in the relevant
sections. We further declare that we have completed this work on the basis of our
personal efforts and under the sincere guidance of our teachers. If any part of this
work is proved to have been copied from any source, we will stand by the
consequences. No portion of the work presented in this report has been submitted in
support of any application for any other degree or qualification at this or any other
university.
Abdul Bari Malik
905-FBAS/ BSSE /F09
Asif Sharif
909-FBAS/ BSSE/ F09
BCIS Acknowledgment
iv
ACKNOWLEDGEMENTS
To Him belongs the dominion of the Heavens and the Earth; it is He who gives life and
death, and He has power over all things (Al-Quran).
First of all, we would like to thank Almighty Allah, who is invincible and who
blessed us with the abilities to complete this project. The completion of this project
depended solely on His blessings. Whenever we faced any problem, we always looked
towards Allah the Almighty for His blessings to help us stand firm and remain
determined in the face of the problem.
There are a number of people without whom this project might not have been written, and
to whom we are greatly indebted.
We especially thank our parents, who helped us in our most difficult times; it is
due to their untiring efforts that we are at this position.
We would also like to thank all our professors throughout our Bachelor's course for
giving us the opportunity to learn and grow towards the attainment of this degree.
Whenever one has a difficult task ahead, the first problem is climbing and seeing
past the first stair, because after the first step, one always tries to move up and
up. So when we decided to take Biometric Person Identification using Face
Recognition as our final project, this was our move towards the first stair, and we were
fully supported in stepping onto it by our teacher Mr. Syed Muhammad Saqlain,
who later wore the gown of our project supervisor. Without his instructions and support
we would not have been able to move towards our destination.
Apart from the above honorable personalities, we owe loving thanks to all our batch mates
who supported us, always gave us new ideas to include, and were there as our
CRITICS, which helped us climb to the last stair.
Finally, we would like to thank everybody who was important to the successful
realization of this project, and we apologize that we could not mention everyone
personally one by one.
Abdul Bari Malik
905-FBAS/ BSSE /F09
Asif Sharif
909-FBAS/ BSSE/ F09
BCIS Abstract
vi
PROJECT IN BRIEF
Project Title: BioMetric Criminals Identification System
Objective: To develop a system that identifies persons, along with
their profiles, on the basis of their faces
Undertaken By: Abdul Bari
Asif Sharif
Supervised By: Mr. Syed Muhammad Saqlain
Date Started: April 8, 2013
Date Completed: December 29, 2013
Language: JAVA
Tool(s) Used: Eclipse IDE
Microsoft Office 2010
Sparx Enterprise Architect 7.5
Visual Paradigm for UML 11
Operating System: Microsoft Windows 7
System Used: Dual Core processor, 2.80 GHz
ABSTRACT
This system is able to verify a person's identity through his/her facial features. This is
achieved using Eigen faces, Principal Component Analysis (PCA) and OpenCV
object detection.
Persons are enrolled into the system by providing their personal information and frontal
face samples. These face samples are verified by detecting the face in each sample, and
preprocessing is then performed on them. Once the persons are enrolled, the system is
trained on the preprocessed face samples using PCA. The system can then be used to
recognize persons: the person to be recognized comes in front of the camera, his/her face
sample is taken, preprocessed and compared with the enrolled images, and the result is
returned as matched or not matched.
BCIS Table of Contents
vii
Table of Contents
CHAPTER 1 .......................................... 1
1. Introduction ..................................... 2
1.1. Problem Statement .............................. 2
1.2. Purpose ........................................ 3
1.3. Scope .......................................... 3
1.4. Definition, Acronyms, and Abbreviations ........ 3
1.5. Objective of the Project ....................... 3
1.6. Positioning .................................... 3
1.6.1. Problem Statement ............................ 3
1.7. Main Project Achievements ...................... 4
1.8. Product Overview ............................... 4
1.8.1. Product Perspective .......................... 4
1.8.2. Assumptions and Dependencies ................. 4
1.8.3. Cost and Pricing ............................. 4
1.8.4. System Model ................................. 5
1.9. Product Features ............................... 5
1.9.2. User Characteristics ......................... 6
1.9.3. Functional Requirement ....................... 6
1.9.4. Organization of Report ....................... 6
1.9.5. Tools and Technologies ....................... 7
CHAPTER 2 .......................................... 8
2. Literature Survey ................................ 9
2.1. Projects Developed ............................. 9
2.1.1. ImageWare Software, Inc. ..................... 9
2.1.2. VeriLook SDK (Face identification for stand-alone or Web applications) ... 10
2.1.3. TrueFace ..................................... 10
2.1.4. FaceSDK ...................................... 10
2.1.5. Intelligent Vision Systems, Inc. ............. 11
2.1.6. FaceIt PC .................................... 11
2.2. Techniques for Face Recognition ................ 12
2.2.1. Geometric Features ........................... 12
2.2.2. Eigen Faces .................................. 12
2.2.3. Graph Matching ............................... 13
2.2.4. Fisher Faces (Linear Discriminant Analysis algorithm) ... 13
2.2.5. Template Matching ............................ 14
2.2.6. Neural Network Approach ...................... 14
2.2.7. Stochastic Modeling .......................... 16
2.2.8. N-tuple Classifiers .......................... 16
2.3. Basic Concepts ................................. 18
2.3.1. Preprocessing ................................ 18
2.3.2. Object Detection ............................. 18
2.3.3. Face Recognition ............................. 19
3. Analysis ......................................... 29
3.1. Use Cases Model ................................ 29
3.1.1. Use Case Diagram ............................. 29
3.1.2. Use Case Descriptions in Brief ............... 30
3.1.3. Use Case Description in Detail ............... 31
3.1.4. System Sequence Diagrams ..................... 35
3.2. Domain / Conceptual Model ...................... 39
3.3. Activity Diagram ............................... 40
3.3.1. Admin Side Activity Diagram .................. 40
3.3.2. User Side Activity Diagram ................... 41
CHAPTER 4 .......................................... 42
4. System Design .................................... 43
4.1. Class Diagram .................................. 44
4.2. Entity Relationship Diagram .................... 45
4.3. Sequence Diagrams .............................. 46
4.3.1. SD1: Enrollment .............................. 47
4.3.2. SD2: Preprocessing ........................... 47
4.3.3. SD3: Training ................................ 48
4.3.4. SD4: Testing ................................. 48
CHAPTER 5 .......................................... 49
5. Implementation ................................... 50
5.1. Tools and Technology ........................... 51
5.2. Deployment Diagram ............................. 51
CHAPTER 6 .......................................... 52
6. Software Testing ................................. 53
6.1. Why is Software Tested? ........................ 55
6.1.1. To Improve Quality ........................... 55
6.1.2. For Verification and Validation (V&V) ........ 55
6.1.3. For Reliability Estimation ................... 55
6.2. Software Testing Techniques .................... 56
6.2.1. Test Case Design ............................. 56
6.2.2. White Box Testing ............................ 56
6.2.3. Basic Path Testing ........................... 56
6.2.4. Regression Testing ........................... 56
6.2.5. Control Structure Testing .................... 57
6.2.6. Black Box Testing ............................ 57
6.3. Software Testing Strategies .................... 57
6.3.1. Unit Testing ................................. 57
6.3.2. Integration Testing .......................... 57
6.3.3. Function Testing ............................. 58
6.3.4. System Testing ............................... 58
6.3.5. Acceptance Testing ........................... 58
6.4. BPIS Testing ................................... 58
6.5. Test Case ...................................... 58
6.6. Deriving Test Cases ............................ 59
6.7. Test Case Description .......................... 60
6.7.1. Loading of Menu .............................. 60
6.7.2. Enrollment Test Case ......................... 60
6.7.3. User Input Handling .......................... 61
6.7.4. Face Detection ............................... 61
6.7.5. Database Inserting ........................... 62
6.7.6. Training ..................................... 62
6.7.7. Testing ...................................... 63
7. Conclusion ....................................... 65
8. User Manual ...................................... 73
PC Minimum System Requirements ...................... 73
8.1. Main Window .................................... 73
8.2. Enrollment Wizard Window 1 ..................... 74
8.3. Enrollment Wizard Window 2 ..................... 75
8.4. Recognition Window ............................. 76
References .......................................... 78
Table of Figures
Figure 2.1: Flowchart of Face Recognition using Eigen Faces ... 22
Figure 3.1: Use Case Diagram ........................ 29
Figure 3.2: SSD1 Enrollment ......................... 37
Figure 3.3: SSD2 Preprocessing ...................... 37
Figure 3.4: SSD3 Training ........................... 38
Figure 3.5: SSD4 Testing ............................ 38
Figure 3.6: Domain Model ............................ 39
Figure 3.7: Admin Side Activity Diagram ............. 40
Figure 3.8: User Side Activity Diagram .............. 41
Figure 4.1: Class Diagram ........................... 44
Figure 4.2: Entity Relationship Diagram ............. 45
Figure 4.3: Sequence Diagram of Enrollment .......... 47
Figure 4.4: Sequence Diagram of Preprocessing ....... 47
Figure 4.5: Sequence Diagram of Training ............ 48
Figure 4.6: Sequence Diagram of Testing ............. 48
Figure 5.1: Deployment Diagram ...................... 51
Table of Tables
Table 6.1: Main Menu Test Case ...................... 60
Table 6.2: Enrollment Test Case ..................... 60
Table 6.3: User Input Handling Test Case ............ 61
Table 6.4: Face Detection Test Case ................. 61
Table 6.5: Database Inserting Test Case ............. 62
Table 6.6: Training Test Case ....................... 62
Table 6.7: Testing Test Case ........................ 63
Chapter 1 Introduction
BCIS 1
CHAPTER 1
Introduction
1. Introduction

The capacity of human beings to recognize faces is quite remarkable. It is nearly automatic
and so intelligent that it persists even when the shape of a person's face changes with the
passage of time. We aim to bring this property of natural intelligence into artificial
intelligence so that faces can be recognized electronically. Computers have no built-in
features to perform this task; we have to program it.
There are many approaches used for identifying a person, of which the most common are
username and password to log in to a personal computer, PIN (Personal Identification
Number) systems such as ATMs, and token systems such as the National ID Card or a
vehicle license. But these methods suffer from many problems, such as lost or stolen
cards, forgotten passwords, and worn-out cards. Due to these problems, considerable
interest has developed in biometric identification systems, which use pattern
characteristics of the person.
1.1. Problem Statement

Given an image representing a frame taken from a video stream, automatic face
recognition is a particularly complex task that involves detection and location of faces in
a cluttered background, followed by normalization and recognition. The human face is a
very challenging pattern to detect and recognize: while its anatomy is rigid enough that
all faces have the same structure, there are at the same time many environmental and
personal factors affecting facial appearance. The main problem of face recognition is the
large variability of the recorded images due to pose, illumination conditions, facial
expressions, use of cosmetics, different hairstyles, presence of glasses, beard, etc. Images
of the same individual taken at different times may sometimes exhibit more variability
due to the aforementioned factors (intrapersonal variability) than images of different
individuals due to gender, race, age and individual variations (extrapersonal variability).
One way of coping with intrapersonal variations is to include images with such variations
in the training set. While this is a good practice for variations such as facial expressions,
use of cosmetics and presence of glasses or a beard, it may not be successful in the case
of illumination or pose variations.
1.2. Purpose

This project aims to identify persons. The technique is to store some images of each
person in our database along with his/her details. These images are stored as database
records so that any person can be identified: if an image is matched, we conclude that
he/she is the required person. The project thus provides a very friendly environment for
users to identify persons easily.
1.3. Scope

The scope of the project is confined to storing images. When a person has to be
identified, his/her image is compared with the images and details stored in the database.
1.4. Definition, Acronyms, and Abbreviations

BPIS: Biometric Person Identification System
1.5. Objective of the Project

The aim of this project is to come up with a simple and improved implementation of
person identification using personal computers. In this project we have used the Eigen
Faces and Principal Component Analysis (PCA) algorithms to determine similarities
between faces, considering different poses of a person, and also to maintain the profiles
of the persons along with their faces.
1.6. Positioning

1.6.1. Problem Statement

The problem of: manually identifying persons.
Affects: security agencies and investigation departments (employees).
The impact of which is: administration needs to manually manage the records of
persons, and it is hard and time-consuming to identify persons.
A successful solution would be: the proposed system will automatically manage the
information of persons and identify persons using biometric techniques.
1.7. Main Project Achievements

The project achievements can be stated as follows:
- Adaptation of two algorithms in Java, PCA and Eigen Faces, that compute similarity between faces.
- Insertion, deletion and updating of user profiles along with their faces.
- Development of a graphical user interface which allows the user to carry out experiments to test the system.
- A set of experiments carried out with 4 different pictures of 5 to 10 different subjects, which found the recognition rate of the system to be 80%.
1.8. Product Overview

The general description comprises the following phases.

1.8.1. Product Perspective

The product is a standalone desktop application, with a Windows service for executing
the product. It integrates with MySQL on one end and interfaces with a GUI package to
provide the visual content; in between lies the business logic.
1.8.2. Assumptions and Dependencies

It is assumed that the user has a camera attached to the system, whether a built-in
webcam or a separate USB camera.
1.8.3. Cost and Pricing

To be decided.
1.8.4. System Model

The following figure shows the system model.
1.9. Product Features

Following are brief descriptions of the components of the system model.

1.9.1.1. Enrollment

In this process, the user's personal information is entered and stored in the database, and
the user's frontal face samples are taken. Face samples are verified through face
detection, and the user is provided with an option to save a sample. At least four samples
of each user are taken.
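The report does not list the enrollment code, but the rules above (face-verified samples, a minimum of four per user) can be sketched in plain Java. The `Profile` class, its method names, and the flattened-pixel sample representation are hypothetical, illustrative stand-ins, not the actual BPIS classes:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative enrollment record: a profile collects only samples in which a
// face was actually detected, and enrollment completes at four samples.
class Profile {
    private static final int MIN_SAMPLES = 4; // "at least four samples" per user

    private final String name;
    private final List<double[]> faceSamples = new ArrayList<>(); // preprocessed pixel vectors

    Profile(String name) { this.name = name; }

    // A sample is stored only after face detection has verified it contains a face.
    void addSample(double[] pixels, boolean faceDetected) {
        if (faceDetected) faceSamples.add(pixels);
    }

    // Enrollment is complete once the minimum number of verified samples is collected.
    boolean isEnrolled() { return faceSamples.size() >= MIN_SAMPLES; }

    String getName() { return name; }
    List<double[]> getSamples() { return faceSamples; }

    public static void main(String[] args) {
        Profile p = new Profile("Test Person");
        for (int i = 0; i < 4; i++) p.addSample(new double[]{0.1 * i}, true);
        p.addSample(new double[]{9.9}, false); // rejected: no face detected
        // prints: Enrolled: true (4 samples)
        System.out.println("Enrolled: " + p.isEnrolled() + " (" + p.getSamples().size() + " samples)");
    }
}
```

In the real system the `faceDetected` flag would come from the OpenCV object-detection step described in the abstract.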
1.9.1.2. Training

Classes are computed and Eigen faces of the classes are generated; these are used as the
face space for the recognition process.
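The training step can be sketched with the standard Eigen faces construction: subtract the mean face, build the small N x N Gram matrix of the mean-centered samples (rather than the huge pixel-by-pixel covariance matrix), take its dominant eigenvector, and map it back to image space. The sketch below uses only the JDK and a simple power iteration; the class name, structure, and the single-eigenface simplification are assumptions for illustration, not the project's actual OpenCV-based code:

```java
// Illustrative sketch of the Eigen faces training step, JDK only.
class EigenfaceTrainer {

    // Mean face: the average of all (flattened) face sample vectors.
    static double[] meanFace(double[][] samples) {
        double[] mean = new double[samples[0].length];
        for (double[] s : samples)
            for (int k = 0; k < mean.length; k++) mean[k] += s[k] / samples.length;
        return mean;
    }

    // Dominant eigenface: diagonalize the small n x n Gram matrix
    // L[i][j] = phi_i . phi_j (phi_i = sample_i - mean), then map the
    // eigenvector back to image space. Power iteration finds the top
    // eigenvector; a real system would extract several.
    static double[] topEigenface(double[][] samples, double[] mean) {
        int n = samples.length, d = mean.length;
        double[][] phi = new double[n][d];
        for (int i = 0; i < n; i++)
            for (int k = 0; k < d; k++) phi[i][k] = samples[i][k] - mean[k];

        double[][] L = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                for (int k = 0; k < d; k++) L[i][j] += phi[i][k] * phi[j][k];

        double[] v = new double[n];
        v[0] = 1.0; // simple deterministic start (assumed not orthogonal to the answer)
        for (int iter = 0; iter < 100; iter++) {
            double[] w = new double[n];
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++) w[i] += L[i][j] * v[j];
            double norm = Math.sqrt(dot(w, w));
            for (int i = 0; i < n; i++) v[i] = w[i] / norm;
        }

        // u = sum_i v[i] * phi_i, normalized: one eigenface in image space.
        double[] u = new double[d];
        for (int i = 0; i < n; i++)
            for (int k = 0; k < d; k++) u[k] += v[i] * phi[i][k];
        double norm = Math.sqrt(dot(u, u));
        for (int k = 0; k < d; k++) u[k] /= norm;
        return u;
    }

    private static double dot(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i];
        return s;
    }

    public static void main(String[] args) {
        // Four toy "faces" of dimension 3, varying mostly along the first axis.
        double[][] samples = {{0, 0, 0}, {4, 0, 0}, {2, 1, 0}, {2, -1, 0}};
        double[] mean = meanFace(samples);
        double[] u = topEigenface(samples, mean);
        System.out.println("mean[0]=" + mean[0] + "  |u[0]|=" + Math.abs(u[0]));
    }
}
```

The Gram-matrix detour is what makes training feasible: with N enrolled samples of d pixels each, an N x N eigenproblem replaces a d x d one.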
[System model figure (Section 1.8.4): User Info and Face Samples feed Enrollment; samples pass through Preprocessing to PCA, which produces the Eigen Vectors and Eigen Values used for Training and Recognition.]
1.9.1.3. Recognition

A face sample is taken of the person to be recognized. Again, the face sample is verified
through face detection and preprocessed. Similarity scores are then computed and the
best match is found.
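A minimal sketch of this matching step, assuming similarity is measured as the usual Eigen faces nearest-neighbor comparison of weight vectors: the probe face is projected onto the eigenfaces, and the enrolled projection at the smallest Euclidean distance wins, with distances above a threshold reported as "not matched". `Recognizer`, `project`, `bestMatch`, and the threshold value are hypothetical names and values; the report does not show the actual code:

```java
// Illustrative recognition step for an Eigen faces system, JDK only.
class Recognizer {

    // Weight of a face along each eigenface: w[j] = u_j . (x - mean).
    static double[] project(double[] face, double[] mean, double[][] eigenfaces) {
        double[] w = new double[eigenfaces.length];
        for (int j = 0; j < eigenfaces.length; j++)
            for (int k = 0; k < face.length; k++)
                w[j] += eigenfaces[j][k] * (face[k] - mean[k]);
        return w;
    }

    // Index of the closest enrolled weight vector, or -1 ("not matched")
    // when no Euclidean distance falls below the acceptance threshold.
    static int bestMatch(double[] probe, double[][] enrolled, double threshold) {
        int best = -1;
        double bestDist = threshold;
        for (int i = 0; i < enrolled.length; i++) {
            double dist = 0;
            for (int j = 0; j < probe.length; j++) {
                double diff = probe[j] - enrolled[i][j];
                dist += diff * diff;
            }
            dist = Math.sqrt(dist);
            if (dist < bestDist) { bestDist = dist; best = i; }
        }
        return best;
    }

    public static void main(String[] args) {
        double[][] enrolledWeights = {{0.0, 0.0}, {5.0, 5.0}};
        int match = bestMatch(new double[]{0.2, 0.1}, enrolledWeights, 1.0);
        // prints: matched person 0
        System.out.println(match >= 0 ? "matched person " + match : "not matched");
    }
}
```

The threshold is a tuning parameter: it trades false accepts against false rejects, and would be calibrated on enrolled data rather than fixed in code.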
1.9.2. User Characteristics

Running this software does not require any advanced knowledge of computers. The
software is very user friendly, so any person with basic computer knowledge can use it.
1.9.3. Functional Requirement

The basic requirements are briefly described below:

Enrollment
- Taking the snapshots of the user
- Detecting the face in the snapshot
- Maintaining the profile

Administration of user profiles
- Insert
- Update
- Delete

Identification
- Taking the snapshot
- Detecting the face
- Recognizing the face
1.9.4. Organization of Report

Chapter 2 contains the literature reviewed for the BPIS system and the basic concepts,
which include preprocessing, object detection and face recognition. Chapter 3 covers
analysis, which includes the use cases, use case diagram, SSDs (system sequence
diagrams) and domain model. Chapter 4 covers system design, including the class
diagram and sequence diagrams. Chapter 5 describes the implementation details and
Chapter 6 the testing mechanisms. Chapter 7 concludes the report and Chapter 8 contains
the user manual.
1.9.5. Tools and Technologies

The main tools and technologies used in the development of BPIS are:
- Eclipse IDE
- SQL Server
- Web camera
- SPARX Enterprise Architect
Chapter 2 Literature Survey & Basic Concepts
BCIS 8
CHAPTER 2
Literature Survey and Basic Concepts
2. Literature Survey

A great deal of study was done for the development of this project. Many projects have
been developed in the area of biometrics, and many techniques are used for face
recognition.
2.1. Projects Developed

2.1.1. ImageWare Software, Inc.

ImageWare Systems, Inc. (IWS) is a leader in cutting-edge, identity-based credential
management solutions driven by biometric technology, delivering next-generation
biometrics as an interactive and scalable cloud-based solution.
IWS brings together cloud and mobile technology to offer multi-factor authentication for
smartphone users and mobile clients. IWS supports multi-modal biometric authentication,
including voice, fingerprint, and facial recognition, all of which can be combined with
other authentication and access control facilities, including tokens, digital certificates,
passwords, and PINs, to provide the ultimate level of assurance and accountability for
corporate networks and Web applications, from mobile phones and tablets to PC desktop
environments.
Key solution offerings:

Biometric Engine 2.0

This is a scalable, agnostic, multi-biometric identity management solution that ensures
only valid individuals gain access to controlled areas or secure documents. It is agnostic
as to biometric algorithm and hardware, and can manage tens of millions of individuals
using standard off-the-shelf hardware, minimizing the cost of ownership.
GoMobile Interactive

This is a platform that allows any mobile application to be secured with biometric
identification. It enables a range of unprecedented activities, from secure sharing of
sensitive information to the realization of a true mobile wallet. The biometrics-based
authentication gives users confidence that their personal information is secure, while the
push-marketing capabilities allow companies unparalleled interactivity that can be
personalized to the needs and interests of their customers.
2.1.2. VeriLook SDK (face identification for stand-alone or Web applications)
Neurotechnology has developed a face recognition algorithm. VeriLook facial
identification technology is designed for biometric system developers and integrators.
The technology assures system performance and reliability with live face detection,
simultaneous multiple-face recognition and fast face matching in 1-to-1 and 1-to-many
modes. VeriLook is available as a software development kit that allows development of
stand-alone and Web-based solutions on the Microsoft Windows, Linux, Mac OS X and
Android platforms.
The VeriLook algorithm implements advanced face localization, enrollment and matching
using robust digital image processing algorithms, with the same high recognition quality
on PCs, embedded and mobile devices. The algorithm is able to perform simultaneous
multiple-face processing in a single frame, as well as live face detection and gender
classification. Facial feature points can optionally be extracted.
2.1.3. TrueFace
This software was developed at Miros, Inc. The company's software allows face
recognition through the use of a video camera and a computer; the site provides free
demos of its products to demonstrate how a face can be used to access secure web pages,
and includes information on TrueFace, a biometric that eliminates cards for check-cashing
ATMs.
2.1.4. FaceSDK
FaceSDK is a software development kit that lets you add facial recognition features to
your desktop and web applications. Compatible with 32- and 64-bit apps on multiple
platforms (Windows, Mac and Linux), FaceSDK enables you to use facial recognition
technology in your software, so that you can develop apps to authenticate users with
webcams, find matching faces in multiple images, recognize people in graphic editors or
create morphing and animation effects.
FaceSDK can be used in both online and desktop applications, anywhere accurate facial
recognition is required, from funny-face animation effects to biometric authentication for
high-security systems.
This SDK includes several samples in different programming languages (Microsoft
Visual C++, C#, Visual Basic and more) for testing purposes, as well as a demo to test all
of the SDK's possibilities right away.
2.1.5. Intelligent Vision Systems, Inc.
Products focus on face recognition (searches in databases of faces), face verification
(verification of the identity of a single individual), electronic lock control (software that
activates an electronic door lock mechanism via the printer port and a simple interface),
and ATM/secure access/computer security applications.
2.1.6. FaceIt PC
This software was developed at Visionics Corporation. Information on the company's
biometric recognition system (a software engine that automatically detects, locates and
identifies human faces from live video or static images) is provided; the company focuses
on building applications for banking, intranet security, surveillance, time and attendance,
law enforcement, national IDs, application security, multimedia search, passport control,
service kiosks, entitlement programs, voter registration, enterprise security, database
access, teleconferencing, airport/luggage security, border control, consumer research, and
vehicular sentry.
2.2. Techniques for Face Recognition
Following are the techniques used for face recognition systems.
2.2.1. Geometric Features
Many people have explored geometric-feature-based methods for face recognition.
Kanade presented an automatic feature extraction method based on ratios of distances and
reported a recognition rate of 45-75% on a database of 20 people. Brunelli and Poggio
computed a set of geometrical features such as nose width and length, mouth position,
and chin shape. They reported a 90% recognition rate on a database of 47 people.
However, they showed that a simple template matching scheme provides 100%
recognition for the same database. Cox et al. introduced a mixture-distance technique
which achieves a recognition rate of 95% using a query database of 95 images from a
total of 685 images. Each face is represented by 30 manually extracted distances.
Systems which employ precisely measured distances between features may be most
useful for finding possible matches in a large mug-shot database. For other applications,
automatic identification of these points would be required, and the resulting system would
depend on the accuracy of the feature location algorithm.
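The matching step in such systems reduces to a nearest-neighbour search over vectors of measured distances. The feature names and numbers below are invented for illustration; none of the cited systems is reproduced here:

```python
import numpy as np

# Hypothetical geometric features per face: nose width, nose length,
# mouth position and chin width, in normalized units.
gallery = {
    "person_a": np.array([0.30, 0.55, 0.62, 0.48]),
    "person_b": np.array([0.26, 0.60, 0.58, 0.52]),
    "person_c": np.array([0.35, 0.50, 0.65, 0.45]),
}

def match(probe):
    """Nearest neighbour over geometric feature vectors."""
    return min(gallery, key=lambda p: np.linalg.norm(gallery[p] - probe))

probe = np.array([0.27, 0.59, 0.57, 0.51])   # noisy re-measurement
print(match(probe))  # -> person_b
```

The whole method stands or falls on how accurately those distances are located in the first place, which is the dependence noted above.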
2.2.2. Eigen Faces
Eigenfaces is the name given to a set of eigenvectors when they are used in the computer
vision problem of human face recognition. The approach of using eigenfaces for
recognition was developed by Sirovich and Kirby (1987) and used by Matthew Turk
and Alex Pentland in face classification. The eigenvectors are derived from the
covariance matrix of the probability distribution over the high-dimensional vector space
of face images. The eigenfaces themselves form a basis set of all images used to construct
the covariance matrix. This produces dimension reduction by allowing the smaller set of
basis images to represent the original training images. Classification can be achieved by
comparing how faces are represented by the basis set.
Turk and Pentland present results on a database of 16 subjects with various head
orientations, scalings and lighting conditions. Their images otherwise appear identical,
with little variation in facial expression, facial details, pose, etc. For lighting, orientation
and scale variation their system achieves 96%, 85% and 64% correct classification
respectively. Scale is normalized to the eigenface size based on an estimate of the head
size. The middle of the face is accentuated, reducing any negative effect of changing
hairstyles and backgrounds.
2.2.3. Graph Matching
Another approach to face recognition is the well-known method of graph matching.
Lades et al. present a dynamic link architecture for distortion-invariant object
recognition, which employs elastic graph matching to find the closest stored graph.
Objects are represented with sparse graphs whose vertices are labeled with a
multi-resolution description in terms of a local power spectrum, and whose edges are
labeled with geometric distances. They present good results with a database of 87 people
and test images composed of different expressions and faces turned 15 degrees. The
matching process is computationally expensive, taking roughly 25 seconds to compare an
image against 87 stored objects when using a parallel machine with 23 transputers.
Wiskott et al. use an updated version of the technique and compare 300 faces against 300
different faces of the same people taken from the FERET database. They report a
recognition rate of 97.3%.
2.2.4. Fisher Faces (Linear Discriminant Analysis)
The Linear Discriminant Analysis (LDA) algorithm is also used in face recognition.
Originally developed in 1936 by R.A. Fisher, discriminant analysis is a classic method of
classification that has stood the test of time. Discriminant analysis often produces models
whose accuracy approaches (and occasionally exceeds) that of more complex modern
methods.
LDA is similar to Principal Component Analysis (PCA). In both PCA and LDA we look
for the linear combinations of the variables which explain the given data well. LDA
explicitly attempts to model the difference between the classes of data. PCA, on the other
hand, does not take any class differences into account; it builds the feature combinations
that best capture the overall variance in the data.
In a computer based face recognition system, each face is represented by a large number
of pixel values. Linear discriminant analysis is primarily used here to reduce the number
of features to a more manageable number before classification. Each of the new
dimensions is a linear combination of pixel values, which form a template. The linear
combinations obtained using Fisher's linear discriminant are called Fisher Faces.
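Fisher's criterion itself fits in a few lines of NumPy. The following two-class sketch on synthetic data is our own illustration, not the project's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two synthetic classes of 4-dimensional feature vectors.
class_a = rng.normal(0.0, 1.0, size=(50, 4))
class_b = rng.normal(3.0, 1.0, size=(50, 4))

m_a, m_b = class_a.mean(axis=0), class_b.mean(axis=0)
# Within-class scatter S_w: sum of the per-class scatter matrices.
s_w = ((class_a - m_a).T @ (class_a - m_a)
       + (class_b - m_b).T @ (class_b - m_b))
# Fisher's linear discriminant direction: w = S_w^-1 (m_a - m_b).
w = np.linalg.solve(s_w, m_a - m_b)

# Projecting onto w separates the two classes along one axis.
proj_a, proj_b = class_a @ w, class_b @ w
print(proj_a.mean() > proj_b.mean())  # True: classes separated
```

In a face recognition setting each "feature vector" would be a (PCA-reduced) pixel vector, and the resulting projection directions are the Fisher Faces.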
2.2.5. Template Matching
In a simple version of template matching, a test image, represented as a two-dimensional
array of intensity values, is compared using a suitable metric, such as the Euclidean
distance, with a single template representing the whole face. There are several other,
more sophisticated versions of template matching for face recognition. One can use
more than one face template from different viewpoints to represent an individual's face. A
face from a single viewpoint can also be represented by a set of multiple distinctive
smaller templates. The gray-level face image may also be suitably processed before
matching. Brunelli and Poggio automatically selected a set of four feature templates,
i.e., the eyes, nose, mouth, and the whole face, for all of the available faces. They
compared the performance of their geometrical matching algorithm and template
matching algorithm on the same database of faces, which contains 188 images of 47
individuals.
The template matching was superior in recognition (100 percent recognition rate) to
geometrical matching (90 percent recognition rate) and was also simpler. Since the
principal components (also known as eigenfaces or eigenfeatures) are linear combinations
of the templates in the database, the technique cannot achieve better results than
correlation, but it may be less computationally expensive. One drawback of template
matching is its computational complexity. Another problem lies in the description of
these templates. Since the recognition system has to be tolerant of certain discrepancies
between the template and the test image, this tolerance might average out the differences
that make individual faces unique. In general, template-based approaches are a more
logical approach than feature matching.
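The simple whole-face version described above can be sketched as follows (synthetic intensity arrays stand in for aligned face images):

```python
import numpy as np

rng = np.random.default_rng(1)
# A tiny gallery of three 8x8 "face" templates (intensity arrays).
gallery = rng.integers(0, 256, size=(3, 8, 8)).astype(float)

# A test image: template 1 plus mild pixel noise.
test = gallery[1] + rng.normal(0.0, 5.0, size=(8, 8))

# Compare the test image against each template using the
# Euclidean distance over the raw intensity values.
dists = [np.linalg.norm(test - t) for t in gallery]
best = int(np.argmin(dists))
print(best)  # matches template 1
```

The multi-template variants mentioned above simply repeat this comparison per viewpoint or per facial region and combine the scores.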
2.2.6. Neural Network Approach
The attractiveness of using neural networks lies in their non-linearity. Hence, the feature
extraction step may be more efficient than the linear Karhunen-Loève methods. One of
the first artificial neural network (ANN) techniques used for face recognition was a
single-layer adaptive network called WISARD, which contains a separate network for
each stored individual. The way the neural network structure is constructed is crucial for
successful recognition, and is very much dependent on the intended application.
For face detection, multilayer perceptrons and convolutional neural networks have been
applied. For face verification, the Cresceptron multi-resolution pyramid structure has
been used. Lawrence et al. proposed a hybrid neural network, which combined local
image sampling,
a self-organizing map (SOM) neural network, and a convolutional neural network. The
SOM provides a quantization of the image samples into a topological space where inputs
that are nearby in the original space are also nearby in the output space, thereby providing
dimension reduction and invariance to minor changes in the image sample. The
convolutional network extracts successively larger features in a hierarchical set of layers
and provides partial invariance to translation, rotation, scale, and deformation. The
authors reported 96.2 percent correct recognition on the ORL database of 400 images of
40 individuals. The classification time is less than 0.5 seconds, but the training time is as
long as 4 hours. Lin et al. used a probabilistic decision-based neural network (PDBNN),
which inherited its modular structure from its predecessor, the decision-based neural
network (DBNN).
The PDBNN can be applied effectively as:
1) Face detector: finds the location of a human face in a cluttered image;
2) Eye localizer: determines the positions of both eyes in order to generate meaningful
feature vectors; and
3) Face recognizer: a hierarchical neural network structure with non-linear basis
functions and a competitive credit-assignment scheme.
A PDBNN-based biometric identification system has the merits of both neural networks
and statistical approaches, and its distributed computing principle is relatively easy to
implement on a parallel computer. It was reported that the PDBNN face recognizer had
the capability of recognizing up to 200 people and could achieve up to a 96 percent
correct recognition rate in approximately 1 second. However, when the number of
persons increases, the computing expense becomes more demanding. In general, neural
network approaches encounter problems when the number of classes (i.e., individuals)
increases. Moreover, they are not suitable for single-model-image recognition tasks,
because multiple model images per person are necessary to train the systems to an
optimal parameter setting.
2.2.7. Stochastic Modeling
Stochastic modeling of non-stationary vector time series based on hidden Markov models
(HMMs) has been very successful for speech applications. Samaria and Fallside applied
this method to human face recognition. Faces were intuitively divided into regions such
as the eyes, nose, mouth, etc., which can be associated with the states of a hidden Markov
model. Since HMMs require a one-dimensional observation sequence and images are
two-dimensional, the images must be converted into either a 1D temporal sequence or a
1D spatial sequence. In one approach, a spatial observation sequence was extracted from
a face image by using a band sampling technique. Each face image was represented by a
1D vector series of pixel observations. Each observation vector is a block of L lines, and
there is an M-line overlap between successive observations. An unknown test image is
first sampled to an observation sequence. Then, it is matched against every HMM in the
model face database (each HMM represents a different subject). The match with the
highest likelihood is considered the best match, and the relevant model reveals the
identity of the test face. The recognition rate of the HMM approach is 87 percent using
the ORL database consisting of 400 images of 40 individuals. A pseudo-2D HMM was
reported to achieve a 95 percent recognition rate in preliminary experiments. Its
classification time and training time were not given (believed to be very expensive), and
the choice of parameters had been based on subjective intuition.
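The band sampling step itself is easy to sketch. The function name and the concrete values of L (lines per block) and M (overlap) below are our own illustrative choices:

```python
import numpy as np

def band_sample(image, block_lines=4, overlap=2):
    """Turn a 2D image into a 1D sequence of observation vectors:
    each observation is a block of `block_lines` rows, and successive
    blocks overlap by `overlap` rows (L lines with an M-line overlap)."""
    step = block_lines - overlap
    h, _ = image.shape
    obs = []
    for top in range(0, h - block_lines + 1, step):
        block = image[top:top + block_lines, :]
        obs.append(block.ravel())          # flatten the block to a vector
    return np.array(obs)

img = np.arange(64, dtype=float).reshape(8, 8)
seq = band_sample(img, block_lines=4, overlap=2)
print(seq.shape)  # (3, 32): three observations of 4x8 pixels each
```

Each such sequence would then be scored against every subject's HMM, and the highest-likelihood model names the subject.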
2.2.8. N-tuple Classifiers
Conventional n-tuple systems have the desirable features of super-fast single-pass
training, super-fast recognition, conceptual simplicity, straightforward hardware and
software implementations, and accuracy that is often competitive with other more
complex, slower methods. Due to these attractive features, n-tuple methods have been the
subject of much research. In conventional n-tuple-based image recognition systems, the
locations specified by each n-tuple are used to identify an address in a look-up table. The
contents of this address either use a single bit to indicate whether or not the address was
accessed during training, or store a count of how many times that address occurred.
While the traditional n-tuple classifier deals with binary-valued input vectors, methods
using n-tuple systems with integer-valued inputs have also been developed. Allinson and
Kolcz developed a method of mapping scalar attributes into bit strings based on a
combination of CMAC and Gray coding methods. This method has the property that, for
small differences in the arithmetic values of the attributes, the Hamming distance between
the bit strings is equal to the arithmetic difference. For larger values of the arithmetic
distance, the Hamming distance is guaranteed to be above a certain threshold.
The continuous n-tuple method also shares some similarity at the architectural level with
the single-layer look-up perceptron of Tattersall et al., though they differ in the way the
class outputs are calculated, and in the training methods used to configure the contents of
the look-up tables (RAMs).
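The conventional binary n-tuple scheme described above can be sketched in pure Python. This toy (class names and parameters are our own) shows the single-pass training and the look-up-table scoring:

```python
import random

class NTupleClassifier:
    """Minimal binary n-tuple classifier: each n-tuple picks n fixed
    input locations; their bits form an address into a per-class
    look-up table marking which addresses occurred during training."""
    def __init__(self, input_len, n=3, num_tuples=8, seed=0):
        rng = random.Random(seed)
        self.tuples = [rng.sample(range(input_len), n)
                       for _ in range(num_tuples)]
        self.tables = {}                    # class -> one set per tuple

    def _addresses(self, bits):
        return [sum(bits[loc] << i for i, loc in enumerate(t))
                for t in self.tuples]

    def train(self, bits, label):           # single-pass training
        tabs = self.tables.setdefault(label,
                                      [set() for _ in self.tuples])
        for tab, addr in zip(tabs, self._addresses(bits)):
            tab.add(addr)

    def classify(self, bits):               # score = tables that match
        addrs = self._addresses(bits)
        scores = {lab: sum(a in tab for tab, a in zip(tabs, addrs))
                  for lab, tabs in self.tables.items()}
        return max(scores, key=scores.get)

clf = NTupleClassifier(input_len=16)
zeros, ones = [0] * 16, [1] * 16
clf.train(zeros, "dark")
clf.train(ones, "bright")
print(clf.classify(ones), clf.classify(zeros))
```

Training is one table update per pattern and recognition is a handful of look-ups, which is where the "super-fast" claims come from.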
In summary, no existing technique is free of limitations. Further efforts are required to
improve the performance of face recognition techniques, especially in the wide range of
environments encountered in the real world.
2.3. Basic Concepts
The project has been divided into the following modules/functions.
2.3.1. Preprocessing
All pictures from the face database (including the subject to be tested) are preprocessed,
because the system uses the intensity values of the images to train itself and to test an
image. The preprocessing is performed in the following steps:
2.3.1.1. Gray Scale Levels
Pictures are converted to 256 gray-scale levels, because the algorithm operates on
intensity values.
2.3.1.2. Geometric Normalization
Taking into account the face annotations produced by the OpenCV object detection
algorithm, this step resizes, scales, rotates and reflects the picture as needed in order to
make it centered and symmetric with respect to the face.
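The two preprocessing steps can be sketched with NumPy. The project itself relies on OpenCV; the helper names below and the nearest-neighbour rescaling are our own simplified illustration of the idea:

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an RGB image to 256-level gray using the standard
    luminance weights."""
    gray = rgb @ np.array([0.299, 0.587, 0.114])
    return np.clip(gray, 0, 255).astype(np.uint8)

def normalize_face(gray, box, out_size=32):
    """Crop the detected face box (x, y, w, h) and rescale it to a
    fixed out_size x out_size patch with nearest-neighbour sampling,
    so every face enters training at the same resolution."""
    x, y, w, h = box
    face = gray[y:y + h, x:x + w]
    rows = np.arange(out_size) * h // out_size
    cols = np.arange(out_size) * w // out_size
    return face[np.ix_(rows, cols)]

rgb = np.random.default_rng(2).integers(0, 256, size=(64, 64, 3))
gray = to_grayscale(rgb)
patch = normalize_face(gray, box=(8, 8, 48, 48))
print(gray.shape, patch.shape)  # (64, 64) (32, 32)
```

Rotation and reflection would be handled the same way, by remapping pixel coordinates before sampling.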
2.3.2. Object Detection
Object detection (face detection in particular, in our system) is a basic requirement of
face recognition systems. Some techniques extract certain features of the object of
interest from the image and then use heuristics to find combinations of those features in
the image. It is hard to extract such features, because sometimes the object in the image is
not properly placed, there is a shadow on half of the object, or half of the object is not
visible due to lighting conditions. Statistical model-based techniques perform quite well
here: a statistical model is first trained with multiple positive and negative samples and
then used to detect objects.
Face detection is a computer technology that determines the locations and sizes of human
faces in digital images. Face tracking is an extension of face detection applied to video.
Face-detection techniques include feature-based methods, using skin color to find face
segments, detecting a blinking-eye pattern in a video image, and template-based
matching. In the feature-based technique, the content of a given region of an image is
transformed into features, after which a classifier trained on example faces decides
whether that particular region of the image is a face or not.
Intel's OpenCV (Open Source Computer Vision) library uses such a statistical model to
detect objects. OpenCV provides low-level and high-level APIs for face/object detection.
A low-level API allows users to check an individual location within the image, using the
classifier cascade, to find whether it contains a face or not. Helper functions calculate
integral images and scale the cascade to a different face size (by scaling the coordinates
of all rectangles of Haar-like features), etc. Alternatively, the higher-level function
cvHaarDetectObjects does all of this. The figure shows the algorithm's results.
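The integral image mentioned above is what makes Haar-like features cheap to evaluate: any rectangle sum costs four look-ups, regardless of the rectangle's size. A minimal NumPy sketch of the idea (not OpenCV's internal code):

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y, :x], with a
    zero row/column prepended so rectangle sums need 4 look-ups."""
    return np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)

def rect_sum(ii, x, y, w, h):
    """Sum of the w x h rectangle whose top-left corner is (x, y)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
# A Haar-like feature is a difference of such rectangle sums.
print(rect_sum(ii, 1, 1, 2, 2))  # pixels 5+6+9+10 = 30
```

Scaling the cascade then amounts to scaling the (x, y, w, h) rectangles before these look-ups, exactly as the helper functions above do.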
2.3.3. Face Recognition
Following is a description of the technique used to recognize faces.
2.3.3.1. Introduction
We used eigenfaces for face recognition. This approach was first developed by Sirovich
and Kirby (1987) and later used by Matthew Turk and Alex Pentland.
A dataset, such as a digital image, consists of a large number of inter-related variables.
Using Principal Component Analysis, the dimensionality of the dataset is reduced while
retaining as much of the variation in the dataset as possible. The data are transformed to a
new set of uncorrelated variables called the principal components. These principal
components are ordered in such a way that the first few retain most of the variation
present in all of the original variables.
The PCA method is applied in face recognition by discriminating the image data into
several classes. There will be a lot of noise in the image caused by differing lighting
conditions, pose, and so on. Despite this noise, there are patterns that can be observed in
the image, such as the presence of eyes, mouth or nose in a face and the relative distances
between these objects. PCA's aim is to extract these patterns, or features, from the image.
In the domain of face recognition, the principal components (features) are referred to as
eigenfaces. The original image can be reconstructed by summing up all the eigenfaces
in the right proportions, adding more weight to the real features of the face. Certain
eigenfaces that do not contribute to the important face features are omitted. This is
necessary because of performance issues when doing large computations. This idea is
applied in our approach, where eigenfaces are first prepared from a set of training images
and stored. An eigenface representation is then prepared for the input test image and
compared with the training images. The matching image is the one with the most similar
weights in the training set.
2.3.3.2. Concept
Face recognition using the eigenfaces approach was initially developed by Sirovich and
Kirby and later used by Matthew Turk and Alex Pentland. They showed that a collection
of face images can be approximately reconstructed by storing a small collection of
weights for each face and a small set of standard pictures.
Using Principal Component Analysis on a set of human face images, a set of eigenfaces
can be generated. Eigenfaces are a set of eigenvectors used mostly in human face
recognition. The eigenvectors are a set of features that characterize the variation between
face images; they are derived from the covariance matrix of the probability distribution
over the high-dimensional vector space of human faces. The main idea here is to use only
the best eigenfaces, those that account for the major variance within the set of face
images. By using fewer eigenfaces, computational efficiency and speed are achieved.
The eigenfaces are the basis vectors of the eigenface decomposition.
Below are the steps of the face recognition process:
- A training set of same-resolution digital images is initially prepared. The images are
stored as a matrix with each row corresponding to an image; each image is represented
as a vector with (r x c) elements, where r and c are the number of rows and the number
of columns respectively.
- An average image is calculated from the individual training set images.
- For each image, the deviation from the average image is then calculated and stored.
- The eigenvectors and eigenvalues are then calculated. These represent the directions
in which the training set images differ from the average image.
- A new image is then subtracted from the average image and projected into eigenface
space.
- This is compared with the projection vectors of the training faces, and the matching
image is determined.
A face image is represented by a two-dimensional N by N array of intensity values, or a
vector of dimension N^2. An image of size 128 by 128 can thus be viewed as a vector of
dimension 16,384, or equivalently as one point in a 16,384-dimensional space. A group
of images then maps to a collection of points in this image space. The training images
chosen are all of the same dimensions. We need to find the vectors that best represent the
distribution of face images within this image space; these vectors, which define the
sub-space of face images, are termed the face space. Each vector of length N^2
represents an image of dimension N by N and is a linear combination of the original face
images. These vectors are termed eigenfaces, because they are eigenvectors of the
covariance matrix of the original face images and they have a face-like appearance.
The flow chart of the algorithm is shown in Figure 2.1.
[Figure 2.1 comprises two stages. Training set preparation: start with same-resolution
training images; store each image as a vector with (r x c) elements; calculate the average
image from the training images; calculate and store the deviation of each image from the
average image; calculate the covariance matrix; calculate its eigenvectors and
eigenvalues; store the principal components. Face recognition stage: subtract the average
image from the test input image; project into eigenface space; compare with the training
vectors; determine the minimum Euclidean distance, which gives the matching image.]
Figure 2.1: Flowchart of Face Recognition using Eigen Faces
2.3.3.3. Principal Component Analysis (PCA)
PCA is a way of identifying patterns in data, and of expressing the data in such a way as
to highlight their similarities and differences. Since patterns can be hard to find in data of
high dimension, where the luxury of graphical representation is not available, PCA is a
powerful tool for analyzing data. The other main advantage of PCA is that once you have
found these patterns in the data, you can compress the data, i.e. reduce the number of
dimensions, without much loss of information.
PCA is a technique widely used in face recognition. The principle on which it works is
that face pictures have common objects (nose, eyes, mouth, etc.). Every picture from the
face database is treated as a pixel vector. The algorithm computes the covariance matrix
of all these vectors and extracts the eigenvectors and corresponding eigenvalues. The
figure illustrates the eigenvalue spectrum of face data. The PCA algorithm retains only a
few eigenvectors to form a new basis (the eigenbasis).
PCA Training
The training is the common training performed with particular settings. This part of the
process computes the covariance matrix and the eigenbasis. Following are the PCA
steps:
Γ is an N^2 x 1 vector, corresponding to an N x N face image I. The idea is to represent
Φ (= Γ − mean face) in a low-dimensional space:

    Φ ≈ w1 u1 + w2 u2 + . . . + wK uK    (K << N^2)

Step 1: obtain the face images I1, I2, ..., IM (the training faces).
Step 2: represent every image Ii as a vector Γi.
Step 3: compute the average face vector Ψ:

    Ψ = (1/M) Σ Γi    (sum over i = 1 ... M)
Step 4: subtract the mean face:

    Φi = Γi − Ψ

Step 5: compute the covariance matrix C:

    C = (1/M) Σ Φn Φn^T = A A^T,  where A = [Φ1 Φ2 ... ΦM] is an N^2 x M matrix

Step 6: compute the eigenvectors ui of A A^T.
The matrix A A^T is very large (N^2 x N^2) ----> not practical!
Step 6.1: consider the matrix A^T A (an M x M matrix) instead.
Step 6.2: compute the eigenvectors vi of A^T A:

    A^T A vi = μi vi

What is the relationship between ui and vi? Multiplying both sides by A gives
A A^T (A vi) = μi (A vi). Thus, A A^T and A^T A have the same eigenvalues, and their
eigenvectors are related as follows: ui = A vi.
Note 1: A A^T can have up to N^2 eigenvalues and eigenvectors.
Note 2: A^T A can have up to M eigenvalues and eigenvectors.
Note 3: The M eigenvalues of A^T A (along with their corresponding eigenvectors)
correspond to the M largest eigenvalues of A A^T (along with their corresponding
eigenvectors).
Step 6.3: compute the M best eigenvectors of A A^T: ui = A vi
(important: normalize ui such that ||ui|| = 1)
Step 7: keep only K eigenvectors (corresponding to the K largest eigenvalues).
Representing faces in the basis
Each face Φi (minus the mean) in the training set can be represented as a linear
combination of the best K eigenvectors:

    Φi ≈ Σ wj uj    (sum over j = 1 ... K, with wj = uj^T Φi)

(We call the uj's eigenfaces.) Each normalized training face Φi is represented in this
basis by a vector:

    Ωi = [w1, w2, ..., wK]^T
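The training steps above can be sketched with NumPy on a toy set (all sizes and variable names are illustrative, not the project's code):

```python
import numpy as np

rng = np.random.default_rng(3)
M, N = 6, 8                        # 6 training faces of size N x N
faces = rng.random((M, N * N))     # each row: one face as an N^2 vector

mean_face = faces.mean(axis=0)                 # Psi (Step 3)
A = (faces - mean_face).T                      # columns Phi_i, N^2 x M

# Eigendecompose the small M x M matrix A^T A rather than the
# huge N^2 x N^2 matrix A A^T (Steps 6.1/6.2).
vals, V = np.linalg.eigh(A.T @ A)
order = np.argsort(vals)[::-1]                 # largest eigenvalues first
V = V[:, order]

K = 3                                          # keep the K best (Step 7)
U = A @ V[:, :K]                               # u_i = A v_i (Step 6.3)
U /= np.linalg.norm(U, axis=0)                 # normalize ||u_i|| = 1

weights = U.T @ A                              # Omega_i for each face
print(U.shape, weights.shape)                  # (64, 3) (3, 6)
```

Each column of `U` is one eigenface, and each column of `weights` is the Ω vector that represents one training face in the eigenbasis.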
2.3.3.4. Face Recognition Using Eigenfaces
Given an unknown face image Γ (centered and of the same size as the training faces),
follow these steps:
Step 1: normalize Γ: Φ = Γ − Ψ
Step 2: project Φ onto the eigenspace:

    wi = ui^T Φ,  i = 1 ... K

Step 3: represent Φ as: Ω = [w1, w2, ..., wK]^T
Step 4: find er = min over l of ||Ω − Ωl||
Step 5: if er < Tr, then Γ is recognized as face l from the training set.
The distance er is called the distance within the face space (difs).
Comment: we can use the common Euclidean distance to compute er; however, it has
been reported that the Mahalanobis distance performs better:

    ||Ω − Ωl|| = Σ (1/λi)(wi − wi^l)^2    (sum over i = 1 ... K)

(variations along all axes are treated as equally significant)
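A toy comparison of the two distances, with eigenvalues and weight vectors chosen arbitrarily for illustration:

```python
import numpy as np

lam = np.array([4.0, 1.0, 0.5])            # eigenvalues lambda_i
omega = np.array([1.0, 0.0, 0.5])          # test weight vector
omega_l = np.zeros(3)                      # a training weight vector

euclidean = np.sum((omega - omega_l) ** 2)
# Mahalanobis-style distance: divide each axis by its eigenvalue,
# so every axis contributes with equal significance.
mahalanobis = np.sum((omega - omega_l) ** 2 / lam)
print(euclidean, mahalanobis)  # 1.25 0.75
```

Under the Euclidean distance the high-variance first axis dominates; dividing by λi removes that bias.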
2.3.3.5. Face Detection Using Eigenfaces
Given an unknown image Γ:
Step 1: compute Φ = Γ − Ψ
Step 2: compute the projection of Φ onto the face space:

    Φp = Σ wi ui    (sum over i = 1 ... K, with wi = ui^T Φ)
Step 3: compute ed = ||Φ − Φp||
Step 4: if ed < Td, then Γ is a face.
The distance ed is called the distance from face space (dffs).
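Both procedures, recognition via difs and detection via dffs, can be sketched together on toy data. The thresholds T_r and T_d below are arbitrary illustrative values, not tuned ones:

```python
import numpy as np

rng = np.random.default_rng(4)
N2, M, K = 64, 5, 3
train = rng.random((M, N2))                   # toy training faces
mean_face = train.mean(axis=0)                # Psi
A = (train - mean_face).T                     # columns Phi_i

V = np.linalg.eigh(A.T @ A)[1][:, ::-1][:, :K]
U = A @ V
U /= np.linalg.norm(U, axis=0)                # K eigenfaces u_i
omegas = U.T @ A                              # training Omega_l vectors

def recognize(img, T_r=0.5, T_d=5.0):         # illustrative thresholds
    phi = img - mean_face                     # Step 1: normalize
    w = U.T @ phi                             # Step 2: project (Omega)
    phi_p = U @ w                             # reconstruction from basis
    e_d = np.linalg.norm(phi - phi_p)         # dffs
    if e_d >= T_d:
        return None                           # not a face at all
    dists = np.linalg.norm(omegas - w[:, None], axis=0)
    l = int(np.argmin(dists))                 # difs: nearest training face
    return l if dists[l] < T_r else None

print(recognize(train[2]))                    # a training face matches itself
print(recognize(np.full(N2, 50.0)))           # far from face space -> None
```

The same projection serves both tests: its residual answers "is this a face?" (dffs) and its weight vector answers "whose face?" (difs).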
CHAPTER 3
System Analysis
3. Analysis
Requirements are capabilities and conditions to which the system, and more broadly the
project, must conform. A prime challenge of requirements work is to find, communicate
and remember (which usually means record) what is really needed, in a form that clearly
speaks to the client and the development team members.
3.1. Use Case Model
3.1.1. Use Case Diagram
A use case is a sequence of actions that provides a measurable value to an actor. Another
way to look at it: a use case describes a way in which a real-world actor interacts with
the system. In a system use case you include high-level implementation decisions.
System use cases can be written in both an informal and a formal manner. Use case
diagrams are created to visualize the relationships between actors and use cases. A use
case diagram is also used to capture the system functionality as seen by the user.
Figure 3.1: Use Case Diagram
3.1.2. Use Case Descriptions in Brief
In software engineering, a use case diagram in the Unified Modeling Language (UML) is
a type of behavioral diagram defined by, and created from, a use-case analysis. Its
purpose is to present a graphical overview of the functionality provided by a system in
terms of actors, their goals (represented as use cases), and any dependencies between
those use cases. The main purpose of a use case diagram is to show which system
functions are performed for which actor. The roles of the actors in the system can be
depicted.
Use case diagrams are formally included in two modeling languages defined by the
OMG. Both the UML and SysML standards define a graphical notation for modeling use
cases with diagrams. One complaint about the standards has been that they do not define
a format for describing these use cases. Generally, both the graphical notation and the
descriptions are important, as they document the use case, showing the purpose for which
an actor uses a system.
The use case diagram shows the position or context of the use case among other use cases. As
an organizing mechanism, a set of consistent, coherent use cases promotes a useful picture of
system behavior, a common understanding between the customer/owner/user and the
development team.
3.1.3. Use Case Description in Detail

Following are the use cases derived during analysis.
3.1.3.1. Use Case UC-01: Enrollment

Use Case ID: UC-01
Name: Enrollment
Primary Actor: Operator / Administrator
Secondary Actor: User
Stakeholders and Interests:
Operator: wants accurate and proper face samples of the person, i.e. the user.
User: wants his enrollment into the system.
Preconditions: The operator has initiated the system, and a camera is attached to the system.
Post Conditions: The person's face is detected, his accurate face samples are captured and stored, and his personal information is entered.
Main Success Scenario:
1. Operator enters the user's personal information.
2. User sits in front of the camera.
3. User is required to stay still and look directly towards the camera.
4. System detects the user's face in the snapshot of the user.
5. The face sample is saved if the face is detected accurately.
6. The sample is discarded if the face is not detected accurately. Steps 4-6 are repeated at least 4 times.
7. The user's face samples are stored on the disk.
Scenario Extensions:
1a. At any time, the system fails:
  1. Operator restarts the system.
2a. The camera is attached but it doesn't display the video:
  1. Make sure the device driver is installed.
  2. Make sure no other application is using the camera.
3a. The face is not detected:
  1. Lighting conditions should be sharp.
  2. The user is required to stay still and look directly towards the camera.
  3. There must not be two faces in front of the camera.
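The capture loop of steps 4-7 above can be sketched as follows. The report does not name a detection library, so this sketch treats the face detector as a pluggable callable (in practice it might be, e.g., an OpenCV Haar-cascade detector); the function names `enroll`, `grab_snapshot`, and `detect_face`, and the sample count of 4, are illustrative assumptions rather than the project's actual code.

```python
def enroll(grab_snapshot, detect_face, required_samples=4, max_attempts=20):
    """Collect face samples for one user, discarding failed detections.

    grab_snapshot: callable returning the next camera frame.
    detect_face:   callable returning the cropped face, or None if no
                   single face was detected accurately.
    """
    samples = []
    for _ in range(max_attempts):
        frame = grab_snapshot()      # step 4: take a snapshot of the user
        face = detect_face(frame)    # step 4: try to detect the face in it
        if face is not None:
            samples.append(face)     # step 5: keep an accurate sample
        # step 6: inaccurate detections are simply discarded
        if len(samples) >= required_samples:
            return samples           # step 7: ready to be stored on disk
    raise RuntimeError("could not capture enough accurate face samples")
```

The `max_attempts` cap keeps the loop from running forever when extension 3a applies (poor lighting, movement, or multiple faces), forcing the operator back to the corrective actions listed above.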
3.1.3.2. Use Case UC-02: Preprocessing

Use Case ID: UC-02
Name: Preprocessing
Primary Actor: Operator / Administrator
Secondary Actor: User
Stakeholders and Interests:
Operator: the face samples of all the enrolled users are preprocessed for the training of face recognition.
Preconditions: The operator has initiated the system. The face database already contains the images of all the employees.
Post Conditions: All the enrolled face samples are converted to grayscale, aligned, and scaled to a standard size.
Main Success Scenario:
1. System reads an image from the face database.
2. Converts it to grayscale.
3. Aligns it with respect to the face.
4. Scales it to a standard size.
5. The image is cropped using a mask or crop function such that only the face, from forehead to chin and cheek to cheek, is visible.
Steps 1-5 are repeated for all the images in the face database.
6. All the images are preprocessed.
Scenario Extensions:
1a. At any time, the system fails. To support recovery and ensure that the database has not been damaged:
  1. Operator restarts the system.
2a. Preprocessed images are not accurate:
  1. Lighting conditions should be sharp.
  2. Enroll the user again.
  3. The user is required to stay still and look directly towards the camera.
  4. There must not be two faces in front of the camera.
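A minimal sketch of steps 2, 4, and 5 (grayscale conversion, scaling, and cropping) using NumPy. The 92x112 target size, the luminance weights, and the crop fraction are illustrative assumptions, and alignment (step 3), which needs detected facial landmarks, is omitted.

```python
import numpy as np

def to_grayscale(rgb):
    """Step 2: weighted sum of the R, G, B channels (ITU-R BT.601 weights)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def scale(img, out_h, out_w):
    """Step 4: nearest-neighbour resize to a standard size."""
    h, w = img.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def crop_center(img, frac=0.8):
    """Step 5: keep the central region (forehead to chin, cheek to cheek)."""
    h, w = img.shape
    dh, dw = int(h * (1 - frac) / 2), int(w * (1 - frac) / 2)
    return img[dh:h - dh, dw:w - dw]

def preprocess(rgb, out_h=112, out_w=92):
    """Steps 2, 5, 4 applied in sequence to one face sample."""
    return scale(crop_center(to_grayscale(rgb)), out_h, out_w)
```

Running `preprocess` over every image in the face database corresponds to the "repeat for all images" step; the fixed output size is what makes the later PCA step possible, since every sample must flatten to a vector of the same length.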
3.1.3.3. Use Case UC-03: Training

Use Case ID: UC-03
Name: Training
Primary Actor: Operator / Administrator
Secondary Actor: User
Stakeholders and Interests:
Operator: wants accurate and proper training of the system.
Preconditions: The operator has initiated the system. The face database already contains the preprocessed images of all the employees.
Post Conditions: All the calculations are performed: PCA is applied to the images, the eigenvectors and eigenvalues are calculated, and the system is ready for recognition.
Main Success Scenario:
1. Difference images are calculated.
2. Preprocessing is applied.
3. PCA is applied on the training images.
4. Eigenvectors and eigenvalues are generated.
5. Euclidean distance is applied.
6. System is ready for recognition.
Scenario Extensions:
1a. At any time, the system fails. To support recovery and ensure that the database has not been damaged:
  1. Operator restarts the system.
2a. A user is deleted from the database:
  1. Delete his/her face samples from the database as well.
  2. Restart training.
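The training steps above (difference images, PCA, eigenvectors and eigenvalues) can be sketched with NumPy. This is the standard eigenfaces formulation using the small-covariance trick (eigen-decomposing the N x N matrix instead of the much larger pixel-by-pixel covariance); the function name, variable names, and the number of retained components are illustrative assumptions, not the project's actual code.

```python
import numpy as np

def train(images, num_components=10):
    """images: N preprocessed grayscale face images of identical shape.

    Returns the mean face, the eigenfaces, and each training image's
    projection (weight vector) into the eigenface subspace.
    """
    A = np.array([img.ravel() for img in images], dtype=float)  # N x D
    mean_face = A.mean(axis=0)
    diffs = A - mean_face                    # step 1: difference images
    # small-matrix trick: eigenvectors of the N x N matrix diffs @ diffs.T
    evals, evecs = np.linalg.eigh(diffs @ diffs.T)
    order = np.argsort(evals)[::-1][:num_components]   # largest first
    eigenfaces = diffs.T @ evecs[:, order]             # D x k basis
    eigenfaces /= np.linalg.norm(eigenfaces, axis=0)   # unit length
    weights = diffs @ eigenfaces             # N x k stored projections
    return mean_face, eigenfaces, weights
```

The stored `weights` are what the testing use case (UC-04) compares against by Euclidean distance; after any enrollment or deletion the whole routine must be rerun, which is exactly the "restart training" action in extension 2a.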
3.1.3.4. Use Case UC-04: Testing

Use Case ID: UC-04
Name: Testing
Primary Actor: Operator / Administrator
Secondary Actor: User
Stakeholders and Interests:
Operator: wants accurate and proper testing of the system.
User: wants to be recognized by the system.
Preconditions: Training has been performed, and the eigenvectors and eigenvalues have been generated. A camera is attached to the computer.
Post Conditions: The user is recognized and his personal information is returned. If he is not recognized, a proper message is displayed.
Main Success Scenario:
1. User sits in front of the camera.
2. User is required to stay still and look directly towards the camera.
3. System detects the user's face in the snapshot of the user.
4. System preprocesses the face sample.
5. System starts matching the face sample with the enrolled ones.
6. If matched, the user's personal information is returned.
7. If not matched, a "not found" message is displayed.
8. Steps 1-7 may be repeated until the user is recognized.
Scenario Extensions:
1a. At any time, the system fails. To support recovery and ensure that the database has not been damaged:
  1. Operator restarts the system.
2a. A user is deleted from the database:
  1. Delete his face samples from the database as well.
  2. Restart training.
3a. The camera is attached but it doesn't display the video:
  1. Make sure the device driver is installed.
  2. Make sure no other application is using the camera.
4a. The face is not detected:
  1. Lighting conditions should be sharp.
  2. The user is required to stay still and look directly towards the camera.
  3. There must not be two faces in front of the camera.
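Steps 4-7 of the testing scenario (project the probe face into the eigenface subspace and match by Euclidean distance) can be sketched as follows. The distance threshold is an illustrative assumption, since the report does not specify one; `mean_face`, `eigenfaces`, and `weights` are the outputs of the training step described in UC-03.

```python
import numpy as np

def recognize(probe, mean_face, eigenfaces, weights, labels, threshold=1e3):
    """Return the label of the closest enrolled face, or None if no match.

    probe:      preprocessed grayscale face image (2-D array).
    weights:    N x k projections of the enrolled faces (from training).
    labels:     the N identities corresponding to the rows of `weights`.
    """
    # step 4/5: project the probe into face space
    w = (probe.ravel() - mean_face) @ eigenfaces
    # step 5: Euclidean distance to every enrolled weight vector
    dists = np.linalg.norm(weights - w, axis=1)
    best = int(np.argmin(dists))
    # step 6: matched -> identity; step 7: too far -> "not found"
    return labels[best] if dists[best] <= threshold else None
```

Returning `None` corresponds to displaying the "not found" message; in practice the threshold would be tuned on enrolled data, since too large a value misidentifies strangers and too small a value rejects genuine users.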
3.1.4. System Sequence Diagrams

A sequence diagram maps a scenario described by a use case in step-by-step detail to define how objects collaborate to achieve your application's goals.

A lifeline in a sequence diagram represents an object and shows all its points of interaction with other objects in events that are important to it. Lifelines start at the top of a sequence diagram and descend vertically to indicate the passage of time. Interactions between objects, messages and replies, are drawn as horizontal arrows connecting lifelines. In addition, boxes known as combined fragments are drawn around sets of arrows to mark alternative actions, loops, and other control structures.

Although UML sequence diagrams are typically used to describe object-oriented software systems, they are also extremely useful as system engineering tools for designing system architectures, in business process engineering as process flow diagrams, as message sequence charts and call flows for telecom/wireless system design, and for protocol stack design and analysis. The following are the sequence diagram elements:
Actor: represents an external person or entity that interacts with the system.

Object: represents an object in the system or one of its components.

Separator: represents an interface or boundary between subsystems, components, or units (e.g., air interface, Internet, network).

Call Message: a call (procedure) message between header elements.

Return Message: a return message between header elements.

Free Note: a documentation note that is free-flowing and can be placed anywhere in the diagram.
Following are the system sequence diagrams of the corresponding use cases.
3.1.4.1. SSD1: Enrollment
Figure 3.2: SSD1 Enrollment
3.1.4.2. SSD2: Preprocessing
Figure 3.3: SSD2 Preprocessing
3.1.4.3. SSD3: Training
Figure 3.4: SSD3 Training
3.1.4.4. SSD4: Testing
Figure 3.5: SSD4 Testing
3.2. Domain / Conceptual Model

A Domain Model is a high-level conceptual model defining the physical and abstract objects in an area of interest to the project. It can be used to document the relationships between, and the responsibilities of, conceptual classes (that is, classes that represent the concept of a group of things rather than classes that define a programming object). It is also useful for defining the terms of a domain.
Figure 3.6: Domain Model
3.3. Activity Diagram

An activity diagram is a simple and intuitive illustration of what happens in a workflow, what activities can be done in parallel, and whether there are alternative paths through the workflow. Activity diagrams as defined in the Unified Modeling Language are derived from various techniques for visually illustrating workflows.

In the Unified Modeling Language, activity diagrams can be used to describe the business and operational step-by-step workflows of components in a system. In SysML the activity diagram has been extended to indicate flows among steps that convey physical elements (e.g., gasoline) or energy (e.g., torque, pressure).
3.3.1. Admin Side Activity Diagram
Figure 3.7: Admin Side Activity Diagram
3.3.2. User Side Activity Diagram
Figure 3.8: User Side Activity Diagram
CHAPTER 4
System Design
4. System Design

Software design sits at the technical kernel of the software engineering process and is applied regardless of the development paradigm and area of application. Design is the first step in the development phase for any engineered product or system. The designer's goal is to produce a model or representation of an entity that will later be built. Once the system requirements have been specified and analyzed, system design is the first of the three technical activities (design, code, and test) that are required to build and verify software.

The importance of design can be stated with a single word: quality. Design is the place where quality is fostered in software development. Design provides us with representations of software that can be assessed for quality. Design is the only way that we can accurately translate a customer's view into a finished software product or system. Software design serves as a foundation for all the software engineering steps that follow. Without a strong design we risk building an unstable system, one that will be difficult to test and whose quality cannot be assessed until the last stage.

During design, progressive refinements of the data structure, program structure, and procedural details are developed, reviewed, and documented. System design can be viewed from either a technical or a project management perspective. From the technical point of view, design comprises four activities: architectural design, data structure design, interface design, and procedural design. A good design must have the following characteristics:

- In the design phase of the software, alternative approaches must be considered based on the requirements.
- Design should be traceable to the analysis model.
- Design should exhibit uniformity and integration.
- Design should be structured to accommodate change.
- Design should be reviewed to minimize errors.
- Design should ensure the accurate translation of the requirements.
- Design must be readable and understandable.
- Design should provide a complete picture of the software.
- Design forms the basis for programming and maintenance.
The design of the system is an important stage during system development. The following components, related to this project, are included in the system design phase.
4.1. Class Diagram

Class or structural diagrams define the basic building blocks of a model. They are used for static object modeling, describing what attributes and behavior a class has.
Figure 4.1: Class Diagram
4.2. Entity Relationship Diagram

An entity-relationship model is an abstract and conceptual representation of data. Entity-relationship modeling is a database modeling method used to produce a type of conceptual schema or semantic data model of a system, often a relational database, and its requirements in a top-down fashion. Diagrams created by this process are called Entity-Relationship Diagrams.
Figure 4.2: Entity Relationship Diagram
4.3. Sequence Diagrams

A sequence diagram is a kind of interaction diagram that shows how processes operate with one another and in what order. It is a construct of a Message Sequence Chart. A sequence diagram shows object interactions arranged in time sequence. It depicts the objects and classes involved in the scenario and the sequence of messages exchanged between the objects needed to carry out the functionality of the scenario. Sequence diagrams are typically associated with use case realizations in the Logical View of the system under development. Sequence diagrams are sometimes called event diagrams or event scenarios.

A sequence diagram shows, as parallel vertical lines (lifelines), the different processes or objects that live simultaneously, and, as horizontal arrows, the messages exchanged between them in the order in which they occur. This allows the specification of simple runtime scenarios in a graphical manner.

Although UML sequence diagrams are typically used to describe object-oriented software systems, they are also extremely useful as system engineering tools for designing system architectures, in business process engineering as process flow diagrams, as message sequence charts and call flows for telecom/wireless system design, and for protocol stack design and analysis. Following are the sequence diagrams of the corresponding use cases.
4.3.1. SD1: Enrollment
Figure 4.3: Sequence Diagram of Enrollment
4.3.2. SD2: Preprocessing
Figure 4.4: Sequence Diagram of Preprocessing
4.3.3. SD3: Training
Figure 4.5: Sequence Diagram of Training
4.3.4. SD4: Testing
Figure 4.6: Sequence Diagram of Testing
CHAPTER 5
Implementation
5. Implementation

This chapter gives an overview of the technologies used to develop this project. Implementation is the important phase of the software development life cycle where thoughts and ideas are given a physical existence. It is just like making your dreams come true. It is the time when software development progresses in full swing. An application is the result of a successful implementation.

The implementation (software) perspective describes software implementations in a particular technology (such as Java). In the UP, implementation means programming and building the system, not deploying it.

During implementation, developers translate the object model into source code. This includes implementing the attributes and methods of each object and integrating all the objects such that they function as a single system. The implementation activity spans the gap between the detailed object design model and a complete set of source code files that can be compiled together.

There are three types of implementation:

- Implementation of a computer system to replace a manual system. The problems encountered are converting files, training users, and verifying printouts for integrity.
- Implementation of a new computer system to replace an existing one. This is usually a difficult conversion; if not properly planned, there can be many problems.
- Implementation of a modified application to replace an existing one using the same computer. This type of conversion is relatively easy to handle, provided there are no major changes in the files.

Implementation in this project is done in all modules. In the first module, user-level identification is performed: every user is identified as genuine or not before being allowed to access the database, and a session is generated for the user. Illegal use of any form is strictly prevented.
5.1. T