
Page 1: Group Sparse Autoencoder for Extracting Latent Fingerprint Minutia Patterns (faculty.iitmandi.ac.in/~aditya/cs671/cs671_2017/data/...)

Group Sparse Autoencoder for Extracting Latent Fingerprint Minutia Patterns

Reference Paper: Group Sparse Autoencoder
Published in Image and Vision Computing, 2017

Presented by
Avantika Singh

School of Computing and Electrical Engineering (SCEE)
Indian Institute of Technology, Mandi

May 9, 2017

Page 2

Outline

Introduction

Problem Statement

Autoencoders

Regularization

Group Sparse Autoencoders (GSAE)

Majorization and Minimization

Proposed System

Experimental Results

Conclusion

References

Avantika Singh (SCEE, IIT-Mandi) [email protected] May 9, 2017

Page 3

Introduction

Fingerprints can be broadly classified into two categories.

Patent Fingerprints
Latent Fingerprints

Figure: Patent Fingerprint

Page 4

Introduction

A good fingerprint normally consists of 40 to 100 minutia points.

Ridge ending and ridge bifurcation are the two main types of minutiae.

Figure: (a) Ridge Ending (b) Ridge Bifurcation

Page 5

Introduction

Figure: Patent Fingerprint Image Patches (Image taken from [1])

Page 6

Problem Statement

The main objective is to extract features from latent fingerprints.

Latent fingerprints are fingerprints left on a surface by deposits of oils or perspiration from the finger.

They are usually not visible to the naked eye but can be detected using special techniques.

Figure: Latent Fingerprint

Page 7

Problem Statement

Our goal is to extract minutia patterns from latent fingerprints.

Latent fingerprint minutia patches lack a definite structure, and the number of minutia points is smaller, typically between 20 and 30.

Due to the non-uniform and uncertain variations in latent fingerprints, it is quite difficult to define a model for extracting minutiae.

Figure: Latent Fingerprint Image Patches (Image taken from [1])

Page 8

Autoencoders

An autoencoder is a neural network that tries to reconstruct its input. It mainly has three layers: an input layer, a hidden (encoding) layer, and a decoding layer.

The network is trained to reconstruct its input, which forces the hidden layer to learn a good representation of the inputs.

It learns a non-linear mapping function between the data and its feature space.

For building an autoencoder, just three things are needed:

An encoding function
A decoding function
A loss function measuring the information lost between the compressed representation of the data and the decompressed representation

It is unsupervised in nature.
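The three ingredients above can be sketched as a minimal single-hidden-layer autoencoder in plain NumPy. This is an illustrative toy, not the paper's implementation; the layer sizes, learning rate, and sigmoid activation are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 50 samples of dimension 8, one sample per column.
X = rng.standard_normal((8, 50))

n_in, n_hid = 8, 4                              # 8 -> 4 -> 8 bottleneck
W = 0.1 * rng.standard_normal((n_hid, n_in))    # encoding weights
U = 0.1 * rng.standard_normal((n_in, n_hid))    # decoding weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

loss0 = np.mean((U @ sigmoid(W @ X) - X) ** 2)  # reconstruction loss before training

lr = 0.05
for _ in range(2000):
    H = sigmoid(W @ X)                          # encode
    E = U @ H - X                               # reconstruction error (decoder is linear)
    # Gradient descent on the squared reconstruction loss
    grad_U = E @ H.T / X.shape[1]
    grad_W = ((U.T @ E) * H * (1 - H)) @ X.T / X.shape[1]
    U -= lr * grad_U
    W -= lr * grad_W

loss = np.mean((U @ sigmoid(W @ X) - X) ** 2)   # reconstruction loss after training
```

Training needs no labels (it is unsupervised): the loss compares the output only with the input itself, and after training the reconstruction loss is well below its initial value.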

Page 9

Autoencoders

Figure: Autoencoder block diagram representation

Page 10

Autoencoders

Figure: Autoencoder block diagram representation

Page 11

Major Application Areas of Autoencoder

Data Denoising

Dimensionality Reduction

Figure: Noisy Images

Figure: Reconstructed Images

Page 12

Regularized Autoencoder

It uses a loss function that encourages the model to have other properties besides the ability to copy its input to its output.

These properties include sparsity of the representation and robustness to noise or to missing values.

Page 13

Regularization

It is a technique used in machine learning to prevent overfitting by keeping the coefficients from fitting the training data perfectly.

Figure: Comparison between unregularized and regularized results (Images taken from Prof. Alexander Ihler's presentation)

Page 14

Regularization Functions

The regularizer is defined as

Figure: Different regularization functions (Images taken from Prof. Alexander Ihler's presentation)

Page 15

Regularization Functions

Total loss = Data loss + Regularizer term

A balance must be estimated between the data term and the regularization term.

Lasso generates a sparser solution than a quadratic regularizer.

Figure: (Images taken from Prof. Alexander Ihler's presentation)
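The sparsity contrast between Lasso and a quadratic (ridge) regularizer can be seen on a small least-squares problem. The data, the λ value, and the use of ISTA (proximal gradient with soft-thresholding) as the Lasso solver are illustrative assumptions, not from the slides.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 10))
x_true = np.zeros(10)
x_true[:3] = [2.0, -1.5, 1.0]                    # sparse ground truth: 3 of 10 nonzero
y = A @ x_true + 0.01 * rng.standard_normal(40)

lam = 0.5
step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1/L step size for gradient steps

# Ridge (quadratic regularizer): closed-form solution, shrinks but rarely zeros.
x_ridge = np.linalg.solve(A.T @ A + lam * np.eye(10), A.T @ y)

# Lasso: ISTA iterations, soft-thresholding produces exact zeros.
x_lasso = np.zeros(10)
for _ in range(500):
    g = A.T @ (A @ x_lasso - y)                  # gradient of the data term
    z = x_lasso - step * g
    x_lasso = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

zeros_ridge = int(np.sum(np.abs(x_ridge) < 1e-6))
zeros_lasso = int(np.sum(np.abs(x_lasso) < 1e-6))
```

The Lasso solution contains exact zeros on the irrelevant coefficients, while the ridge solution merely shrinks them toward zero.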

Page 16

Group Sparse Autoencoder (GSAE)

GSAE is a supervised version of the autoencoder, which utilizes the label information of the data.

The learning is performed such that the features of a single class will have the same sparsity signature.

This is achieved by incorporating L2,1 norm regularization.

The L2,1 norm of a matrix M ∈ R^(n×m) is given as:

||M||_{2,1} = Σ_{j=1}^{m} ||m_j||_2, where m_j denotes the j-th column of M.

That is, the L2,1 norm of a matrix is the sum of the Euclidean norms of the columns of the matrix.
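Following the slide's definition (sum of the Euclidean norms of the columns), the L2,1 norm is a one-liner in NumPy; the function name and example matrix are illustrative:

```python
import numpy as np

def l21_norm(M):
    """Sum of the Euclidean (L2) norms of the columns of M."""
    return float(np.sum(np.linalg.norm(M, axis=0)))

M = np.array([[3.0, 0.0],
              [4.0, 0.0]])
# Column 0 has norm sqrt(3**2 + 4**2) = 5, column 1 has norm 0,
# so the L2,1 norm of M is 5.0.  An entirely zero column adds nothing,
# which is why this norm encourages whole groups of entries to vanish.
```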

Page 17

Group Sparse Autoencoder (GSAE)

Let X be the input data, as shown in the figure below.

Figure: Original Data

Here, C is the number of classes.

Data is organized such that all the data columns belonging to class 1appear first, followed by data columns of class 2, and so on.
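This class-wise column ordering can be obtained with a stable sort on the labels; the toy matrix and label vector below are hypothetical:

```python
import numpy as np

# Hypothetical toy data: one sample per column, y holds each column's class.
X = np.arange(12, dtype=float).reshape(3, 4)    # 4 samples of dimension 3
y = np.array([1, 0, 1, 0])

order = np.argsort(y, kind="stable")            # class-0 columns first, then class-1
X_grouped = X[:, order]                         # columns reordered by class
y_grouped = y[order]
# order is [1, 3, 0, 2], so all class-0 samples now precede class-1 samples
```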

Page 18

Group Sparse Autoencoder (GSAE)

The loss function J is defined as:

J(W, U) = ||X − Uφ(WX)||² + λ R(W)

Here X is the input data, W is the encoding weights, U is the decoding weights, φ is any non-linear activation function, and R(W) is a regularization term; here it is the L2,1 norm.

Figure: Block diagram

Page 19

Group Sparse Autoencoder (GSAE)

Loss function:

Here the inner L2 norm promotes a dense solution within the selected rows, whereas the outer L1 norm enforces sparsity in selecting the rows.

In this way the regularizer enforces group sparsity within each class by adding the constraint that the features from the same group/class should have a similar sparsity signature.

This makes the optimization supervised, as information regarding the class labels is utilized during training; this is what makes GSAE a supervised version of the autoencoder.
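One way such an objective could be evaluated is sketched below: reconstruction error plus an L2,1 penalty (inner L2 over each row, outer sum across rows) applied to the hidden features of each class separately. This is an illustrative reading of the loss, not the paper's exact code; the function names, sizes, and default λ are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def l21(M):
    # Inner L2 norm over each row, outer L1 (plain sum) across rows.
    return np.sum(np.linalg.norm(M, axis=1))

def gsae_loss(X, y, W, U, lam=0.1):
    """Reconstruction error plus a per-class L2,1 penalty on hidden features."""
    H = sigmoid(W @ X)                           # hidden features, one column per sample
    recon = np.sum((X - U @ H) ** 2)             # squared reconstruction error
    penalty = sum(l21(H[:, y == c]) for c in np.unique(y))
    return recon + lam * penalty

# Tiny check with zero weights: sigmoid(0) = 0.5 everywhere.
X = np.eye(2)
y = np.array([0, 1])
val = gsae_loss(X, y, np.zeros((2, 2)), np.zeros((2, 2)), lam=1.0)
# recon = 2.0 and each class contributes 1.0 to the penalty, so val = 4.0
```

Because the penalty zeroes whole rows of each per-class feature block, samples of the same class end up using the same subset of hidden units, i.e. they share a sparsity signature.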

Page 20

Majorization and Minimization

The original loss function J is defined as

This function is non-convex in nature, so we can split it into two parts:

Step 1 is a simple linear least-squares regression problem and can be easily solved.

For solving Step 2, the majorization-minimization technique is used.

Page 21

Majorization and Minimization

Majorization Step: Let J(x) be the function to be minimized. Start with an initial point xk (at k = 0). A smooth function Gk(x) is constructed through xk that has a higher value than J(x) for all values of x apart from xk, at which the two values are equal.

Figure: Original Function (Image taken from [2])

Page 22

Majorization and Minimization

The function Gk(x) is constructed such that it is smooth and easy to minimize.

Figure: Majorization (Image taken from [2])

Page 23

Majorization and Minimization

Minimization Step: At each step, minimize Gk(x) to obtain the next iterate xk+1. A new Gk+1(x) is then constructed through xk+1, which is minimized to obtain the next iterate xk+2. In this way the final solution is obtained iteratively.
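The two MM steps can be illustrated on a small non-smooth problem: minimizing J(x) = (x − 3)² + 2|x|, whose minimizer is x = 2. Majorizing |x| at xk by the quadratic x²/(2|xk|) + |xk|/2 gives a smooth surrogate Gk whose minimizer has a closed form. This generic example is an assumption for illustration, not the paper's derivation.

```python
def J(x):
    return (x - 3.0) ** 2 + 2.0 * abs(x)     # non-smooth objective, minimum at x = 2

# Majorizer of |x| at xk:  |x| <= x**2 / (2*abs(xk)) + abs(xk) / 2,
# with equality at x = xk.  Minimizing the smooth surrogate
#   Gk(x) = (x - 3)**2 + 2 * (x**2 / (2*abs(xk)) + abs(xk) / 2)
# in closed form gives the update xk+1 = 3*abs(xk) / (abs(xk) + 1).
x = 1.0                                      # initial point x0 (nonzero)
history = [J(x)]
for _ in range(50):
    x = 3.0 * abs(x) / (abs(x) + 1.0)        # minimization step: next iterate
    history.append(J(x))                     # J decreases monotonically
```

Each surrogate minimization can only lower J, so the iterates descend monotonically toward the minimizer x = 2.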

Figure: Original Function (Image taken from [2])

Page 24

Proposed System

Figure: Proposed System Block Diagram (Image taken from [1])

Page 25

Experimental Results

The proposed system's accuracy is verified on two publicly available latent fingerprint datasets, namely NIST SD-27 and MOLF.

Here CCA is Correct Classification Accuracy, MDA is Minutia Detection Accuracy, and NMDA is Non-Minutia Detection Accuracy.

Figure: Classification results obtained on the NIST SD-27 and MOLF latent fingerprint datasets (Image taken from [1])

Page 26

Conclusion

A novel supervised regularization method for autoencoders using the L2,1 norm (GSAE) is proposed, which utilizes class labels to learn supervised features for the specific task at hand.

Using GSAE, automatic latent fingerprint minutia extraction is formulated as a binary classification problem.

Page 27

References

[1] Anush Sankaran, Mayank Vatsa, Richa Singh, and Angshul Majumdar, "Group Sparse Autoencoder," Image and Vision Computing, vol. 60, 2017.

[2] Vanika Singhal and Angshul Majumdar, "Majorization Minimization Technique for Optimally Solving Deep Dictionary Learning," Neural Processing Letters, 2017.

Page 28

THANK YOU
