Face Liveness Detection Based on Texture and Frequency Analyses

Gahyun Kim1, Sungmin Eum1, Jae Kyu Suhr2, Dong Ik Kim1, Kang Ryoung Park3 and Jaihie Kim1

1 School of Electrical and Electronic Engineering, Yonsei University, Republic of Korea
{ghrapture, eumsungmin, godsknight15, jhkim}@yonsei.ac.kr
2 Research Institute of Automotive Electronics and Control, Hanyang University, Republic of Korea
[email protected]
3 Division of Electronics and Electrical Engineering, Dongguk University, Republic of Korea
[email protected]

Abstract

This paper proposes a single image-based face liveness detection method for discriminating 2-D paper masks from live faces. Still images taken from live faces and 2-D paper masks were found to differ in terms of shape and detailedness. In order to effectively employ such differences, we exploit frequency and texture information by using the power spectrum and the Local Binary Pattern (LBP), respectively. In the experiments, three liveness detectors utilizing the power spectrum, LBP, and the fusion of the two were trained and tested on two databases consisting of images taken from live faces and four types of 2-D paper masks. One database was acquired with a web camera while the other was acquired with the camera of an automated teller machine. Experimental results show that the proposed methods can efficiently classify 2-D paper masks and live faces.

1. Introduction

As security concerns are on the rise in various areas, surveillance systems exploiting face information are expanding their domain. Such systems include face recognition systems for gate control applications [1] and facial occlusion detection built inside automated teller machines (ATMs) [2].

It is well known that face recognition systems validate whether the subject matches a previously enrolled facial image in order to identify users, while facial occlusion detection systems check for the presence of a recognizable facial image for subsequent criminal investigations [2]. These techniques, in turn, become vulnerable when attacks are made by those who use fake faces (masks). Consequently, attempts to attack or incapacitate such systems using fake faces have begun to be reported in the news media [3]. Such attacks can largely be categorized into three types: 2-D paper mask attacks, video attacks, and 3-D fake face (mask) attacks. Among them, 2-D paper masks are thought to be the easiest to use as an attacking measure, and thus relevant studies have been carried out in order to suppress such attacks [4, 5, 6, 7].

Countermeasures to defend systems from 2-D paper mask attacks can be divided into two approaches with respect to the users' cooperation: the intrusive approach requires the cooperation of the users, while the non-intrusive approach proceeds unnoticeably. Because the non-intrusive approach is more convenient for users, research on this approach is being conducted more widely. Methods based on the non-intrusive approach can again be categorized into multiple image-based methods and single image-based methods. While multiple image-based methods take 3-D facial information [8] or eye-blinking information [5] into account, single image-based ones deal with the innate characteristics of still images taken from live faces and masks [4, 6]. Between the two, methods that exploit single images have the advantage over multiple image-based methods in that the former can be employed whether or not a series of images is acquired. Single image-based methods discriminate live faces from masks by either decomposing given images into the necessary image components [4] or transforming them into the frequency domain [6].

In this paper, we propose a single image-based fake face detection method based on frequency and texture analyses for discriminating 2-D paper masks from live faces. For the frequency analysis, we employ a power spectrum-based method [9] which exploits not only the low-frequency information but also the information residing in the high-frequency regions. Moreover, the widely used Local Binary Pattern (LBP) [10] description method is employed for analyzing the textures of the given facial images. In addition, the fused decision values from the frequency-based classifier and the texture-based classifier are also utilized for detecting fake faces.

Previous fake face detection methods, which aim to supplement only face recognition systems, dealt with 2-D paper masks generated by photo-printing on photographic paper or by conventional printing on general printing paper. The proposed method, however, expands its scope by considering both face recognition and facial occlusion detection systems. Therefore, in the experiments, attacks made using faces from magazines or caricature images are also taken into account.

2. Differences between images taken from live faces and 2-D paper masks

Still images taken from live faces and 2-D paper masks were found to bear the following two differences. First, there is a difference in 3-D shape: while live faces manifest 3-D shape variation, 2-D paper masks show rather flat surfaces. Second, the detailedness of the image is another observable difference. Unlike images taken from live faces, those taken from 2-D paper masks tend to lose detail because the masks themselves are already printed reproductions of photos taken beforehand. That is, in order to obtain a still image of a 2-D paper mask, the original object goes through multiple procedures of capturing, printing, and recapturing.

Based on the differences mentioned above, we exploit frequency and texture information in discriminating images taken from live faces and those taken from 2-D paper masks. Using the frequency information is significant for two reasons. First, the difference in the existence of 3-D shape leads to a difference in the low-frequency regions, which are closely related to the illuminance component induced by the overall shape of a face. Second, the difference in detailedness between the live faces and the masks produces a disparity in the high-frequency information [4, 6]. Meanwhile, the texture information also has its own advantages in discriminating the masks from the live faces in two respects. Images taken from 2-D objects (especially their illuminance components) tend to suffer from a loss of texture richness compared to images taken from 3-D objects [7]. Moreover, the difference in detailedness brings about a difference in the micro-texture.

3. Proposed method

3.1. Frequency-based feature extraction

Extracting the frequency information from a given facial image proceeds as follows. First, the facial image is transformed into the frequency domain using the 2-D discrete Fourier transform. The original facial image is depicted in Figure 1(a), and the log-scale magnitude of the Fourier-transformed image is shown in Figure 1(b). Note that the Fourier-transformed result is shifted so that the zero-frequency component lies in the center of the spectrum.

The transformed result is then divided into several groups in the form of concentric rings, where the radius difference between each pair of neighboring rings is set to 1. A set of 32 concentric rings is thus generated from an image of size 64(width)×64(height). Each ring represents a corresponding region of the frequency band; that is, a ring with a small radius contains the low-frequency information of the given image.

Lastly, a 1-D feature vector is acquired by concatenating the average energy values of all the concentric rings. Since the average values vary greatly across the different regions of frequency components (concentric rings), min-max normalization [11] is employed. The resulting frequency feature is shown in Figure 1(c).
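As an illustration, the following Python sketch computes such a ring-averaged frequency feature under our reading of the description above; the function and parameter names (e.g., frequency_feature, num_rings) are ours, and the use of a log-scaled magnitude follows Figure 1(b).

```python
import numpy as np

def frequency_feature(face_img, num_rings=32):
    """1-D frequency feature: average log power in concentric rings of the
    shifted 2-D DFT, followed by min-max normalization. A 64x64 grayscale
    face crop is assumed, as in the paper."""
    f = np.fft.fftshift(np.fft.fft2(face_img.astype(np.float64)))
    power = np.log1p(np.abs(f))            # log-scale magnitude (power spectrum)
    h, w = power.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    radius = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
    feat = np.zeros(num_rings)
    for r in range(num_rings):              # ring r covers radii in [r, r+1)
        mask = (radius >= r) & (radius < r + 1)
        feat[r] = power[mask].mean() if mask.any() else 0.0
    # min-max normalization [11]
    return (feat - feat.min()) / (feat.max() - feat.min() + 1e-12)
```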

3.2. Texture-based feature extraction

For analyzing the texture characteristics of the images taken from live faces and masks, this paper utilizes the LBP [10], one of the most popular methods for describing the texture information of images. As shown in Eq. (1), LBP assigns a code to each pixel by considering the relative intensity differences between the pixel and its neighbors.

LBP_{P,R} = \sum_{p=0}^{P-1} s(g_p - g_c)\, 2^p, \qquad s(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases} \qquad (1)

Figure 1: Frequency-based feature extraction. (a) Original facial image, (b) log-scale magnitude of the Fourier-transformed image (power spectrum), (c) 1-D frequency feature vector extracted from the normalized power spectrum.


where P is the number of neighboring pixels, R is the distance from the center pixel to the neighboring pixels, g_c is the grayscale value of the center pixel, g_p denotes the grayscale values of the P equally spaced pixels on the circle of radius R, and s(x) is the thresholding function of x. Figure 2 illustrates how the neighboring pixels are placed according to P and R. Using various values of P and R enables multiresolution texture analysis. This paper utilizes the uniform LBP, with P and R set to 8 and 1, respectively.

Figure 3 depicts the process of acquiring the LBP feature vector from a given facial image. Figure 3(a) is the original facial image, and Figure 3(b) shows the LBP-coded image of Figure 3(a). The histogram of Figure 3(b), shown in Figure 3(c), is used as the feature vector for classification.
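A minimal sketch of this texture feature is given below, assuming an 8-bit grayscale crop; it implements the basic LBP of Eq. (1) with P = 8 and R = 1 and then pools the codes into the 59-bin uniform-pattern histogram of [10]. All function names are ours.

```python
import numpy as np

def lbp_8_1(image):
    """Basic LBP codes with P=8 neighbors at R=1 (Eq. (1))."""
    img = image.astype(np.float64)
    center = img[1:-1, 1:-1]
    # 8 neighbors at radius 1, visited in a fixed circular order
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center, dtype=np.int32)
    for p, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:img.shape[0] - 1 + dy,
                       1 + dx:img.shape[1] - 1 + dx]
        codes += ((neighbor - center) >= 0).astype(np.int32) << p
    return codes

def uniform_bin_mapping():
    """Map each 8-bit code to one of 58 uniform bins or a shared bin 58."""
    mapping = np.full(256, 58, dtype=np.int32)
    next_bin = 0
    for code in range(256):
        bits = [(code >> i) & 1 for i in range(8)]
        transitions = sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
        if transitions <= 2:                 # "uniform" pattern
            mapping[code] = next_bin
            next_bin += 1
    return mapping

def lbp_histogram(image):
    """59-bin normalized histogram of uniform LBP codes (the texture feature)."""
    codes = lbp_8_1(image)
    mapping = uniform_bin_mapping()
    hist = np.bincount(mapping[codes].ravel(), minlength=59).astype(np.float64)
    return hist / hist.sum()
```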

3.3. Fusion-based feature extraction

This paper utilizes Support Vector Machine (SVM) [12] classifiers to learn liveness detectors from the feature vectors generated by the power spectrum-based and LBP-based methods. The fusion-based method extracts a feature vector by combining the decision value of the SVM classifier trained on power spectrum-based feature vectors and that of the SVM classifier trained on LBP-based feature vectors [14]. These two decision values, produced by the different feature extraction methodologies, are concatenated into a 2-D feature vector and used for training a fusion-based liveness detector.
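The sketch below illustrates this decision-level fusion under stated assumptions: scikit-learn's SVC is used as a stand-in SVM implementation, the RBF parameters are assumed to be tuned elsewhere (the paper optimizes cost and gamma with a genetic algorithm [13]), and the hypothetical arrays X_freq, X_lbp, and y denote the two feature sets and the live/mask labels.

```python
import numpy as np
from sklearn.svm import SVC

def train_fusion_detector(X_freq, X_lbp, y, params_freq=None, params_lbp=None):
    """Train the two single-cue RBF SVMs, concatenate their decision values
    into a 2-D fused feature, and train a third SVM on that feature."""
    svm_freq = SVC(kernel='rbf', **(params_freq or {})).fit(X_freq, y)
    svm_lbp = SVC(kernel='rbf', **(params_lbp or {})).fit(X_lbp, y)
    d_freq = svm_freq.decision_function(X_freq)
    d_lbp = svm_lbp.decision_function(X_lbp)
    X_fused = np.column_stack([d_freq, d_lbp])   # 2-D fused feature vector
    svm_fusion = SVC(kernel='rbf').fit(X_fused, y)
    return svm_freq, svm_lbp, svm_fusion
```

In practice the fusion classifier would be trained on decision values from a held-out split rather than the same training images, to avoid overly optimistic fused scores; the paper does not spell this detail out, so the above is only a schematic of the concatenation step.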

4. Experiments

4.1. Database

In this paper, two different databases have been used in the experiments: the BERC Webcam Database (hereafter, Webcam Database) and the BERC ATM Database (hereafter, ATM Database). Each database contains a set of live face images along with four different sets of fake face images (photo, print, magazine, and caricature). While all the images in both databases were acquired with the same size of 640(width)×480(height), the apparatus used for acquiring the images was different. The images in the Webcam Database were captured with a conventional web camera, while those in the ATM Database were acquired using the built-in camera of an ATM [2]. Figure 4 shows sample images from the Webcam Database and the ATM Database.

Figure 3: Feature vector extraction process based on LBP. (a) Original facial image, (b) LBP-coded image, (c) histogram of the LBP-coded image.

Figure 4: Sample images of the Webcam Database and the ATM Database.


Table 1. The number of sequences included in the databases

             Live   Photo   Print   Magazine   Caricature
Webcam DB     210     360     360        360          360
ATM DB        240     240     240        240          240

Table 2. The number of images in each database

                    Live   Photo   Print   Magazine   Caricature
Webcam DB   Train    683    1245    1086       1064          233
            Test     725    1296    1238       1064          235
ATM DB      Train    897     949     665        713          471
            Test     900    1049     797        796          362

As shown in Figure 4, the main difference between the two databases lies in the resolution of the images. Since a transparent plastic cover is built in front of the ATM camera for protection purposes, the images in the ATM Database show relatively lower resolution than those in the Webcam Database.

All the images (either live or fake) in the Webcam Database were captured under 3 different illumination conditions: indoor without any additional lights, strong additional light towards the front, and strong additional light directed from the side. Live face images were obtained from 25 subjects, whereas each type of fake face image contains 120 different kinds of relevant fake faces (masks). The details of the 4 types of fake face images in the Webcam Database are as follows.
∙ Photo: The fake faces were generated by printing out the photographs of 20 subjects. The photos were taken under 3 different illumination conditions: indoor without any additional lights, strong additional light towards the frontal face, and strong additional light directed from the side. All the photos were developed on photographic paper with the sizes of ‘10.2cm × 15.2cm’ and ‘29.7cm × 21cm’ using the conventional method.
∙ Print: The identical set of images used in generating the ‘photo’ masks was printed out on ordinary printing paper using a color laser printer.
∙ Magazine: 60 face images with the size of ‘5~8cm’ and 60 with the size of ‘9~14cm’ were obtained from magazines. Note that we measured the size of a face as the distance between the upper line of the eyebrows and the bottom line of the lips.
∙ Caricature: 60 kinds of caricature images were obtained from the web. The images were printed in two different sizes of ‘5~8cm’ and ‘9~14cm’ using the color laser printer.

Unlike the Webcam Database, all the images (either live or fake) in the ATM Database were acquired under only the ordinary indoor lighting condition. Although indoor lighting is the only external illumination applied to the fake faces, the photos and prints already contain the illumination effects captured when the images were taken beforehand. While live face images were obtained from 20 subjects, fake face images were constructed using the identical sets of fake faces used in acquiring the Webcam Database.

Facial occlusion detection systems in recent studies [2, 15], devised to help prevent ATM-related crimes, are vulnerable in that they do not consider fake faces. Therefore, we constructed the ATM Database with fake faces showing no facial occlusions using a conventional ATM. Using the ATM Database, we show that the proposed method can detect fake faces with reasonable performance, thus compensating for the weakness of facial occlusion detection systems, which are otherwise likely to grant access to intruders using fake faces.

As shown in Table 1, the Webcam Database and the ATM Database include 1650 and 1200 sequences, respectively. Each sequence consists of 10 single images.

After applying a face detection process to all the images in the Webcam Database and the ATM Database, only the detected regions were cropped and categorized into train and test sets. These facial regions were normalized using the automatically detected positions of the two eyes and the mouth. Table 2 shows the number of faces in the train and test sets extracted from the two databases. Note that the train and test sets are constructed to be mutually exclusive.

4.2. Experimental results

The performances of the three proposed methods have been evaluated using the Webcam Database and the ATM Database. Using the training database shown in Table 2, the fake face detectors were obtained by employing the SVM classification method. For the SVM classifier, the RBF kernel was applied throughout the experiments, while the relevant parameters (cost and gamma) were optimized using a genetic algorithm [13]. The training database includes images of live faces and those of 2-D paper masks of four types (photo, print, magazine, and caricature). From the Webcam Database, 683 images of live faces and 3628 images of 2-D paper masks were used in the training. Meanwhile, for the ATM Database, 897 images of live faces and 2798 images of 2-D paper masks were included in the train set. Note that in the actual application, the type of the mask is unknown. Therefore, all four types of fake face images were included in a single train set in order to devise a single fake face detector.

Figure 5: ROC curves showing the performances of the three proposed methods using (a) the Webcam Database and (b) the ATM Database.


Table 3. The performances for the Webcam Database at the EER point

                         Frequency      LBP     Fusion
Live face                   88.13%   88.42%     91.57%
Paper mask   Photo          90.82%   88.89%     93.91%
             Print          86.11%   88.05%     90.31%
             Magazine       88.63%   86.47%     90.60%
             Caricature     81.70%   96.60%     89.36%
EER                         11.87%   11.58%      8.43%

Table 4. The performances for the ATM Database at the EER point

                         Frequency      LBP     Fusion
Live face                   94.57%   87.54%     95.58%
Paper mask   Photo          94.76%   89.71%     97.43%
             Print          98.62%   87.45%     97.87%
             Magazine       91.46%   81.28%     91.83%
             Caricature     91.99%   95.03%     93.65%
EER                          5.43%   12.46%      4.42%

Figure 5(a) depicts the Receiver Operating Characteristic (ROC) curves describing the performance of the three proposed methods on the Webcam Database. The gray line, black dotted line, and black solid line indicate the results of the frequency-based, LBP-based, and fusion-based methods, respectively. As shown in the figure, the frequency-based and LBP-based methods performed with similar equal error rates (EERs) of 11.87% and 11.58%, respectively, while the fusion-based method showed the best performance with an EER of 8.43%.
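For reference, one common way to read off an EER from a detector's decision values is sketched below; live_scores and mask_scores are hypothetical score arrays (higher meaning "more live"), and this is only an approximation of the EER point, not the exact evaluation protocol of the paper.

```python
import numpy as np

def equal_error_rate(live_scores, mask_scores):
    """Sweep a threshold over the observed scores and return the operating
    point where the false-accept rate on masks is closest to the
    false-reject rate on live faces."""
    thresholds = np.sort(np.concatenate([live_scores, mask_scores]))
    best_gap, best_eer = 1.0, 0.0
    for t in thresholds:
        frr = np.mean(live_scores < t)     # live faces rejected as masks
        far = np.mean(mask_scores >= t)    # masks accepted as live faces
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2
    return best_eer
```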

Table 3 lists the performances for each type of mask (Webcam Database) at the EER point. When photo and magazine masks are used, the frequency-based method performs better than the LBP-based one. This result, in which the LBP-based method falls behind the frequency-based one, seems to be caused by the smaller texture differences between the images of live faces and those of photos and magazines. That is, when photos and magazines with fine details are presented, the frequency-based method, which handles both the low-frequency (shape-related) and high-frequency (detail-related) information, performs better than the LBP-based method, whose strength lies in texture differences.

It is noteworthy that the low-frequency information is valuable for discriminating live face images from paper mask images since it is closely related to the shape of the object. Moreover, when an image is decomposed into its illuminance and reflectance parts, the former includes the shape-related information in the form of the surface normal.

Since the live face images and the paper mask images are originally acquired from 3-D and 2-D objects, respectively, they differ in the surface normal portion of the illuminance part. This difference can clearly be observed in images reconstructed with only the low-frequency components. Figures 6(b) and (d) depict the reconstructed images of the live face and paper mask images shown in Figures 6(a) and (c), respectively. These reconstructed images contain only the low-frequency components corresponding to the first 10 frequency indexes.

Figures 6(e) and (f) show the profile views of the 3-D intensity images of the reconstructed images. In these figures, the horizontal axis and the vertical axis represent the y-axis of the image and the intensity value, respectively. As can be seen, the reconstructed face of the paper mask image shows less variation than that of the live face image. In short, fake face images are more likely to be reconstructed with a nearly flat shape over the whole face area, in contrast to the live ones. These differences between live faces and paper masks bring about relatively high discriminative power in the low-frequency band.
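As a rough illustration of this reconstruction, the snippet below keeps only the low-frequency components of the shifted spectrum and inverts the transform; the circular cutoff of radius 10 is our interpretation of "the first 10 frequency indexes".

```python
import numpy as np

def low_frequency_reconstruction(face_img, cutoff=10):
    """Reconstruct an image from only its low-frequency components,
    as used for the Figure 6 comparison of live faces and paper masks."""
    f = np.fft.fftshift(np.fft.fft2(face_img.astype(np.float64)))
    h, w = f.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    mask = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2) <= cutoff  # low-pass mask
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))
```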

On the other hand, the LBP-based method is preferable to the frequency-based method when images captured from prints and caricatures are employed. This seems to be due to the fact that masks of prints and caricatures contain prominent differences in texture, and this effect is even more pronounced with the caricature masks. In the meantime, it is apparent that the fusion-based method carries the advantages of both the frequency-based and the LBP-based methods, which in turn seems to have brought about the best overall results among the three methods.

Figure 6: (a) Live face image, (b) image (a) reconstructed from low-frequency components, (c) fake face image, (d) image (c) reconstructed from low-frequency components, (e) 3-D intensity image of (b) (profile view), (f) 3-D intensity image of (d) (profile view).

Figure 5(b) depicts the ROC curves showing the performance of the three proposed methods on the ATM Database. The gray line, black dotted line, and black solid line indicate the results of the frequency-based, LBP-based, and fusion-based methods, respectively. As shown in the figure, the frequency-based and LBP-based methods performed with EERs of 5.43% and 12.46%, respectively, while the fusion-based method achieved the best performance with an EER of 4.42%.

The performances for each type of mask (ATM Database) at the EER point are included in Table 4. The performances of the frequency-based and LBP-based methods show a trend similar to that observed with the Webcam Database. However, an improvement in the frequency-based method and a degradation in the LBP-based method can be observed. The improvement of the frequency-based method seems to result from the fact that the ATM Database manifests almost no variation in lighting conditions compared to the Webcam Database. Less variation in lighting brings less variation in the illuminance component of the image, which keeps the low-frequency information stable and thus yields better performance.

On the other hand, the performance of the LBP-based method dropped. The reason seems to lie in the use of an actual ATM, which has a plastic cover located in front of the camera. This cover is likely to decrease the amount of light reaching the camera, and less light triggers the automatic gain control to increase the gain along with unwanted noise amplification. Due to the amplified noise added to the image, the texture information can be damaged, which brings about the degraded performance of the LBP-based method.

5. Conclusion

This paper proposed a single image-based liveness detection method for discriminating 2-D paper masks from live faces. In order to exploit the differences in shape and detailedness between live and fake facial images, frequency and texture information is extracted using the power spectrum and LBP, respectively. Experimental results show that the proposed methods can efficiently classify various 2-D paper masks and live faces. In the future, we plan to expand our research on liveness detection to deal with attacks using video and 3-D fake faces. We would also like to carry out additional experiments using publicly available databases and compare the proposed method with previous works.

Acknowledgement

This research was supported by a grant from the R&D Program (Industrial Strategic Technology Development) funded by the Ministry of Knowledge Economy (MKE), Republic of Korea. The authors are also deeply thankful to all interested persons of MKE and KEIT (Korea Evaluation Institute of Industrial Technology) (10040018, Development of 3D Montage Creation and Age-specific Facial Prediction System).

References

[1] X. Tan, S. Chen, Z. H. Zhou, and F. Zhang. Face Recognition from a Single Image per Person: A Survey. Pattern Recognition, 39(9):1725-1745, 2006.

[2] S. Eum, J. K. Suhr, and J. Kim. Face Recognizability Evaluation for ATM Applications with Exceptional Occlusion Handling. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 82-89, 2011.

[3] D. Ngo. Vietnamese security firm: Your face is easy to fake [Online], 2008. Available: http://news.cnet.com/8301-17938_105-10110987-1.html.

[4] X. Tan, Y. Li, J. Liu, and L. Jiang. Face Liveness Detection from a Single Image with Sparse Low Rank Bilinear Discriminative Model. In Proceedings of the European Conference on Computer Vision, pages 504-517, 2010.

[5] G. Pan, L. Sun, Z. Wu, and Y. Wang. Monocular Camera-based Face Liveness Detection by Combining Eyeblink and Scene Context. Journal of Telecommunication Systems, 47(3-4): 215-225, 2009.

[6] J. Li, Y. Wang, T. Tan, and A. Jain. Live Face Detection Based on the Analysis of Fourier Spectra. SPIE, Biometric Technology for Human Identification, 5404: 296-303, 2004.

[7] J. Bai, T.-T. Ng, X. Gao, and Y.-Q. Shi. Is Physics-based Liveness Detection Truly Possible with a Single Image? In Proceedings of IEEE International Symposium on Circuits and Systems, pages 3425-3428, 2010.

[8] K. Kollreider, H. Fronthaler, and J. Bigun. Non-intrusive Liveness Detection by Face Images. Image and Vision Computing, 27(3):233-244, 2009.

[9] H. S. Choi, R. C. Kang, K. T. Choi, A. T. B. Jin, and J. H. Kim. Fake-Fingerprint Detection using Multiple Static Features. Optical Engineering, 48(4), 2009.

[10] T. Ojala, and M. Pietikainen. Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(7): 971-987, 2002.

[11] A. K. Jain, K. Nandakumar, and A. Ross. Score normalization in multimodal biometric systems. Pattern recognition, 38(12): 2270-2285, 2005.

[12] C.-C. Chang, and C.-J. Lin. LIBSVM: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2:27:1--27:27, 2011. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.

[13] H. G. Jung and J. Kim. Constructing a Pedestrian Recognition System with a Public Open Database, without the Necessity of Re-training: an Experimental Study. Pattern Analysis and Applications, 13(2):223-233, 2010.

[14] B. Waske, J. A. Benediktsson. Fusion of Support Vector Machines for Classification of Multisensor Data. IEEE Transactions on Geoscience and Remote Sensing, 45(12):3858-3866, 2007.

[15] J. K. Suhr, S. Eum, H. G. Jung, G. Li, G. Kim and J. Kim. Recognizability assessment of facial images for automated teller machine applications. Pattern Recognition, Published online, 2011.