
High speed face recognition using DCT and Neural networks


CHAPTER 1 INTRODUCTION

1.1 Introduction to face recognition

Face recognition by humans is a high-level visual task for which it has been extremely difficult to construct detailed neurophysiological and psychophysical models. This is because faces are complex natural stimuli that differ dramatically from the artificially constructed data often used in both human and computer vision research. Thus, developing a computational approach to face recognition can prove to be very difficult indeed. In fact, despite the many relatively successful attempts to implement computer-based face recognition systems, we have yet to see one which combines speed, accuracy, and robustness to face variations caused by 3D pose, facial expressions, and aging.

The primary difficulty in analyzing and recognizing human faces arises because variations in a single face can be very large, while variations between different faces are quite small. That is, there is an inherent structure to a human face, but that structure exhibits large variations due to the presence of a multitude of muscles in a particular face. Given that recognizing faces is critical for humans in their everyday activities, automating this process would be very useful in a wide range of applications including security, surveillance, criminal identification, and video compression. This paper discusses a new computational approach to face recognition that, when combined with proper face localization techniques, has proved to be very efficacious. This section begins with a survey of the face recognition research performed to date. The proposed approach is then presented along with its objectives and the motivations for choosing it. The section concludes with an overview of the structure of the paper.

Face Recognition

A facial recognition system is a computer-driven application for automatically identifying a person from a digital image. It does so by comparing selected facial features from the live image against a facial database. It is typically used in security systems and can be compared to other biometrics such as fingerprint or iris recognition systems. The great advantage of a facial recognition system is that it does not require cooperation from the test subject. Properly

Dept. Of E.C.E

Sri Krishnadevaraya Engg. College,Gooty.

designed systems installed in airports, multiplexes and other public places can detect the presence of criminals in a crowd.

History of Face Recognition

The development and implementation of face recognition systems depends entirely on the development of computers, since without computers the efficient use of the algorithms is impossible; the history of face recognition therefore goes hand in hand with the history of computers. Research in automatic face recognition dates back at least to the 1960s. Bledsoe, in 1966, was the first to attempt semi-automated face recognition with a hybrid human-computer system that classified faces on the basis of fiducial marks entered on photographs by hand. Parameters for the classification were normalized distances and ratios among points such as eye corners, mouth corners, nose tip and chin point. Later work at Bell

Laboratories (Goldstein, Harmon and Lesk, 1971; Harmon, 1971) developed a vector of up to 21 features and recognized faces using standard pattern classification techniques. The chosen features were largely subjective evaluations (e.g. shade of hair, length of ears, lip thickness) made by human subjects, each of which would be difficult to automate. An early paper by Fischler and Elschlager (1973) attempted to measure similar features automatically. They described a linear embedding algorithm that used local feature template matching and a global measure of fit to find and measure facial features. This template matching approach has been continued and improved by the more recent work of Yuille, Cohen and Hallinan (1989). Their strategy is based on deformable templates, which are parameterized models of the face and its features in which the parameter values are determined by interaction with the image.

The connectionist approach to face identification seeks to capture the configurational, or gestalt-like, nature of the task. Kohonen (1989) and Kohonen and Lehtiö (1981) describe an associative network with a simple learning algorithm that can recognize (classify) face images and recall a face image from an incomplete or noisy version input to the network. Fleming and Cottrell (1990) extend these ideas using nonlinear units, training the system by back propagation. Stonham's WISARD system (1986) is a general pattern recognition device based on neural net principles. It has been applied with some success to binary face images, recognizing both identity and expression. Most connectionist systems dealing with faces treat the input image as a general 2-D pattern, and can make no explicit use of the configurational properties of a face. Moreover, some of these systems require an inordinate number of training examples to achieve a reasonable level of performance. Kirby
and Sirovich were among the first to apply principal component analysis (PCA) to face images and showed that PCA is an optimal compression scheme that minimizes the mean squared error between the original images and their reconstructions for any given level of compression. Turk and Pentland popularized the use of PCA for face recognition. They used PCA to compute a set of subspace basis vectors (which they called eigenfaces) for a database of face images and projected the images in the database into the compressed subspace. New test images were then matched to images in the database by projecting them onto the basis vectors and finding the nearest compressed image in the subspace (eigenspace). The initial success of eigenfaces popularized the idea of matching images in compressed subspaces. Researchers began to search for other subspaces that might improve performance. One alternative is Fisher's Linear Discriminant Analysis (LDA, a.k.a. fisherfaces). For any N-class classification problem, the goal of LDA is to find the N-1 basis vectors that maximize the interclass distances while minimizing the intraclass distances.

1.2 Outline of a Typical Face Recognition System

The acquisition module
This is the entry point of the face recognition process. It is the module where the face image under consideration is presented to the system. In other words, the user is asked to present a face image to the face recognition system in this module. An acquisition module can request a face image from several different environments: the face image can be an image file that is located on a magnetic disk, it can be captured by a frame grabber and camera, or it can be scanned from paper with the help of a scanner.

The pre-processing module
In this module, by means of early vision techniques, face images are normalized and, if desired, enhanced to improve the recognition performance of the system. Some or all of the pre-processing steps may be implemented in a face recognition system.

The feature extraction module
After performing some pre-processing (if necessary), the normalized face image is presented to the feature extraction module in order to find the key features that are going to be used for classification. In other words, this module is responsible for composing a feature vector that is rich enough to represent the face image.


The classification module
In this module, with the help of a pattern classifier, the extracted features of the face image are compared with the ones stored in a face library (or face database). After this comparison, the face image is classified as either known or unknown.

Principal component analysis, based on information theory concepts, seeks a computational model that best describes a face by extracting the most relevant information contained in that face. The eigenfaces approach is a principal component analysis method, in which a small set of characteristic pictures is used to describe the variation between face images. The goal is to find the eigenvectors (eigenfaces) of the covariance matrix of the distribution spanned by a training set of face images. Later, every face image is represented by a linear combination of these eigenvectors. Evaluation of these eigenvectors is quite difficult for typical image sizes, but an approximation that is suitable for practical purposes is also presented. Recognition is performed by projecting a new image into the subspace spanned by the eigenfaces and then classifying the face by comparing its position in face space with the positions of known individuals. The eigenfaces approach seems to be an adequate method for face recognition due to its simplicity, speed and learning capability. Experimental results are given to demonstrate the viability of the proposed face recognition method.
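The eigenfaces procedure described above can be sketched with NumPy. This is a minimal sketch, not the authors' implementation: the training images are random stand-ins and the subspace size k = 5 is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical training set: 10 "face" images of 8x8 pixels, flattened to vectors.
A = rng.random((64, 10))
mean_face = A.mean(axis=1, keepdims=True)
A0 = A - mean_face

# Eigenfaces are the left singular vectors of the centered data matrix.
U, s, Vt = np.linalg.svd(A0, full_matrices=False)
k = 5                      # retain the top-k eigenfaces
eigenfaces = U[:, :k]      # shape (64, k)

# Project the gallery and a probe image into face space.
gallery = eigenfaces.T @ A0                    # (k, 10)
probe = A[:, [3]]                              # reuse a training image as the probe
w = eigenfaces.T @ (probe - mean_face)         # (k, 1)

# Nearest neighbour in face space identifies the probe.
d = np.linalg.norm(gallery - w, axis=0)
print(int(np.argmin(d)))                       # the probe matches image 3
```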

1.3 Introduction to Digital Image Processing

An information-carrying function of time is called a signal. Real-time signals can be audio (voice) or video (image) signals. A still video frame is called an image; a moving image is called a video. The difference between digital image processing and signals and systems is that there is no time axis in DIP: the x and y coordinates in DIP are spatial coordinates, and a photo does not change with time.

What is an image?
Image: An image is defined as a two-dimensional function f(x, y), where x and y are spatial coordinates and the amplitude f at any point (x, y) is known as the intensity of the image at that point.


What is a pixel?
Pixel: A pixel (short for picture element) is a single point in a graphic image. Each such information element is not really a dot, nor a square, but an abstract sample. Each element of the image matrix is known as a pixel; for a binary image, dark = 0 and light = 1. A pixel with only 1 bit can represent only a black-and-white image. If the number of bits is increased, the number of gray levels increases and better picture quality is achieved. All naturally occurring images are analog in nature; they must be sampled and quantized to obtain a digital image. The more pixels an image has, the greater its clarity. An image is represented as a matrix in DIP, whereas in DSP we use only row matrices. A good image has about 1024 x 1024 pixels, i.e., 1k x 1k = 1M pixels.
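The relation between bits per pixel and gray levels, and the sampling/quantization step, can be checked directly. The intensity values below are illustrative, not from the text.

```python
import numpy as np

# The number of gray levels doubles with every extra bit per pixel.
for bits in (1, 4, 8):
    print(bits, 2 ** bits)   # 1 bit -> 2 levels (black/white), 8 bits -> 256 levels

# Quantizing an "analog" intensity in [0, 1) to 8 bits:
analog = np.array([0.0, 0.25, 0.5, 0.999])
digital = np.floor(analog * 256).astype(np.uint8)
print(digital)               # 0, 64, 128, 255
```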

1.4 Fundamental steps in DIP

Image acquisition : Digital image acquisition is the creation of digital images, typically from a physical object. A digital image may be created directly from a physical scene by a camera or similar device. Alternatively, it can be obtained from another image in an analog medium, such as photographs, photographic film, or printed paper, by a scanner or similar device. Many technical images, such as those acquired with tomographic equipment, side-looking radar, or radio telescopes, are actually obtained by complex processing of non-image data.

Image enhancement : The process of image acquisition frequently leads to image degradation due to mechanical problems, out-of-focus blur, motion, inappropriate illumination and noise. The goal of image enhancement is to start from a recorded image and to produce the most visually pleasing image.

Image restoration : The goal of image restoration is to start from a recorded, degraded image and to produce the best possible estimate of the original image. The goal of enhancement is beauty; the goal of restoration is truth. The measure of success in restoration is usually an error measure between the original and the estimated image. No mathematical error function is known that corresponds to human perceptual assessment of error.


Colour image processing : Colour image processing is based on the fact that any colour can be obtained by mixing the 3 basic colours red, green and blue. Hence 3 matrices are necessary, one representing each colour.

Wavelet and multiresolution processing : Many times a particular spectral component occurring at a particular instant can be of special interest. In these cases it may be very beneficial to know the time intervals in which these spectral components occur. For example, in EEGs the latency of an event-related potential is of particular interest. The wavelet transform is capable of providing time and frequency information simultaneously, hence giving a time-frequency representation of the signal. Although the time and frequency resolution problems are the result of a physical phenomenon (the Heisenberg uncertainty principle) and exist regardless of the transform used, it is possible to analyze any signal by using an alternative approach called multiresolution analysis (MRA). MRA analyzes the signal at different frequencies with different resolutions. MRA is designed to give good time resolution and poor frequency resolution at high frequencies, and good frequency resolution and poor time resolution at low frequencies.

Compression: Image compression is the application of data compression on digital images. Its objective is to reduce redundancy of the image data in order to be able to store or transmit data in an efficient form.

Morphological processing: Morphological processing is a collection of techniques for DIP based on mathematical morphology. Since these techniques rely only on the relative ordering of pixel values, not on their numerical values, they are especially suited to the processing of binary images and grayscale images.
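A minimal illustration of the two basic morphological operators, using SciPy's ndimage module on a made-up binary image (a 3x3 square of foreground pixels):

```python
import numpy as np
from scipy import ndimage

# A small binary image: a 3x3 square of foreground pixels.
img = np.zeros((5, 5), dtype=bool)
img[1:4, 1:4] = True

# Erosion shrinks objects; dilation grows them.
# The default structuring element is a 3x3 cross.
eroded = ndimage.binary_erosion(img)
dilated = ndimage.binary_dilation(img)

print(int(img.sum()), int(eroded.sum()), int(dilated.sum()))   # 9 1 21
```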

Segmentation: In the analysis of the objects in images it is essential that we can distinguish between the objects of interest and the rest. This latter group is also referred to as the background. The techniques that are used to find the objects of interest are usually referred to as segmentation techniques.
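The simplest segmentation technique, global thresholding, can be sketched on a toy image; the intensities and the threshold of 0.5 are made-up values, not from the text.

```python
import numpy as np

# Synthetic image: dark background (~0.2) with a brighter object of interest (~0.8).
img = np.full((6, 6), 0.2)
img[2:4, 2:4] = 0.8

# Global thresholding separates the object of interest from the background.
mask = img > 0.5
labels = np.where(mask, 1, 0)   # 1 = object, 0 = background
print(int(mask.sum()))          # 4 object pixels
```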


CHAPTER 2 FACE RECOGNITION USING DCT

2.1 Discrete Cosine Transform (DCT)

The DCT is a well-known signal analysis tool used in compression standards due to its compact representation power. Although the Karhunen-Loeve transform (KLT) is known to be the optimal transform in terms of information packing, its data-dependent nature makes it unfeasible for use in some practical tasks. Furthermore, the DCT closely approximates the compact representation ability of the KLT, which makes it a very useful tool for signal representation both in terms of information packing and in terms of computational complexity due to its data-independent nature.

Local Appearance Based Face Representation

Local appearance based face representation is a generic local approach and does not require detection of any salient local regions, such as eyes, as in the modular or component based approaches [5, 10] for face representation. Local appearance based face representation can be performed as follows: a detected and normalized face image is divided into blocks of 8x8 pixels. Each block is then represented by its DCT coefficients. The reason for choosing a block size of 8x8 pixels is to have blocks small enough that stationarity is provided and transform complexity is kept simple on one hand, and big enough to provide sufficient compression on the other hand. The top-left DCT coefficient is removed from the representation since it only represents the average intensity value of the block. From the remaining DCT coefficients, the ones containing the highest information are extracted via zigzag scan.

Fusion

To fuse the local information, the extracted features from the 8x8 pixel blocks can be combined at the feature level or at the decision level.

Feature Fusion

In feature fusion, the DCT coefficients obtained from each block are concatenated to construct the feature vector which is used by the classifier.
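The block-based representation described above can be sketched as follows, using SciPy's DCT. The 8x8 block size and the removal of the top-left (DC) coefficient follow the text; the choice of 5 coefficients per block and the 16x16 test image are made-up stand-ins for a normalized face.

```python
import numpy as np
from scipy.fft import dctn

def zigzag_indices(n=8):
    # Traverse anti-diagonals, alternating direction, as in JPEG's zigzag scan.
    idx = [(i, j) for i in range(n) for j in range(n)]
    return sorted(idx, key=lambda p: (p[0] + p[1],
                                      p[1] if (p[0] + p[1]) % 2 == 0 else p[0]))

def block_features(img, n_coeffs=5):
    """Per 8x8 block: 2-D DCT, drop the DC term, keep the first
    n_coeffs low-frequency coefficients in zigzag order."""
    h, w = img.shape
    order = zigzag_indices()[1:]          # skip (0, 0), the average intensity
    feats = []
    for i in range(0, h, 8):
        for j in range(0, w, 8):
            block = img[i:i + 8, j:j + 8]
            coeffs = dctn(block, norm='ortho')
            feats.extend(coeffs[p] for p in order[:n_coeffs])
    return np.array(feats)

img = np.arange(16 * 16, dtype=float).reshape(16, 16)  # toy 16x16 "face"
v = block_features(img)
print(v.shape)   # 4 blocks x 5 coefficients = (20,)
```

Concatenating the per-block coefficients into one vector, as done here, is exactly the feature-fusion variant described in the text.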


Decision Fusion

In decision fusion, classification is done separately on each block and the individual classification results are later combined.
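One common way to combine the per-block results is a majority vote; the block labels below are hypothetical.

```python
from collections import Counter

# Hypothetical per-block classification results (one identity label per 8x8 block).
block_votes = ['alice', 'alice', 'bob', 'alice', 'carol', 'alice']

# Decision fusion by majority vote over the individual block classifiers.
identity, count = Counter(block_votes).most_common(1)[0]
print(identity, count)   # alice 4
```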

2.2 Definition

Ahmed, Natarajan, and Rao (1974) first introduced the discrete cosine transform (DCT) in the early seventies. Ever since, the DCT has grown in popularity, and several variants have been proposed (Rao and Yip, 1990). In particular, the DCT was categorized by Wang (1984) into four slightly different transformations named DCT-I, DCT-II, DCT-III, and DCT-IV. Of the four classes Wang defined, DCT-II was the one first suggested by Ahmed et al., and it is the one of concern in this paper.
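A direct transcription of the DCT-II definition can be checked against SciPy's implementation, which uses the same unnormalized convention:

```python
import numpy as np
from scipy.fft import dct

def dct2_naive(x):
    """DCT-II in scipy's unnormalized convention:
    y[k] = 2 * sum_n x[n] * cos(pi * k * (2n + 1) / (2N))."""
    N = len(x)
    n = np.arange(N)
    return np.array([2 * np.sum(x * np.cos(np.pi * k * (2 * n + 1) / (2 * N)))
                     for k in range(N)])

x = np.array([1.0, 2.0, 3.0, 4.0])
# The naive double-loop version agrees with the library's fast implementation.
assert np.allclose(dct2_naive(x), dct(x, type=2))
```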

Compression Performance in Terms of the Variance Distribution

The Karhunen-Loeve transform (KLT) is a statistically optimal transform based on a number of performance criteria. One of these criteria is the variance distribution of transform coefficients. This criterion judges the performance of a discrete transform by measuring its variance distribution for a random sequence having some specific probability distribution function (Rao and Yip, 1990). It is desirable to have a small number of transform coefficients with large variances, such that all other coefficients can be discarded with little error in the reconstruction of signals from the ones retained. The error criterion generally used when reconstructing from truncated transforms is the mean-square error (MSE). In terms of pattern recognition, it is noted that dimensionality reduction is perhaps as important an objective as class separability in an application such as face recognition. Thus, a transform exhibiting large variance distributions for a small number of coefficients is desirable. This is so because such a transform would require less information to be stored and used for recognition. In this respect, as well as others, the DCT has been shown to approach the optimality of the KLT (Pratt, 1991). The variance distribution for the various discrete transforms is usually measured when the input sequence is a stationary first-order Markov process (Markov-1 process). Such a process has an autocovariance matrix of the form shown in Eq. (2.6) and provides a good model for the scan lines of gray-scale images (Jain, 1989). The matrix in Eq. (2.6) is a Toeplitz matrix, which is expected since the process is stationary (Jain, 1989). Thus,

the variance distribution measures are usually computed for random sequences of length N that result in an auto-covariance matrix of the form:

R = [ 1          ρ          ρ^2    ...  ρ^(N-1)
      ρ          1          ρ      ...  ρ^(N-2)
      ...        ...        ...    ...  ...
      ρ^(N-1)    ρ^(N-2)    ...    ...  1     ],   correlation coefficient |ρ| < 1    (2.6)

Figure 2.1 Variance distribution for a selection of discrete transforms for N = 16 and ρ = 0.9 (adapted from K.R. Rao and P. Yip, Discrete Cosine Transform: Algorithms, Advantages, Applications, New York: Academic, 1990). Data is shown for the following transforms: discrete cosine transform (DCT), discrete Fourier transform (DFT), slant transform (ST), discrete sine transform (type I) (DST-I), discrete sine transform (type II) (DST-II), and Karhunen-Loeve transform (KLT).

Figure 2.1 shows the variance distribution for a selection of discrete transforms given a first-order Markov process of length N = 16 and ρ = 0.9. The data for this curve were obtained directly from Rao and Yip (1990), in which
other curves for different lengths are also presented. The purpose here is to illustrate that the DCT variance distribution, when compared to other deterministic transforms, decreases most rapidly. The DCT variance distribution is also very close to that of the KLT, which confirms its near optimality. Both of these observations highlight the potential of the DCT for data compression and, more importantly, feature extraction.
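The variance-distribution comparison can be reproduced numerically for the Markov-1 process of Eq. (2.6). This sketch assumes SciPy and uses the parameters N = 16 and ρ = 0.9 from Figure 2.1; the coefficient variances of any orthonormal transform T are the diagonal of T R Tᵀ, and the KLT variances are the eigenvalues of R.

```python
import numpy as np
from scipy.fft import dct

N, rho = 16, 0.9
# Auto-covariance of a first-order Markov process: R[i, j] = rho**|i - j| (Eq. 2.6).
R = rho ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))

# Orthonormal DCT-II matrix: column j is the DCT of the j-th unit vector.
D = dct(np.eye(N), axis=0, norm='ortho')
dct_var = np.sort(np.diag(D @ R @ D.T))[::-1]

# KLT coefficient variances are the eigenvalues of R.
klt_var = np.sort(np.linalg.eigvalsh(R))[::-1]

# Both transforms pack most of the total variance (trace of R = N) into the
# first few coefficients, and the DCT closely tracks the optimal KLT.
print(round(dct_var[:4].sum() / N, 3), round(klt_var[:4].sum() / N, 3))
```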

Comparison with the KLT

The KLT completely decorrelates a signal in the transform domain, minimizes MSE in data compression, contains the most energy (variance) in the fewest number of transform coefficients, and minimizes the total representation entropy of the input sequence (Rosenfeld and Kak, 1976). All of these properties, particularly the first two, are extremely useful in pattern recognition applications. The computation of the KLT essentially involves the determination of the eigenvectors of a covariance matrix of a set of training sequences (images in the case of face recognition). In particular, given M training images of size, say, N x N, the covariance matrix of interest is given by

C = A A^T    (2.7)

where A is a matrix whose columns are the M training images (after having an average face image subtracted from each of them) reshaped into N^2-element vectors. Note that because of the size of A, the computation of the eigenvectors of C may be intractable. However, as discussed in Turk and Pentland (1991), because M is usually much smaller than N^2 in face recognition, the eigenvectors of C can be obtained more efficiently by computing the eigenvectors of another, smaller matrix (see (Turk and Pentland, 1991) for details). Once the eigenvectors of C are obtained, only those with the highest corresponding eigenvalues are usually retained to form the KLT basis set. One measure of the fraction of eigenvectors retained for the KLT basis set is given by the ratio

( Σ_{l=1}^{M'} λ_l ) / ( Σ_{l=1}^{M} λ_l )
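The small-matrix trick referred to above (Turk and Pentland, 1991) can be sketched as follows; the image size and training-set size are arbitrary, and the "images" are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
N2, M = 64 * 64, 10                # image length N^2 = 4096, M = 10 training images
A = rng.standard_normal((N2, M))   # columns: mean-subtracted training images

# Direct eigen-decomposition of C = A A^T (4096 x 4096) is expensive.
# If v is an eigenvector of the small M x M matrix L = A^T A with eigenvalue
# lambda, then A v is an eigenvector of C with the same (nonzero) eigenvalue.
L = A.T @ A
lam, V = np.linalg.eigh(L)
U = A @ V                          # columns: (unnormalized) eigenvectors of C
U /= np.linalg.norm(U, axis=0)

# Verify C u = lambda u for the largest eigenpair, without forming C explicitly.
C_u = A @ (A.T @ U[:, -1])
assert np.allclose(C_u, lam[-1] * U[:, -1])
```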


where λ_l is the l-th eigenvalue of C and M' is the number of eigenvectors forming the KLT basis set. As can be seen from the definition of C in Eq. (2.7), the KLT basis functions are data-dependent. Now, in the case of a first-order Markov process, these basis functions can be found analytically (Rao and Yip, 1990). Moreover, these functions can be shown to be asymptotically equivalent to the DCT basis functions as ρ → 1 (ρ of Eq. (2.6)) for any given N, and as N → ∞ for any given ρ (Rao and Yip, 1990). It is this asymptotic equivalence that explains the near optimal performance of the DCT in terms of its variance distribution for first-order Markov processes. In fact, this equivalence also explains the near optimal performance of the DCT based on a handful of other criteria such as energy packing efficiency, residual correlation, and mean-square error in estimation (Rao and Yip, 1990). This provides a strong justification for the use of the DCT for face recognition. Specifically, since the KLT has been shown to be very effective in face recognition (Pentland et al., 1994), it is expected that a deterministic transform that is mathematically related to it would probably perform just as well in the same application.

As for the computational complexity of the DCT and KLT, it is evident from the above overview that the KLT requires significant processing during training, since its basis set is data-dependent. This overhead in computation, albeit occurring in a non-time-critical off-line training process, is alleviated with the DCT. As for online feature extraction, the KLT of an N x N image can be computed in O(M'N^2) time, where M' is the number of KLT basis vectors. In comparison, the DCT of the same image can be computed in O(N^2 log2 N) time because of its relation to the discrete Fourier transform, which can be implemented efficiently using the fast Fourier transform (Oppenheim and Schafer, 1989). This means that the DCT can be computationally more efficient than the KLT, depending on the size of the KLT basis set. It is thus concluded that the discrete cosine transform is very well suited to application in face recognition. Because of the similarity of its basis functions to those of the KLT, the DCT exhibits striking feature extraction and data compression capabilities. In fact, coupled with these, the ease and speed of the computation of the DCT may even favor it over the KLT in face recognition.


CHAPTER 3 FACE NORMALIZATION AND RECOGNITION

3.1 Basic Algorithm

The face recognition algorithm discussed in this paper is depicted in Fig. 3.1. It involves both face normalization and recognition. Since face and eye localization is not performed automatically, the eye coordinates of the input faces need to be entered manually in order to normalize the faces correctly. This requirement is not a major limitation because the algorithm can easily be invoked after running a localization system such as the one presented in Jebara (1996) or others in the literature. As can be seen from Fig. 3.2, the system receives as input an image containing a face along with its eye coordinates. It then executes both geometric and illumination normalization functions, as will be described later. Once a normalized (and cropped) face is obtained, it can be compared to other faces under the same nominal size, orientation, position, and illumination conditions. This comparison is based on features extracted using the DCT. The basic idea here is to compute the DCT of the normalized face and retain a certain subset of the DCT coefficients as a feature vector describing this face. This feature vector contains the low-to-mid frequency DCT coefficients, as these are the ones having the highest variance. To recognize a particular input face, the system compares this face's feature vector to the feature vectors of the database faces using a Euclidean distance nearest-neighbor classifier (Duda and Hart, 1973). If the feature vector of the probe is v and that of a database face is f, then the Euclidean distance between the two is

d = sqrt( (f_0 - v_0)^2 + (f_1 - v_1)^2 + ... + (f_{M-1} - v_{M-1})^2 )    (3.1)

where

v = [v_0 v_1 ... v_{M-1}]^T,  f = [f_0 f_1 ... f_{M-1}]^T    (3.2)

and M is the number of DCT coefficients retained as features. A match is obtained by minimizing d. Note that this approach computes the DCT on the entire normalized image. This is different from the use of the DCT in the JPEG compression standard (Pennebaker and Mitchell, 1993), in which the DCT is computed on individual subsets of the image. The use of the DCT on individual subsets of an image, as
in the JPEG standard, for face recognition has been proposed in Shneier and Abdel-Mottaleb (1996) and Eickeler et al. (2000). Also, note that this approach basically assumes no threshold on d. That is, the system described always assumes that the closest match is the correct match, and no probe is ever rejected as unknown. If a threshold q is defined on d, then the gallery face that minimizes d would only be output as the match when d < q.

An eigen value can have multiplicity greater than 1, yet there is only one eigen vector. This is illustrated by [1,1|0,1], a function that tilts the x axis counterclockwise and leaves the y axis alone. The eigen values are 1 and 1, and the eigen vector is 0,1, namely the y axis.

The Same Eigen Value

Let two eigen vectors have the same eigen value. Specifically, let a linear map multiply the vectors v and w by the scaling factor l. By linearity, 3v+4w is also scaled by l. In fact every linear combination of v and w is scaled by l. When a set of vectors has a common eigen value, the entire space spanned by those vectors is an eigen space, with the same eigen value. This is not surprising, since the eigen vectors associated with l are precisely the kernel of the transformation defined by the matrix M with l subtracted from the main diagonal. This kernel is a vector space, and so is the eigen space of l. Select a basis b for the eigen space of l. The vectors in b are eigen vectors, with eigen value l, and every eigen vector with eigen value l is spanned by b. Conversely, an eigen vector with some other eigen value lies outside of b.

Different Eigen Values

Different eigen values always lead to independent eigen spaces. Suppose we have the shortest counterexample. Thus c1x1 + c2x2 + ... + ckxk = 0. Here x1 through xk are the eigen vectors,
and c1 through ck are the coefficients that prove the vectors form a dependent set. Furthermore, the vectors represent at least two different eigen values. Let the first 7 vectors share a common eigen value l. If these vectors are dependent, then one of them can be expressed as a linear combination of the other 6. Make this substitution and find a shorter list of dependent eigen vectors that do not all share the same eigen value. The first 6 have eigen value l, and the rest have some other eigen value. Remember, we selected the shortest list, so this is a contradiction. Therefore the eigen vectors associated with any given eigen value are independent.

Scale all the coefficients c1 through ck by a common factor s. This does not change the fact that the sum of the ci xi is still zero. However, other than this scaling factor, we will prove there are no other coefficients that carry the eigen vectors to 0. If there are two independent sets of coefficients that lead to 0, scale them so the first coefficients in each set are equal, then subtract. This gives a shorter linear combination of dependent eigen vectors that yields 0. More than one vector remains, else cjxj = 0, and xj is the 0 vector. We already showed these dependent eigen vectors cannot share a common eigen value, else they would be linearly independent; thus multiple eigen values are represented. This is a shorter list of dependent eigen vectors with multiple eigen values, which is a contradiction. If a set of coefficients carries our eigen vectors to 0, it must be a scalar multiple of c1, c2, c3, ..., ck.

Now take the sum of the ci xi and multiply by M on the right. In other words, apply the linear transformation. The image of 0 ought to be 0. Yet each coefficient is effectively multiplied by the eigen value for its eigen vector, and not all eigen values are equal. In particular, not all eigen values are 0.
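The [1,1|0,1] example from the start of this section can be checked numerically. Since the text applies the matrix to row vectors (v M), the sketch below takes eigenvectors of the transpose to match NumPy's column-vector convention.

```python
import numpy as np

# The map [1,1 | 0,1] from the text, acting on row vectors: it tilts the
# x axis and leaves the y axis alone.
M = np.array([[1.0, 1.0],
              [0.0, 1.0]])
lam, vecs = np.linalg.eig(M.T)
print(lam)            # both eigen values are 1

# Despite eigenvalue 1 having multiplicity 2, the eigen space is only
# one-dimensional: every eigen vector is a multiple of (0, 1), the y axis.
v = np.array([0.0, 1.0])
assert np.allclose(v @ M, v)                                      # y axis fixed
assert not np.allclose(np.array([1.0, 0.0]) @ M, np.array([1.0, 0.0]))  # x axis tilted
```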

5.2 Axis of Rotation

Here is a simple application of eigenvectors. A rigid rotation in 3-space always has an axis of rotation. Let M implement the rotation. The determinant of M, with λ subtracted from its main diagonal, gives a cubic polynomial in λ, and every cubic has at least one real root. Since lengths are preserved by a rotation, |λ| is 1. If λ were -1 we would have a reflection. So λ = 1, and the space rotates through some angle about the eigenvector: that eigenvector is the axis of rotation. That is why every planet, every star, has an axis of rotation.
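To illustrate, the sketch below (an illustrative example, not from the report) builds a rotation matrix about the z-axis and checks that the axis vector is an eigenvector with eigenvalue 1, i.e. the rotation leaves its axis fixed.

```python
import math

def rot_z(theta):
    """3x3 matrix for a rigid rotation by theta about the z-axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]

M = rot_z(math.pi / 3)      # rotate by 60 degrees
axis = [0.0, 0.0, 1.0]      # the z-axis

# M * axis == 1 * axis: the axis is an eigenvector with eigenvalue 1.
image = matvec(M, axis)
assert all(abs(image[i] - axis[i]) < 1e-12 for i in range(3))
print(image)
```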


Matching algorithm (both images are compared and the result, Match Found or Match Not Found, is displayed):

1. Load the training-set images into the database.
2. Calculate the mean of all images.
3. Calculate the eigenvectors of the correlation matrix.
4. Calculate the minimum Euclidean distance between the test image and the training images.
5. Determine whether a face matches: if so, display "Match Found"; otherwise display "Match Not Found".
6. End.

Fig. 5.1 Flow chart for determining whether two images are the same or not
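The flow of Fig. 5.1 can be sketched in a few lines. The example below is an illustrative, simplified rendering in Python (images as flat grayscale lists, with the eigenvector-projection step omitted for brevity); names such as train_db and the threshold value are hypothetical, not taken from the report.

```python
import math

# Tiny "images": flat grayscale vectors (hypothetical 2x2 images).
train_db = [
    [10.0, 20.0, 30.0, 40.0],       # person A
    [200.0, 210.0, 220.0, 230.0],   # person B
]

def mean_image(images):
    """Pixel-wise mean of all training images."""
    n = len(images)
    return [sum(img[i] for img in images) / n for i in range(len(images[0]))]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match(test, images, threshold):
    """Return (best_index, found) by minimum Euclidean distance
    to the mean-subtracted training images."""
    mu = mean_image(images)
    centred = [[p - m for p, m in zip(img, mu)] for img in images]
    test_c = [p - m for p, m in zip(test, mu)]
    dists = [euclidean(test_c, c) for c in centred]
    best = min(range(len(dists)), key=lambda i: dists[i])
    return best, dists[best] <= threshold

idx, found = match([12.0, 19.0, 31.0, 41.0], train_db, threshold=250.0)
print("Match Found" if found else "Match Not Found", idx)
```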


5.3 Outline of a typical face recognition system

Modules in face recognition

The acquisition module. This is the entry point of the face recognition process: the module where the face image under consideration is presented to the system. In other words, the user is asked to present a face image to the face recognition system in this module. An acquisition module can request a face image from several different environments: the face image can be an image file located on a magnetic disk, it can be captured by a frame grabber and camera, or it can be scanned from paper with the help of a scanner.

The pre-processing module. In this module, by means of early vision techniques, face images are normalized and, if desired, enhanced to improve the recognition performance of the system. Some or all of the following pre-processing steps may be implemented in a face recognition system:

1. Image size (resolution) normalization: usually done to change the acquired image size to a default size on which the face recognition system operates.
2. Histogram equalization: usually applied to too-dark or too-bright images in order to enhance image quality and improve face recognition performance. It modifies the dynamic range (contrast range) of the image and, as a result, some important facial features become more apparent.
3. Median filtering: for noisy images, especially those obtained from a camera or a frame grabber, median filtering can clean the image without losing information.
4. High-pass filtering: feature extractors that are based on facial outlines may benefit from the results of an edge-detection scheme. High-pass filtering emphasizes the details of an image, such as contours, which can dramatically improve edge-detection performance.
5. Background removal: in order to deal primarily with the facial information itself, the background can be removed. This is especially important for face recognition systems in which the entire information contained in the image is used, where the module should be capable of determining the face outline.


6. Translational and rotational normalization: in some cases, it is possible to work on a face image in which the head is somehow shifted or rotated. The head plays the key role in the determination of facial features. Especially for face recognition systems based on frontal views of faces, it may be desirable that the pre-processing module determine and, if possible, normalize the shifts and rotations in the head position.
7. Illumination normalization: face images taken under different illumination conditions can degrade recognition performance, especially for face recognition systems based on principal component analysis, in which the entire face information is used for recognition. Normalization is done to account for this.

The feature extraction module. After performing any necessary pre-processing, the normalized face image is presented to the feature extraction module in order to find the key features that are going to be used for classification. In other words, this module is responsible for composing a feature vector that represents the face image well enough.

The classification module. In this module, with the help of a pattern classifier, the extracted features of the face image are compared with the ones stored in a face library (or face database). After this comparison, the face image is classified as either known or unknown.

Training set. Training sets are used during the learning phase of the face recognition process. The feature extraction and classification modules adjust their parameters by making use of training sets in order to achieve optimum recognition performance.
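As a concrete instance of the histogram equalization step above, the sketch below implements a minimal equalization for an 8-bit grayscale image stored as a flat list (pure Python; an illustrative sketch using the standard CDF remapping, not the report's implementation).

```python
def equalize(pixels, levels=256):
    """Histogram-equalize a flat list of integer grayscale pixels
    using the standard CDF remapping."""
    n = len(pixels)
    # Histogram of pixel values.
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution function.
    cdf, total = [0] * levels, 0
    for v in range(levels):
        total += hist[v]
        cdf[v] = total
    cdf_min = next(c for c in cdf if c > 0)
    # Remap each value so the output histogram is roughly flat.
    def remap(p):
        if n == cdf_min:
            return p  # constant image: nothing to equalize
        return round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
    return [remap(p) for p in pixels]

# A dark image's values get spread over the full dynamic range.
dark = [10, 10, 11, 12, 12, 12, 13, 14]
print(equalize(dark))
```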

5.4 Problems that may occur during face recognition

Due to the dynamic nature of face images, a face recognition system encounters various problems during the recognition process. It is possible to classify a face recognition system as either robust or weak based on its recognition performance under these circumstances.


The objectives of a robust face recognition system are given below:

1. Scale invariance: the same face can be presented to the system at different scales, depending on the focal distance between the face and the camera. As this distance gets shorter, the face image gets bigger.
2. Shift invariance: the same face can be presented to the system at different perspectives and orientations. For instance, face images of the same person could be taken from frontal and profile views. In addition, head orientation may change due to translations and rotations.
3. Illumination invariance: face images of the same person can be taken under different illumination conditions; for example, the position and the strength of the light source can vary.
4. Emotional expression and detail invariance: face images of the same person can differ in expression, as when smiling or laughing. Also, details such as dark glasses, a beard or a moustache may be present.
5. Noise invariance: a robust face recognition system should be insensitive to noise generated by frame grabbers or cameras. It should also function with partially occluded images.


CHAPTER 6 DEVELOPING TOOLS

6.1 MATLAB Introduction

MATLAB is a high-performance language for technical computing. It integrates computation, visualization and programming in an easy-to-use environment. MATLAB stands for matrix laboratory. It was originally written to provide easy access to the matrix software developed by the LINPACK (linear system package) and EISPACK (eigensystem package) projects. MATLAB is therefore built on a foundation of sophisticated matrix software in which the basic element is a matrix that does not require pre-dimensioning.

Typical uses of MATLAB:
1. Math and computation
2. Algorithm development
3. Data acquisition
4. Data analysis, exploration and visualization
5. Scientific and engineering graphics

The main features of MATLAB:
1. Advanced algorithms for high-performance numerical computation, especially in the field of matrix algebra
2. A large collection of predefined mathematical functions and the ability to define one's own functions
3. Two- and three-dimensional graphics for plotting and displaying data
4. A complete online help system
5. A powerful, matrix/vector-oriented high-level programming language for individual applications
6. Toolboxes available for solving advanced problems in several application areas

Features and capabilities of MATLAB


[Block diagram of the MATLAB system: the MATLAB programming language at the core, with user-written and built-in functions; graphics (2-D and 3-D); computation (linear algebra, signal processing); external interfaces to C and FORTRAN; and toolboxes for signal processing, image processing, control systems, neural networks, communications, robust control and statistics.]

6.2 DIP using MATLAB

MATLAB deals with:
1. Basic flow control and the programming language
2. How to write scripts (main functions) with MATLAB
3. How to write functions with MATLAB
4. How to use the debugger
5. How to use the graphical interface


6. Examples of useful scripts and functions for image processing

After learning about MATLAB we will be able to use it as a tool to help us with our maths, electronics, signal and image processing, statistics, neural networks, control and automation.

MATLAB resources

Language: a high-level matrix/vector language with
- scripts and main programs
- functions
- flow statements (for, while)
- control statements (if, else)
- data structures (struct, cells)
- input/output (read, write, save)
- object-oriented programming

Environment:
- command window
- editor
- debugger
- profiler (to evaluate performance)

Mathematical libraries: a vast collection of functions

API:
- call C functions from MATLAB
- call MATLAB functions from C

Scripts and main programs

In MATLAB, scripts are the equivalent of main programs. The variables declared in a script are visible in the workspace and can be saved. Scripts can therefore take a lot of memory if you are not careful, especially when dealing with images. To create a script, start the editor, write your code and run it.


6.3 MATLAB functions

1. imread: Read images from graphics files.

Syntax:
A = imread(filename,fmt)
[X,map] = imread(filename,fmt)
[...] = imread(filename)
[...] = imread(...,idx) (TIFF only)
[...] = imread(...,ref) (HDF only)
[...] = imread(...,'BackgroundColor',BG) (PNG only)
[A,map,alpha] = imread(...) (PNG only)

Description:
A = imread(filename,fmt) reads a grayscale or truecolor image named filename into A. If the file contains a grayscale intensity image, A is a two-dimensional array. If the file contains a truecolor (RGB) image, A is a three-dimensional (m-by-n-by-3) array.

[X,map] = imread(filename,fmt) reads the indexed image in filename into X and its associated colormap into map. The colormap values are rescaled to the range [0,1]. X and map are two-dimensional arrays.

[...] = imread(filename) attempts to infer the format of the file from its content. filename is a string that specifies the name of the graphics file, and fmt is a string that specifies the format of the file. If the file is not in the current directory or in a directory on the MATLAB path, specify the full pathname of its location on your system. If imread cannot find a file named filename, it looks for a file named filename.fmt. If you do not specify a string for fmt, the toolbox tries to discern the format of the file by checking the file header.


6.1 This table lists the possible values for fmt.

Format           File type
'bmp'            Windows Bitmap (BMP)
'hdf'            Hierarchical Data Format (HDF)
'jpg' or 'jpeg'  Joint Photographic Experts Group (JPEG)
'pcx'            Windows Paintbrush (PCX)
'png'            Portable Network Graphics (PNG)
'tif' or 'tiff'  Tagged Image File Format (TIFF)
'xwd'            X Windows Dump (XWD)

Special Case Syntax

TIFF-Specific Syntax: [...] = imread(...,idx) reads in one image from a multi-image TIFF file. idx is an integer value that specifies the order in which the image appears in the file. For example, if idx is 3, imread reads the third image in the file. If you omit this argument, imread reads the first image in the file; to read the other images, call imread again with the appropriate idx values.

PNG-Specific Syntax: The discussion in this section is only relevant to PNG files that contain transparent pixels. A PNG file does not necessarily contain transparency data. Transparent pixels, when they exist, are identified by one of two components: a transparency chunk or an alpha channel. (A PNG file can have only one of these components, not both.) The transparency chunk identifies which pixel values will be treated as transparent; e.g., if the value in the transparency chunk of an 8-bit image is 0.5020, all pixels in the image with the color 0.5020 can be displayed as transparent. An alpha channel is an array with the same number of pixels as the image, which indicates the transparency status of each corresponding pixel (transparent or nontransparent). Another potential PNG component related to


transparency is the background color chunk, which (if present) defines a color value that can be used behind all transparent pixels. This section identifies the default behavior of the toolbox for reading PNG images that contain either a transparency chunk or an alpha channel, and describes how you can override it.

HDF-Specific Syntax: [...] = imread(...,ref) reads in one image from a multi-image HDF file. ref is an integer value that specifies the reference number used to identify the image. For example, if ref is 12, imread reads the image whose reference number is 12. (Note that in an HDF file the reference numbers do not necessarily correspond to the order of the images in the file. You can use imfinfo to match up image order with reference number.) If you omit this argument, imread reads the first image in the file.

6.2 This table summarizes the types of images that imread can read.

BMP: 1-bit, 4-bit, 8-bit, and 24-bit uncompressed images; 4-bit and 8-bit run-length encoded (RLE) images
HDF: 8-bit raster image datasets, with or without associated colormap; 24-bit raster image datasets
JPEG: any baseline JPEG image (8- or 24-bit); JPEG images with some commonly used extensions
PCX: 1-bit, 8-bit, and 24-bit images
PNG: any PNG image, including 1-bit, 2-bit, 4-bit, 8-bit, and 16-bit grayscale images; 8-bit and 16-bit indexed images; 24-bit and 48-bit RGB images
TIFF: any baseline TIFF image, including 1-bit, 8-bit, and 24-bit uncompressed images; 1-bit, 8-bit, 16-bit, and 24-bit images with packbits compression; 1-bit images with CCITT compression; also 16-bit grayscale, 16-bit indexed, and 48-bit RGB images
XWD: 1-bit and 8-bit ZPixmaps; XYBitmaps; 1-bit XYPixmaps


2. imshow: Display image

Syntax:
imshow(I)
imshow(I,[low high])
imshow(RGB)
imshow(BW)
imshow(X,map)
imshow(filename)
himage = imshow(...)
imshow(..., param1, val1, param2, val2,...)

Description:
imshow(I) displays the grayscale image I.

imshow(I,[low high]) displays the grayscale image I, specifying the display range for I in [low high]. The value low (and any value less than low) displays as black; the value high (and any value greater than high) displays as white. Values in between are displayed as intermediate shades of gray, using the default number of gray levels. If you use an empty matrix ([]) for [low high], imshow uses [min(I(:)) max(I(:))]; that is, the minimum value in I is displayed as black, and the maximum value is displayed as white.

imshow(RGB) displays the truecolor image RGB.

imshow(BW) displays the binary image BW. imshow displays pixels with the value 0 (zero) as black and pixels with the value 1 as white.

imshow(X,map) displays the indexed image X with the colormap map. A colormap matrix may have any number of rows, but it must have exactly 3 columns. Each row is interpreted as a color, with the first element specifying the intensity of red light, the second green, and the third blue. Color intensity can be specified on the interval 0.0 to 1.0.


imshow(filename) displays the image stored in the graphics file filename. The file must contain an image that can be read by imread or dicomread. imshow calls imread or dicomread to read the image from the file, but does not store the image data in the MATLAB workspace. If the file contains multiple images, the first one is displayed. The file must be in the current directory or on the MATLAB path.

Remarks: imshow is the toolbox's fundamental image display function, optimizing figure, axes, and image object property settings for image display. imtool provides all the image display capabilities of imshow but also provides access to several other tools for navigating and exploring images, such as the Pixel Region tool, the Image Information tool, and the Adjust Contrast tool. imtool presents an integrated environment for displaying images and performing some common image processing tasks.

Example: display an image from a file.
X = imread('moon.tif');
imshow(X)


6.4 MATLAB Desktop Introduction

When you start MATLAB, the MATLAB desktop appears, containing tools (graphical user interfaces) for managing files, variables, and applications associated with MATLAB. You can customize the default arrangement of tools and documents to suit your needs; see the MATLAB documentation for more information about the desktop tools.


6.5 Implementations

1. Arithmetic operations

Entering Matrices

The best way to get started with MATLAB is to learn how to handle matrices. Start MATLAB and follow along with each example. You can enter matrices into MATLAB in several different ways:
- Enter an explicit list of elements.
- Load matrices from external data files.
- Generate matrices using built-in functions.
- Create matrices with your own functions in M-files.

Start by entering Dürer's matrix as a list of its elements. You only have to follow a few basic conventions:
- Separate the elements of a row with blanks or commas.
- Use a semicolon to indicate the end of each row.
- Surround the entire list of elements with square brackets, [ ].

To enter the matrix, simply type in the Command Window
A = [16 3 2 13; 5 10 11 8; 9 6 7 12; 4 15 14 1]
MATLAB displays the matrix you just entered:
A =
    16     3     2    13
     5    10    11     8
     9     6     7    12
     4    15    14     1
This matrix matches the numbers in the engraving. Once you have entered the matrix, it is automatically remembered in the MATLAB workspace. You can refer to it simply as A.

sum, transpose, and diag

You are probably already aware that the special properties of a magic square have to do with the various ways of summing its elements. If you take the sum along any row or column, or


along either of the two main diagonals, you will always get the same number. Let us verify that using MATLAB. The first statement to try is
sum(A)
MATLAB replies with
ans =
    34    34    34    34
When you do not specify an output variable, MATLAB uses the variable ans, short for answer, to store the results of a calculation. You have computed a row vector containing the sums of the columns of A. Sure enough, each of the columns has the same sum, the magic sum, 34.

How about the row sums? MATLAB has a preference for working with the columns of a matrix, so one way to get the row sums is to transpose the matrix, compute the column sums of the transpose, and then transpose the result. (An additional way that avoids the double transpose is to use the dimension argument of the sum function.) MATLAB has two transpose operators. The apostrophe operator (e.g., A') performs a complex conjugate transposition: it flips a matrix about its main diagonal and also changes the sign of the imaginary component of any complex elements of the matrix. The dot-apostrophe operator (e.g., A.') transposes without affecting the sign of complex elements. For matrices containing all real elements, the two operators return the same result. So A' produces
ans =
    16     5     9     4
     3    10     6    15
     2    11     7    14
    13     8    12     1
and sum(A')' produces a column vector containing the row sums
ans =
    34
    34
    34
    34

The sum of the elements on the main diagonal is obtained with the sum and the diag functions: diag(A) produces
ans =
    16
    10
     7
     1
and sum(diag(A)) produces
ans =
    34
The other diagonal, the so-called antidiagonal, is not so important mathematically, so MATLAB does not have a ready-made function for it. But a function originally intended for use in graphics, fliplr, flips a matrix from left to right:
sum(diag(fliplr(A)))
ans =
    34
You have verified that the matrix in Dürer's engraving is indeed a magic square and, in the process, have sampled a few MATLAB matrix operations.

Operators

Expressions use familiar arithmetic operators and precedence rules:
+ Addition


- Subtraction
* Multiplication
/ Division
\ Left division (described in Matrices and Linear Algebra in the MATLAB documentation)
^ Power
' Complex conjugate transpose
( ) Specify evaluation order

Generating Matrices

MATLAB provides four functions that generate basic matrices:
zeros All zeros
ones All ones
rand Uniformly distributed random elements
randn Normally distributed random elements

Here are some examples:
Z = zeros(2,4)
Z =
     0     0     0     0
     0     0     0     0
F = 5*ones(3,3)
F =
     5     5     5
     5     5     5
     5     5     5
N = fix(10*rand(1,10))
N =
     9     2     6     4     8     7     4     0     8     4
R = randn(4,4)
R =
    0.6353    0.0860   -0.3210   -1.2316
   -0.6014   -2.0046    1.2366    1.0556
    0.5512   -0.4931   -0.6313   -0.1132
   -1.0998    0.4620   -2.3252    0.3792

M-Files

You can create your own matrices using M-files, which are text files containing MATLAB code. Use the MATLAB Editor or another text editor to create a file containing the same statements you would type at the MATLAB command line. Save the file under a name that ends in .m. For example, create a file containing these five lines:
A = [...
16.0 3.0 2.0 13.0
5.0 10.0 11.0 8.0
9.0 6.0 7.0 12.0
4.0 15.0 14.0 1.0 ];
Store the file under the name magik.m. Then the statement magik reads the file and creates a variable, A, containing our example matrix.

6.6 Graph Components

MATLAB displays graphs in a special window known as a figure. To create a graph, you need to define a coordinate system; therefore every graph is placed within axes, which are contained by the figure. The actual visual representation of the data is achieved with graphics objects such as lines and surfaces. These objects are drawn within the coordinate system defined by the axes, which MATLAB automatically creates specifically to accommodate the range of the data. The actual data is stored as properties of the graphics objects.

Plotting Tools

Plotting tools are attached to figures and create an environment for creating graphs. These tools enable you to do the following:
- Select from a wide variety of graph types
- Change the type of graph that represents a variable
- See and set the properties of graphics objects
- Annotate graphs with text, arrows, etc.
- Create and arrange subplots in the figure
- Drag and drop data into graphs
Display the plotting tools from the View menu or by clicking the plotting tools icon in the figure toolbar.


Editor/Debugger

Use the Editor/Debugger to create and debug M-files, which are programs you write to run MATLAB functions. The Editor/Debugger provides a graphical user interface for text editing as well as for M-file debugging. To create or edit an M-file, use File > New or File > Open, or use the edit function.


CHAPTER 7 CONCLUSION AND FUTURE SCOPE

7.1 Conclusion

High information redundancy and strong correlations in face images result in inefficiencies when such images are used directly in recognition tasks. In this project, the Discrete Cosine Transform (DCT) is used to reduce image information redundancy, because only a subset of the transform coefficients is necessary to preserve the most important facial features, such as hair outline, eyes and mouth. We demonstrate experimentally that when DCT coefficients are fed into a backpropagation neural network for classification, high recognition rates can be achieved using only a small proportion (0.19%) of the available transform components.
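The energy-compaction property that makes this coefficient truncation work can be illustrated with a direct DCT-II implementation (pure Python; an illustrative sketch with made-up sample values, not the project's code). For a smooth signal, almost all of the energy lands in the first few coefficients, so the rest can be discarded with little loss.

```python
import math

def dct2_1d(x):
    """Direct DCT-II of a 1-D signal (unnormalized)."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k)
                for n in range(N))
            for k in range(N)]

# A smooth, slowly varying "scanline" of pixel values.
signal = [10.0, 11.0, 13.0, 16.0, 20.0, 25.0, 31.0, 38.0]
coeffs = dct2_1d(signal)

energy = [c * c for c in coeffs]
total = sum(energy)
# Fraction of the total energy captured by the first two coefficients.
frac = (energy[0] + energy[1]) / total
print(round(frac, 4))
```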

7.2 Future scope

Based on energy probability, we propose a new feature extraction method for face recognition. Our method consists of three steps. First, face images are transformed into the DCT domain. Second, energy probability is applied to the DCT domain of the face image for the purpose of dimension reduction of the data and optimization of the valid information. Third, in order to obtain the most salient and invariant features of the face images, LDA is applied to the data extracted with the frequency mask; this facilitates the selection of useful DCT frequency bands for recognition, because not all bands are useful for classification. Finally, the linear discriminative features are extracted by LDA and classification is performed with the nearest neighbor classifier. By reducing the dimension of the data and optimizing the valid information, the proposed method has shown better recognition performance than PCA plus LDA and the existing DCT method.
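The idea of keeping only high-energy DCT bands can be sketched as a simple mask over coefficients. This is an illustrative sketch only: the mask rule and the sample values below are assumptions, not the report's method.

```python
def energy_mask(coeffs, keep_fraction):
    """Keep the highest-energy fraction of coefficients; zero the rest."""
    ranked = sorted(range(len(coeffs)), key=lambda i: coeffs[i] ** 2,
                    reverse=True)
    keep = set(ranked[:max(1, int(len(coeffs) * keep_fraction))])
    return [c if i in keep else 0.0 for i, c in enumerate(coeffs)]

# Most of the energy sits in a few coefficients, so masking changes little.
coeffs = [164.0, -51.5, 6.0, -2.1, 1.2, -0.4, 0.2, -0.1]
masked = energy_mask(coeffs, keep_fraction=0.25)
print(masked)
```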


7.3 References

[1] C. M. Bishop, Neural Networks for Pattern Recognition. Oxford: Oxford University Press, 1995.
[2] B. Chalmond and S. Girard, "Nonlinear modeling of scattered multivariate data and its application to shape change," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 5, pp. 422-432, 1999.
[3] R. Chellappa, C. L. Wilson, and S. Sirohey, "Human and machine recognition of faces: A survey," Proceedings of the IEEE, vol. 83, no. 5, pp. 705-740, 1995.
[4] C. Christopoulos, J. Bormans, A. Skodras, and J. Cornelis, "Efficient computation of the two-dimensional fast cosine transform," in SPIE Hybrid Image and Signal Processing IV, pp. 229-237, 1994.
[5] R. Gonzalez and R. Woods, Digital Image Processing. Reading, MA: Addison-Wesley, 1992.
[6] A. Hyvarinen, "Survey on independent component analysis," Neural Computing Surveys, vol. 2, pp. 94-128, 1999; J. Karhunen and J. Joutsensalo, "Generalization of principal component analysis, optimization problems and neural networks," Neural Networks, vol. 8, no. 4, pp. 549-562, 1995.
[7] M. Kirby and L. Sirovich, "Application of the Karhunen-Loeve procedure for the characterization of human faces," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 1, pp. 103-108, 1990.
[8] S. Lawrence, C. Lee Giles, A. Tsoi, and A. Back, "Face recognition: A convolutional neural network approach," IEEE Transactions on Neural Networks, vol. 8, no. 1, pp. 98-113, 1997.
[9] C. Nebauer, "Evaluation of convolutional neural networks for visual recognition," IEEE Transactions on Neural Networks, vol. 9, no. 4, pp. 685-696, 1998.
[10] Z. Pan, R. Adams, and H. Bolouri, "Dimensionality reduction of face images using discrete cosine transforms for recognition," submitted to IEEE Conference on Computer Vision and Pattern Recognition, 2000.
[11] F. Samaria, Face Recognition using Hidden Markov Models. PhD thesis, Cambridge University, 1994.


[12] E. Saund, "Dimensionality-reduction using connectionist networks," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, no. 3, pp. 304-314, 1989.
[13] D. Valentin, H. Abdi, A. O'Toole, and G. Cottrell, "Connectionist models of face processing: A survey," Pattern Recognition, vol. 27, pp. 1209-1230, 1994.
[14] M. S. Bartlett et al., "Face recognition by independent component analysis," IEEE Transactions on Neural Networks, vol. 13, no. 6, pp. 1450-1454, 2002.
[15] P. N. Belhumeur et al., "Eigenfaces vs. fisherfaces: Recognition using class specific linear projection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711-720, 1997.
[16] R. Gottumukkal and V. K. Asari, "An improved face recognition technique based on modular PCA approach," Pattern Recognition Letters, vol. 25, no. 4, 2004.
[17] Z. M. Hafed and M. D. Levine, "Face recognition using the discrete cosine transform," International Journal of Computer Vision, vol. 43, no. 3, 2001.
[18] B. Heisele et al., "Face recognition with support vector machines: Global versus component-based approach," in ICCV, pp. 688-694, 2001.
[19] T. Kanade, "Picture processing by computer complex and recognition of human faces," Technical report, Kyoto Univ., Dept. Inform. Sci., 1973.
