## Friday, 13 December 2013

### MATLAB Implementation of Face Recognition with Principal Component Analysis (PCA)

Face recognition has been an important problem in computer vision and pattern recognition over the last several decades. One difficulty in face recognition is handling variations in expression, pose, and illumination when only a limited number of training samples are available. In this assignment, Principal Component Analysis (PCA) is proposed for facial expression detection.

Fig: Basic methodology of face recognition.

The proposed method was carried out on a picture database built from several photographs of each person at different expressions. These expressions can be classified into discrete classes such as happy, anger, disgust, sad, and neutral, where "neutral" denotes the absence of any expression. The database is kept in the train folder, with one subfolder per person containing all of his or her photographs.

Fig: Face with various countable features.

Principal Component Analysis (PCA) is a way of identifying patterns in data and expressing the data so as to highlight their similarities and differences, which makes it a powerful tool for data analysis. Another main advantage of PCA is that, once these patterns are found, the data can be compressed by reducing the number of dimensions without much loss of information. The same technique is used in image compression, as we will see in a later section.

PCA is most useful when there is a strong correlation between the observed variables. The first principal component is the linear combination of the original dimensions that has the maximum variance; the nth principal component is the linear combination with the highest variance subject to being orthogonal to the first n − 1 principal components. Such components may or may not be directly related to facial features such as the eyes, nose, lips, and hair. A simple approach to extracting the information contained in a face image is to capture the variation in a collection of images, independent of any judgment of features, and use this information to encode and compare individual face images.
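As a concrete illustration of these definitions, the sketch below computes principal components by eigendecomposition of the sample covariance matrix. It uses NumPy rather than MATLAB purely for illustration, and the toy data and variable names are hypothetical, not taken from the post:

```python
import numpy as np

# Toy data: 100 observations of 3 variables, with a strong correlation
# deliberately introduced between the first two columns.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
X[:, 1] = 2 * X[:, 0] + 0.1 * X[:, 1]

Xc = X - X.mean(axis=0)              # centre each variable
C = Xc.T @ Xc / (len(Xc) - 1)        # 3x3 sample covariance matrix
vals, vecs = np.linalg.eigh(C)       # eigenvalues in ascending order
order = np.argsort(vals)[::-1]       # re-sort so variance is descending
vals, vecs = vals[order], vecs[:, order]

# vecs[:, 0] is the first principal component (maximum variance);
# each later component is orthogonal to all earlier ones.
```

Projecting the centred data onto the first eigenvector yields a one-dimensional representation whose variance equals the largest eigenvalue, which is exactly the "maximum variance" property stated above.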

The eigenvectors of the covariance matrix of the training images can be thought of as a set of features that together characterize the variation between face images. Each image location contributes more or less to each eigenvector, so an eigenvector can be displayed as a sort of ghostly face, called an eigenface. Each individual face can be represented exactly as a linear combination of the eigenfaces. Each face can also be approximated using only the "best" eigenfaces, those with the largest eigenvalues, which therefore account for the most variance within the set of face images. The best M eigenfaces span an M-dimensional subspace, the "face space", of all possible images.
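The eigenface computation can be sketched as follows, again in NumPy for illustration (the array names and sizes are assumptions, and random pixels stand in for real images). Taking the SVD of the centred image matrix gives the eigenvectors of the covariance matrix without ever forming the huge pixels-by-pixels matrix explicitly:

```python
import numpy as np

# Stand-in data: 10 flattened 64x64 "images" (random pixels for the sketch).
rng = np.random.default_rng(1)
n_images, n_pixels = 10, 64 * 64
faces = rng.random((n_images, n_pixels))

mean_face = faces.mean(axis=0)
A = faces - mean_face                 # centred image matrix

# Right singular vectors of A are the eigenvectors of A^T A, i.e. the
# covariance matrix up to a constant factor.
U, S, Vt = np.linalg.svd(A, full_matrices=False)
M = 5
eigenfaces = Vt[:M]                   # the M "best" eigenfaces

# Each centred face is approximated by its coordinates in face space:
weights = A @ eigenfaces.T            # n_images x M weight matrix
reconstruction = weights @ eigenfaces + mean_face
```

Keeping only the top M singular vectors is what the text means by retaining the eigenfaces with the largest eigenvalues: the reconstruction from M weights is the best M-dimensional approximation of each face.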

This approach to expression detection involves the following initialization operations:
1. Acquire the initial set of face images (the training set).
2. Calculate the eigenfaces from the training set, keeping only the M images that correspond to the highest eigenvalues. These M images define the face space. As new faces are experienced, the eigenfaces can be updated or recalculated.
3. Calculate the corresponding distribution in M-dimensional weight space for each known individual by projecting his or her face images onto the face space.
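The three steps above, plus a simple nearest-neighbour recognizer in weight space, can be sketched as below. This is an illustrative NumPy version under assumed names and toy data, not the post's MATLAB code:

```python
import numpy as np

# Step 1: training set of 8 flattened images, two per individual (toy data).
rng = np.random.default_rng(2)
train = rng.random((8, 256))
labels = np.array([0, 0, 1, 1, 2, 2, 3, 3])

# Step 2: eigenfaces of the centred training set; keep the M = 4 best.
mean_face = train.mean(axis=0)
A = train - mean_face
U, S, Vt = np.linalg.svd(A, full_matrices=False)
face_space = Vt[:4]

# Step 3: weight-space distribution of each known individual.
train_w = A @ face_space.T

def recognise(image):
    """Project a new image onto the face space and return the nearest label."""
    w = (image - mean_face) @ face_space.T
    dists = np.linalg.norm(train_w - w, axis=1)
    return labels[np.argmin(dists)]
```

A new image is classified by computing its M weights and finding the training face whose weights are closest, which is the standard way the face-space projection from step 3 is used for recognition.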