
Face Recognition

Muhammad Omer,
Mohsin Abbas


[email protected],
[email protected]

University of Management
and Technology, Lahore

Abstract—The eigenfaces
approach for face recognition is implemented as our final project. Face
recognition has been an active area of research with numerous applications
since the late 1980s. The eigenface approach is one of the earliest appearance-based
face recognition methods, developed by M. Turk and A. Pentland [1] in
1991. This method utilizes the idea of the principal component analysis and
decomposes face images into a small set of characteristic feature images called
eigenfaces. Recognition is performed by projecting a new face onto a low
dimensional linear “face space” defined by the eigenfaces, followed by
computing the distance between the resultant position in the face space and
those of known face classes. A number of experiments were done to evaluate the
performance of the face recognition system we have developed. The results
demonstrate that the eigenface approach is quite robust to head/face
orientation, but sensitive to scale and illumination. At the end of the report,
a couple of ways are suggested to improve the recognition rate. The report is
organized as follows: the first part provides an overview of face recognition
algorithms; the second part states the theory of the eigenfaces approach for
face recognition; Part III focuses on implementation issues, such as system
structure design, interface and use of each functional block, etc.; in Part IV,
a number of experiment results are demonstrated for evaluating the system’s
performance under different circumstances; finally, a conclusion is drawn based
on the experiment results, and a couple of possible improvements are suggested.


I.     Introduction

The face plays a major role in our
social intercourse in conveying identity and emotion. The human ability to
recognize faces is remarkable. We can recognize thousands of faces learned
throughout our lifetime and identify familiar faces at a glance even after
years of separation. The skill is quite robust, despite large changes in the
visual stimulus due to viewing conditions, expression, aging, and distractions
such as glasses or changes in hairstyle.

Computational models of faces have been an active area of
research since the late 1980s, for they can contribute not
only to theoretical insights but also to practical applications, such as
criminal identification, security systems, image and film processing, and
human-computer interaction, etc. However, developing a computational model of
face recognition is quite difficult, because faces are complex,
multidimensional, and subject to change over time. 

Generally, there are three phases in
face recognition: face representation, face detection, and face
identification.

Face representation is the first task, that is, deciding how to model a face. The way a face is represented determines the
successive algorithms of detection and identification. For the entry-level
recognition (that is, to determine whether or not the given image represents a
face), a face category should be characterized by generic properties of all
faces; and for the subordinate-level recognition (in other words, which face
class the new face belongs to), detailed features of eyes, nose, and mouth have
to be assigned to each individual face. There are a variety of approaches for
face representation, which can be roughly classified into three categories:
template-based, feature-based, and appearance-based.

The simplest template-matching
approaches represent a whole face using a single template, i.e., a 2-D array of
intensity, which is usually an edge map of the original face image. In a more
complex way of template-matching, multiple templates may be used for each face
to account for recognition from different viewpoints. Another important
variation is to employ a set of smaller facial feature templates that
correspond to eyes, nose, and mouth, for a single viewpoint. The most
attractive advantage of template matching is its simplicity; however, it
suffers from large memory requirements and inefficient matching. In feature-based
approaches, geometric features, such as position and width of eyes, nose, and
mouth, eyebrow’s thickness and arches, face breadth, or invariant moments, are
extracted to represent a face. Feature-based approaches have smaller memory
requirement and a higher recognition speed than template-based ones do. They
are particularly useful for face scale normalization and 3D head model-based
pose estimation. However, perfect extraction of features has been shown to be
difficult in implementation [5]. The idea of appearance-based approaches
is to project face images onto a linear subspace of low dimension. Such a
subspace is first constructed by principal component analysis on a set of
training images, with eigenfaces as its eigenvectors. Later, the concept of
eigenfaces was extended to eigenfeatures, such as eigeneyes, eigenmouth, etc.,
for the detection of facial features [6]. More recently, fisherface space [7]
and illumination subspace [8] have been proposed for dealing with recognition
under varying illumination.

Face identification is
performed at the subordinate level. At this stage, a new face is compared to
the face models stored in a database and classified as a known individual if a
correspondence is found. The performance of face identification is affected by
several factors: scale, pose, illumination, facial expression, and disguise. Disguise
is a particular problem encountered by face recognition in practice. Glasses,
hairstyle, and makeup all change the appearance of a face. Most research work
so far has only addressed the problem of glasses [7].

II.    Calculating Eigenfaces

Let a face
image $\Gamma(x, y)$ be a two-dimensional $N$ by $N$ array
of intensity values. An image may also be considered as a vector of dimension $N^2$, so that a typical image of size 256 by 256 becomes a vector
of dimension 65,536, or equivalently, a point in 65,536-dimensional space. An
ensemble of images, then, maps to a collection of points in this huge space.
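As an illustration, the flattening of an image into such a vector can be sketched in Python (NumPy is assumed; the all-zero array is merely a stand-in for real intensity data):

```python
import numpy as np

# An N-by-N grayscale image becomes a point in N*N-dimensional
# space simply by stacking its rows into a single vector.
N = 256
image = np.zeros((N, N), dtype=np.float64)  # placeholder intensity array
vector = image.flatten()                    # shape (65536,)
restored = vector.reshape(N, N)             # the mapping is invertible
```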

Images of
faces, being similar in overall configuration, will not be randomly distributed
in this huge image space and thus can be described by a relatively low-dimensional
subspace. The main idea of principal component analysis is to
find the vectors that best account for the distribution of face images within
the entire image space. These vectors define the subspace of face images, which
we call “face space”. Each vector is of length $N^2$, describes an $N$ by $N$ image, and is a linear
combination of the original face images. Because these vectors are the
eigenvectors of the covariance matrix corresponding to the original face
images, and because they are face-like in appearance, they are referred to as
“eigenfaces”.
Let the
training set of face images be $\Gamma_1, \Gamma_2, \Gamma_3, \ldots, \Gamma_M$. The average face of the set is defined by $\Psi = \frac{1}{M}\sum_{n=1}^{M}\Gamma_n$. Each face differs from the average by the vector $\Phi_i = \Gamma_i - \Psi$. An example training set is shown in Figure 1a, with the
average face shown in Figure 1b. This set of very large vectors is then
subject to principal component analysis, which seeks a set of $M$
orthonormal vectors $u_k$ which best describes the distribution of the data. The $k$th
vector $u_k$ is chosen such that

$$\lambda_k = \frac{1}{M}\sum_{n=1}^{M}\left(u_k^T \Phi_n\right)^2$$

is a maximum,
subject to

$$u_l^T u_k = \delta_{lk} = \begin{cases}1 & \text{if } l = k\\ 0 & \text{otherwise.}\end{cases}$$

The vectors $u_k$ and scalars $\lambda_k$ are the eigenvectors and eigenvalues, respectively, of the
covariance matrix

$$C = \frac{1}{M}\sum_{n=1}^{M}\Phi_n \Phi_n^T = AA^T,$$

where the
matrix $A = [\Phi_1\ \Phi_2\ \cdots\ \Phi_M]$. The matrix $C$, however, is $N^2$ by $N^2$, and determining the $N^2$ eigenvectors and
eigenvalues is an intractable task for typical image sizes. A computationally
feasible method is needed to find these eigenvectors.

If the number
of data points in the image space is less than the dimension of the space ($M < N^2$), there will be only $M - 1$, rather than $N^2$, meaningful eigenvectors (the remaining eigenvectors will
have associated eigenvalues of zero). Fortunately, we can solve for the $N^2$-dimensional eigenvectors in this case by first solving for
the eigenvectors of an $M$ by $M$ matrix—e.g., solving a 16 x 16
matrix rather than a 16,384 x 16,384 matrix—and then taking appropriate linear
combinations of the face images $\Phi_i$. Consider the eigenvectors $v_i$ of $A^T A$ such that

$$A^T A v_i = \mu_i v_i.$$

Premultiplying both sides by $A$, we have

$$A A^T A v_i = \mu_i A v_i,$$

from which we
see that $A v_i$ are the eigenvectors of $C = A A^T$.

Following this
analysis, we construct the $M$ by $M$ matrix $L = A^T A$, where $L_{mn} = \Phi_m^T \Phi_n$, and find the $M$ eigenvectors $v_l$ of $L$. These vectors determine linear combinations of
the $M$ training set face images to form the eigenfaces $u_l$:

$$u_l = \sum_{k=1}^{M} v_{lk} \Phi_k, \qquad l = 1, \ldots, M.$$

With this
analysis the calculations are greatly reduced, from the order of the number of
pixels in the images ($N^2$) to the order of the number of images in the training set ($M$).
In practice, the training set of face images will be relatively small ($M \ll N^2$), and the calculations become quite manageable. The
associated eigenvalues allow us to rank the eigenvectors according to their
usefulness in characterizing the variation among the images.
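The construction above can be sketched in Python. This is an illustrative implementation assuming NumPy; the function name `compute_eigenfaces` and the tiny epsilon guard are choices made here, not taken from the original code:

```python
import numpy as np

def compute_eigenfaces(images):
    """Compute eigenfaces from a list of flattened face images.

    Instead of eigendecomposing the huge N^2-by-N^2 covariance matrix
    C = A A^T, we decompose the small M-by-M matrix L = A^T A and map
    its eigenvectors v_i back into image space via u_i = A v_i.
    """
    A = np.stack(images, axis=1).astype(np.float64)  # columns are Gamma_i
    mean_face = A.mean(axis=1, keepdims=True)        # the average face Psi
    A = A - mean_face                                # columns become Phi_i
    L = A.T @ A                                      # M x M, L_mn = Phi_m . Phi_n
    eigvals, V = np.linalg.eigh(L)                   # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]                # most useful eigenfaces first
    eigvals, V = eigvals[order], V[:, order]
    U = A @ V                                        # u_l = sum_k v_lk Phi_k
    U = U / (np.linalg.norm(U, axis=0, keepdims=True) + 1e-12)  # unit length
    return mean_face.ravel(), U, eigvals
```

Because the $v_i$ are orthonormal and $u_i^T u_j = v_i^T L v_j = \mu_j \delta_{ij}$, the normalized eigenfaces come out orthonormal wherever the eigenvalues are nonzero.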

III.   Procedure

The eigenfaces
approach for face recognition is summarized as follows:

Collect a set of characteristic face images of the
known individuals. This set should include a number of images for each person,
with some variation in expression and in the lighting (say four images of ten
people, so M=40).

Calculate the (40 x 40) matrix L, find its
eigenvectors and eigenvalues, and choose the M’ eigenvectors with the
highest associated eigenvalues (let M’=10 in this example).

Combine the normalized training set of images
according to Eq. (6) to produce the (M’=10) eigenfaces $u_l$.

For each known individual, calculate the class
vector $\Omega_k$ by averaging the
eigenface pattern vectors $\Omega$ from Eq. (8)
calculated from the original (four) images of the individual. Choose a
threshold $\theta_\varepsilon$ that defines the maximum allowable distance from any face
class, and a threshold $\theta_\delta$ that defines the
maximum allowable distance from face space according to Eq. (9).

For each new face image to be identified,
calculate its pattern vector $\Omega$, the distance $\varepsilon_k$ to each known class,
and the distance $\varepsilon$ to face space. If the
minimum distance $\varepsilon_k < \theta_\varepsilon$ and the distance $\varepsilon < \theta_\delta$, classify the input face as the individual associated with
class vector $\Omega_k$. If the minimum distance $\varepsilon_k > \theta_\varepsilon$ but $\varepsilon < \theta_\delta$, then the image may be classified as “unknown”, and
optionally used to begin a new face class.

If the new image is classified as a known
individual, this image may be added to the original set of familiar face
images, and the eigenfaces may be recalculated (steps 1-4). This gives the
opportunity to modify the face space as the system encounters more instances of
known faces.
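The projection and thresholding of step 5 can be sketched as follows. The function name `classify_face`, the threshold parameter names, and the dictionary of class vectors are illustrative assumptions, not the project's actual interface:

```python
import numpy as np

def classify_face(face, mean_face, eigenfaces, class_vectors,
                  theta_class, theta_space):
    """Project a new face onto face space and classify it.

    `eigenfaces` is an (N^2 x M') matrix with orthonormal columns;
    `class_vectors` maps a person's name to that person's averaged
    pattern vector; the two thresholds are assumed tuned beforehand.
    """
    phi = face - mean_face                    # subtract the average face
    omega = eigenfaces.T @ phi                # pattern vector Omega
    reconstruction = eigenfaces @ omega       # projection back into face space
    dist_space = np.linalg.norm(phi - reconstruction)  # distance to face space
    if dist_space > theta_space:
        return "not a face"
    # distance to each known face class
    dists = {name: np.linalg.norm(omega - w) for name, w in class_vectors.items()}
    best = min(dists, key=dists.get)
    return best if dists[best] <= theta_class else "unknown"
```

A face far from face space is rejected outright; a face near face space but far from every class vector is reported as "unknown", matching the two threshold tests in step 5.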

IV.   Experiment Results

The face recognition system was tested using a set of face images downloaded from
the MIT Media Lab server [11]. All the training and testing images are
grayscale images of size 120×128. There are 16 persons in the face image
database, each having 27 distinct pictures taken under different conditions (illuminance,
head tilt, and head scale).

The training images are chosen to be those of full
head scale, with head-on lighting, and upright head tilt. The initial training
set consists of 12 face images of 12 individuals, i.e., one image per
individual (M=12). These training images are shown in Figure 3a. Figure 3b is
the average image of the training set.

Figure 3a. Initial Training Images

Figure 3b. Average Face of Initial Training Set

After principal
component analysis, M’=11 eigenfaces are constructed based on the M=12
training images. The eigenfaces are demonstrated in Figure 4. The associated
eigenvalues of these eigenfaces are 119.1, 135.0, 173.9, 197.3, 320.3, 363.6,
479.8, 550.0, 672.8, 843.5, 1281.2, in ascending order. The eigenvalues determine the
“usefulness” of these eigenfaces. We keep all the eigenfaces with nonzero
eigenvalues in our experiment, since the size of our training set is small.
However, to make the system cheaper and quicker, we can ignore those eigenfaces
with relatively small eigenvalues.
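One hedged way to decide how many eigenfaces to discard is to rank them by eigenvalue and keep enough to cover a chosen fraction of the total variance. The sketch below uses the eigenvalues reported above; the 90% cutoff is purely illustrative, not a value used by the project:

```python
import numpy as np

# The eigenvalues reported above, in ascending order.
eigvals = np.array([119.1, 135.0, 173.9, 197.3, 320.3, 363.6,
                    479.8, 550.0, 672.8, 843.5, 1281.2])

# Rank eigenfaces by descending eigenvalue and find how many of the
# "most useful" ones are needed to capture 90% of the total variance.
ranked = np.sort(eigvals)[::-1]
cumulative = np.cumsum(ranked) / ranked.sum()
m_prime = int(np.searchsorted(cumulative, 0.90) + 1)
```

Dropping the eigenfaces past `m_prime` shrinks every pattern vector and speeds up the distance computations, at some cost in discriminative power.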



Figure 4. Eigenfaces

The performance
of the eigenfaces approach under different conditions is studied as follows.

Recognition with
different head tilts:

The robustness of
the eigenfaces recognition algorithm to head tilt is studied by testing two face
images of each person in the training set, with different head
tilts—either left-oriented or right-oriented, as shown in Figure 5.


Figure 5. Training image and test images with
different head tilts.

a. training image; b. test image 1; c. test image 2

If the system
correctly relates the test image with its correspondence in the training set,
we say it conducts a true-positive identification (Figures 6 and 7); if
the system relates the test image with a wrong person (Figure 8), or if the
test image is from an unknown individual while the system recognizes it as one
of the persons in the database, a false-positive identification is
performed; if the system identifies the test image as unknown while there does
exist a correspondence between the test image and one of the training images,
the system conducts a false-negative detection.
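These three outcome categories can be tallied with a small helper. The function `tally_results` and its inputs are hypothetical names chosen for illustration, following the definitions above:

```python
def tally_results(predictions, truths, known_people):
    """Count true-positive, false-positive, and false-negative
    identifications. `truths` holds the actual identity of each test
    image ("unknown" if the person is not in the database);
    `predictions` holds the system's answer for the same image.
    """
    tp = fp = fn = 0
    for pred, truth in zip(predictions, truths):
        if pred == truth and truth in known_people:
            tp += 1                 # correct match to the right person
        elif pred in known_people:  # matched someone, but wrongly,
            fp += 1                 # or the person is not in the database
        elif truth in known_people: # known person reported as unknown
            fn += 1
    return tp, fp, fn
```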

The experiment
results are summarized in Table 1:

TABLE I.            Recognition with different head tilts

Number of test images | Number of true-positive identifications | Number of false-positive identifications | Number of false-negative identifications

Figure 6 (irfan). Recognition with
different head tilts—success!

a. test image 1; b. test image 2; c. training image

Figure 7 (david). Recognition with
different head tilts—success!

a. test image; b. training image

Figure 8 (foof). Recognition with different
head tilts—false!

a. test image 1; b. training image (irfan)
returned by the face recognition system;

c. test image 2; d. training image (stephen)
returned by the program

Recognition with varying illuminance:

Each training image (with head-on lighting) has two corresponding test images—one
with light moved by 45 degrees and the other with light moved by 90 degrees.
Other conditions, such as head scale and tilt, remain the same as in the
training image. The experiment results are shown in Table 2.

TABLE II.           Recognition with varying illuminance

Number of test images | Number of true-positive identifications | Number of false-positive identifications | Number of false-negative identifications

Figure 9 shows
the difference between the training image and test images.

Figure 9. Training image and test images with
varying illuminance.

a. training image; b. test image 1: light moved by
45 degrees; c. test image 2: light moved by 90 degrees

A true-positive
example is demonstrated in Figure 10, and a false-negative one is shown in
Figure 11.

Figure 10 (stan). Recognition with varying illuminance—success!

a. test image, light moved by 45 degrees; b.
training image, head-on lighting

Figure 11 (ming). Recognition with varying illuminance—false!

a. test image, light moved by 90 degrees; b.
training image (foof) returned by the system, head-on lighting

Recognition with varying head scale:

Each training
image (with full head scale) has two corresponding test images—one with a
medium head scale and the other with a small one, as shown in Figure 12. Other
conditions, such as lighting and head tilt, remain the same as in the training
image. The experiment results are shown in Table 3.

TABLE III.          Recognition with varying head scale

Number of test images | Number of true-positive identifications | Number of false-positive identifications | Number of false-negative identifications

Figure 12. Training image and test images with
varying head scale.

a. training image; b. test image 1: medium head
scale; c. test image 2: small head scale

Figures 13 and
14 illustrate a true-positive example and a false-positive one respectively.

Figure 13 (stan). Recognition with varying head scale—success!

a. test image 1, medium scale; b. test image 2, small
scale; c. training image, full scale

Figure 14 (pascal). Recognition with varying head scale—false!

a. test image, medium scale; b. training image (robert),
full scale

Result summary:

From the experiments performed, a fairly
good recognition rate (17/24) is obtained with varying illuminance, an
acceptable rate (11/24) with different head tilts, and a poor one (7/24)
with varying head scale. However, there is a long way to go before we can
confidently draw a conclusion on the robustness/sensitivity of the eigenfaces
recognition approach to these conditions. A larger case study needs to be carried
out, in the sense that: (1) a larger training set is required, consisting of a large
group of people, each having several face images in the database; (2) numerous
tests are necessary, with face images of people who are or aren’t in the
database; (3) the system's performance under combinations of condition
changes, e.g., simultaneous changes in head tilt and illuminance, should be studied.

The thresholding issue is not addressed in
[1]. Yet it does affect the performance of the algorithm. A larger threshold
value leads to a lower false-negative rate but a higher false-positive rate, and
vice versa. In other words, a good choice of threshold value can balance the
false-negative and false-positive rates, and thus maximize the recognition rate.
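The trade-off can be made concrete by sweeping the class-distance threshold over precomputed distances. The function and variable names below are illustrative, not part of the original system:

```python
def sweep_threshold(dists_known, dists_unknown, thresholds):
    """Illustrate the threshold trade-off. `dists_known` holds the
    minimum class distances for test faces that ARE in the database,
    `dists_unknown` for faces that are not. Raising the threshold
    removes false negatives but admits more unknown faces.
    """
    rows = []
    for t in thresholds:
        fn = sum(d > t for d in dists_known)     # known face rejected
        fp = sum(d <= t for d in dists_unknown)  # unknown face accepted
        rows.append((t, fn, fp))
    return rows
```

Scanning the returned rows for the threshold that balances the two error counts is one simple way to pick an operating point.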

V.    Conclusion

The eigenfaces-based face recognition approach was implemented in Python. This
method represents a face by projecting original images onto a low-dimensional
linear subspace, the ‘face space’, defined by eigenfaces. A new face is compared to
known face classes by computing the distance between their projections onto
face space. This approach was tested on a number of face images downloaded from
[11]. Fairly good recognition results were obtained.

One of the
major advantages of the eigenfaces recognition approach is its ease of
implementation. Furthermore, no knowledge of the geometry or specific features of the
face is required; and only a small amount of preprocessing work is needed
for any type of face image.

However, a few
limitations are demonstrated as well. First, the algorithm is sensitive to head
scale. Second, it is applicable only to front views. Third, as is addressed in
[1] and much other face recognition literature, it demonstrates good
performance only under controlled backgrounds, and may fail in natural scenes.


To improve the
performance of the eigenface recognition approach, a couple of things can be
done:

(1) To reduce the false-positive rate, we can make the
system return a number of candidates from the existing face classes instead of
a single face class, and leave the remaining work to a human.

(2) Regarding
the pattern vector representing a face class, we can make each face class
consist of several pattern vectors, each constructed from a face image of the
same individual under a certain condition, rather than taking the average of
these vectors to represent the face class.
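Suggestion (2) can be sketched as follows, where `distance_to_class` is a hypothetical helper contrasting the current averaged class vector with the proposed minimum distance over per-image pattern vectors:

```python
import numpy as np

def distance_to_class(omega, class_patterns, average=False):
    """Distance from a new pattern vector `omega` to one individual.

    `class_patterns` holds one pattern vector per training image of
    that individual (one per condition).
    """
    patterns = np.asarray(class_patterns, dtype=np.float64)
    if average:
        # current scheme: one averaged class vector per individual
        return float(np.linalg.norm(omega - patterns.mean(axis=0)))
    # proposed scheme: minimum distance over per-image pattern vectors
    return float(np.linalg.norm(omega - patterns, axis=1).min())
```

When a test image closely matches one particular training condition, the minimum-distance variant can accept it even though the averaged class vector sits far away.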


References

[1]    M. Turk and A. Pentland, “Eigenfaces for recognition”, Journal of Cognitive Neuroscience, vol. 3, no. 1, 1991.

[2]    M. Turk and A. Pentland, “Face recognition using eigenfaces”, Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pages 586-591, 1991.

[3]    A. Pentland and T. Choudhury, “Face recognition for smart environments”, Computer, vol. 33, no. 2, Feb. 2000.

[4]    R. Brunelli and T. Poggio, “Face recognition: Features versus templates”, IEEE Trans. Pattern Analysis and Machine Intelligence, 15(10): 1042-1052, 1993.

[5]    R. Chellappa, C. L. Wilson, and S. Sirohey, “Human and machine recognition of faces: A survey”, Proc. of the IEEE, volume 83, pages 705-740, 1995.

[6]    P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, “Eigenfaces vs. fisherfaces: Recognition using class specific linear projection”, IEEE Trans. Pattern Analysis and Machine Intelligence, 19(7): 711-720, 1997.

[7]    A. S. Georghiades, D. J. Kriegman, and P. N. Belhumeur, “Illumination cones for recognition under variable lighting: Faces”, Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pages 52-59, 1998.

[8]    S. A. Sirohey, “Human face segmentation and identification”, Technical Report CAR-TR-695, Center for Automation Research, University of Maryland, College Park, MD, 1993.

[9]    A. Samal and P. A. Iyengar, “Automatic recognition and analysis of human faces and facial expressions: A survey”, Pattern Recognition, 25(1): 65-77, 1992.

[10]  L. Sirovich and M. Kirby, “Low dimensional procedure for the characterization of human faces”, Journal of the Optical Society of America A, 4(3): 519-524, 1987.

[11]  ftp://whitechapel.media.mit.edu/pub/images/