
2014 First International Conference on Computational Systems and Communications (ICCSC) | 17-18 December 2014 | Trivandrum

A Novel Approach For Multimodal Face Recognition System Based on Modular PCA

Parvathy S. B.
Department of ECE, LBSITW, Poojappura, Trivandrum, India
parvathysb@gmail.com

Naveen S.
Department of ECE, LBSITW, Poojappura, Trivandrum, India
nsnair_1176@yahoo.com

R. S. Moni
Department of ECE, Marian Engineering College, Trivandrum, India
moni2006rs@gmail.com

Abstract: An efficient face recognition system should recognize faces in different views and poses. The efficiency of a face recognition system depends on its capability to recognize faces in the presence of changes in appearance due to expression, pose and illumination. A novel algorithm is proposed that combines texture and depth information using Modular PCA to overcome the problems of pose variation and illumination change in face recognition. The system combines the 2D and 3D modalities at the feature level, which gives higher performance than methods that use either the 2D or the 3D system separately. Compared with the conventional PCA algorithm, multimodal face recognition based on Modular PCA has an improved recognition rate for face images with large variations in illumination and facial expression. The proposed algorithm is tested on the FRAV3D database, which contains faces with pose variation and illumination changes. Recognition rates from the experimental results show the superiority of Modular PCA over conventional PCA in tackling face images with different pose variations and changes in illumination. The proposed algorithm achieves a recognition rate of 86% in the fusion experiment.

Keywords: Face Recognition, Texture image, Depth information, PCA, Modular PCA

I. INTRODUCTION
Face recognition is one of the attractive methods among biometric systems, because it provides a good trade-off between reliability and social acceptance. There are five major problems in face recognition which affect the performance of a system: 1) illumination variations, 2) pose changes, 3) expression variations, 4) time delay, and 5) occlusions [1]. One of the main challenges is pose variation. Algorithms proposed for handling pose variation are divided into two main categories according to the type of gallery images they use [1]. First, multi-view face
recognition systems (FRS), which require several poses for every subject in the gallery. The second category identifies probe
faces whose poses differ from those of the gallery faces. In real situations, a frontal face is available in the gallery but probe faces have unpredicted pose variations, so the system must be robust to these situations. In [2], after a pose estimation step, a
template-based correlation-matching scheme is used to
align the probe image with the candidate pose in the gallery
images. In [3], a framework for recognizing faces with large
3D pose variations has been presented which applies a
parametric linear subspace model for representing each known
person in the gallery. In [4], a 3D face recognition method has
been proposed. This method segments the convex regions in
the range images based on the sign of the mean and the
Gaussian curvatures. In the 3D face recognition method
proposed by [5], an iterative closest point (ICP) approach was
used to match face surfaces. Face recognition algorithms
which combine 2D and 3D data have been recently proposed.
In [6], it is shown that combining 2D and 3D results by using
a simple weighting scheme outperforms either 2D or 3D alone.
Texture information is more efficient than depth information for face recognition. However, texture information is more sensitive to illumination and pose variation, and recognition mostly fails in environments with illumination changes. Some algorithms utilize both depth and texture information to enhance the accuracy of face recognition. However, in such algorithms the 3D information is only used for the estimation of face rotation; the rotation-compensated 2D images are then used for face recognition.
The main idea of this work is to improve the recognition rate for face images subject to variations in face orientation, head pose, illumination and so on. Principal component analysis (PCA) has been widely accepted as a popular technique in facial image recognition. However, the technique is not highly accurate when the illumination, orientation and pose of the facial images vary considerably. Modular PCA was later proposed by Gottumukkal and Asari [7] as an extension of the conventional PCA method. The
recognition rate is observed to increase with this method, while complexity and memory utilization are said to be reduced by a noticeable amount. In this paper a variation of the Modular PCA is proposed, where the scheme is applied to both 2D and 3D data and the final result is obtained by an intelligent score fusion technique. In the traditional PCA method the entire face image is considered, hence a large variation in pose [8] or illumination [9] will affect the recognition rate profoundly. Since in the Modular PCA method the original face image is divided into sub-images, variations in pose, orientation or illumination in the image will affect only some of the sub-images; hence we expect this method to have a better recognition rate than conventional PCA.
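To make the sub-image idea concrete, the following is a minimal Python/NumPy sketch (illustrative only; the paper's implementation is in MATLAB). The function name split_into_subimages and the 64 x 64 example image are assumptions made for this example.

```python
import numpy as np

def split_into_subimages(image, n_blocks):
    """Divide a square L x L face image into n_blocks equal sub-images.

    n_blocks must be a perfect square (e.g. 4, 16), so each sub-image
    has size (L / sqrt(n_blocks)) x (L / sqrt(n_blocks)).
    """
    L = image.shape[0]
    k = int(np.sqrt(n_blocks))          # blocks per row/column
    assert image.shape == (L, L) and k * k == n_blocks and L % k == 0
    s = L // k                          # side length of each sub-image
    subimages = [image[r * s:(r + 1) * s, c * s:(c + 1) * s]
                 for r in range(k) for c in range(k)]
    return np.stack(subimages)          # shape: (n_blocks, s, s)

# Example: a change confined to one corner of the face (e.g. a local
# illumination change) alters only one of the 16 sub-images.
face = np.zeros((64, 64))
lit_face = face.copy()
lit_face[:16, :16] += 1.0               # local illumination change
changed = np.abs(split_into_subimages(lit_face, 16)
                 - split_into_subimages(face, 16)).sum(axis=(1, 2)) > 0
print(changed.sum(), "of 16 sub-images affected")   # prints: 1 of 16 ...
```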

II. FACE DATABASE

One of the most complete face databases currently available is FRAV3D [10], which contains face images of 106 different persons in 16 different poses. The database contains face images with texture information along with 3D geometrical face information in VRML format. Fig. 1 shows the different poses for a typical person in the FRAV3D database. Poses 1 to 4 are frontal views of the face with closed and open eyes. In poses 5 to 8, the face has different rotations around the Y axis. Poses 9 and 10 have rotation around the Z axis, and poses 13 and 14 have rotation around the X axis. Poses 11 and 12 are a smiling face and a face with open mouth, respectively. Poses 15 and 16 are frontal views of the face under different lighting conditions.

Fig. 1. Different poses for a person in the FRAV3D face database.

III. PROPOSED METHOD

We used the combination of two-dimensional (texture) information and three-dimensional (depth) information for face recognition. Fig. 2 shows the general block diagram of the method. In the proposed method, a 3D face image in point cloud format is taken and projected onto a 2D plane. Modular PCA is then applied to both the texture and the depth images. This provides features that are robust to pose and illumination. The same algorithm is applied to the texture and depth gallery images as well. Template matching is then done using the extracted features. Finally, the matching scores are fused by means of a weight parameter which is obtained during a training process.

Fig. 2. General block diagram of our system for multimodal face recognition.

A. MODULAR PCA IN TEXTURE
The PCA-based face recognition method is not very effective under conditions of varying pose and illumination, since it considers the global information of each face image and represents it with a set of weights. Under these conditions the weight vectors will differ considerably from the weight vectors of images with normal pose and illumination, hence it is difficult to identify the faces correctly. On the other hand, if the face images are divided into smaller regions and the weight vectors are computed for each of these regions, then the weights will be more representative of the local information of the face. When there is a variation in pose or illumination, only some of the face regions will vary and the rest of the regions will remain the same as the face regions of a normal image. Hence the weights of the face regions not affected by varying pose and illumination will closely match the weights of the same individual's face regions under
normal conditions. Therefore it is expected that improved recognition rates could be obtained by following the Modular PCA approach.
The algorithm for Modular PCA is as follows:

Step 1: M is the number of training images and N is the number of sub-images (each image in the training set is divided into N smaller images). Each sub-image is represented as:

I_{ij}(m,n) = I_i\left(\frac{L}{\sqrt{N}}\left\lfloor\frac{j-1}{\sqrt{N}}\right\rfloor + m,\; \frac{L}{\sqrt{N}}\big((j-1)\bmod\sqrt{N}\big) + n\right)    (1)

where 1 ≤ i ≤ M, 1 ≤ j ≤ N and 1 ≤ m, n ≤ L/√N (the size of each sub-image is (L/√N) × (L/√N)).

Step 2: The average sub-image is computed as:

A = \frac{1}{M\,N}\sum_{i=1}^{M}\sum_{j=1}^{N} I_{ij}    (2)

where 1 ≤ i ≤ M, 1 ≤ j ≤ N.

Step 3: Each training sub-image is normalized by subtracting the average image:

Y_{ij} = I_{ij} - A, where 1 ≤ i ≤ M, 1 ≤ j ≤ N.

Step 4: The covariance matrix is computed from the normalized sub-images (arranged as column vectors) as:

C = \frac{1}{M\,N}\sum_{i=1}^{M}\sum_{j=1}^{N} Y_{ij}\,Y_{ij}^{T}    (3)

where 1 ≤ i ≤ M, 1 ≤ j ≤ N.

Step 5: The eigenvectors E_K of C associated with the M' largest eigenvalues are computed.

Step 6: The image data is reconstructed from these eigenvectors.

Step 7: Weights are computed by projecting the training sub-images as well as the test sub-images onto the eigenvectors. For training sub-images:

W_{pnjK} = E_K^{T}\,(I_{pnj} - A)    (4)

where K takes the values 1, 2, ..., M', n varies from 1 to Γ, Γ being the number of images per individual, and p varies from 1 to P, P being the number of individuals in the training set. For test sub-images:

W^{test}_{jK} = E_K^{T}\,(I^{test}_{j} - A)    (5)

Step 8: The mean weight set of each class in the training set is computed from the weight sets of that class:

\bar{W}_{pjK} = \frac{1}{\Gamma}\sum_{n=1}^{\Gamma} W_{pnjK}    (6)

Step 9: The minimum distance between the test weights and the mean weights of each class is computed:

D_p = \sum_{j=1}^{N}\sum_{K=1}^{M'} \left| W^{test}_{jK} - \bar{W}_{pjK} \right|    (7)

D = \min_{p} D_p    (8)

If D_p is minimum for a particular value of p, the corresponding face class in the training set is the closest one to the test image. Hence the test image is recognized as belonging to the pth face class.

Fig. 3. Image segmentation in virtual level for a texture image.

B. MODULAR PCA IN DEPTH

Here we use depth images for feature extraction. The algorithm is the same as described above for the texture images.

Fig. 4. Depth images of different persons in the FRAV3D face database.

Fig. 5. Image segmentation in virtual level for a depth image.
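As a concrete illustration of Steps 1-5 and 7-9 above (Step 6, the reconstruction of the image data, is omitted), the following is a minimal sketch in Python/NumPy rather than the authors' MATLAB implementation. It assumes square grayscale images, a single eigenspace shared by all sub-image positions, and nearest-mean-weight matching; the function names and the gallery dictionary format are assumptions made for this example.

```python
import numpy as np

def split(image, k):
    """Split an L x L image into k*k sub-images, flattened to vectors."""
    L = image.shape[0]
    s = L // k
    return np.stack([image[r*s:(r+1)*s, c*s:(c+1)*s].ravel()
                     for r in range(k) for c in range(k)])      # (k*k, s*s)

def train_modular_pca(gallery, k=4, m_prime=20):
    """gallery: dict {person_id: list of L x L images}. Returns a model."""
    blocks = [split(img, k) for imgs in gallery.values() for img in imgs]
    X = np.vstack(blocks)                       # all sub-images as rows
    A = X.mean(axis=0)                          # Step 2: average sub-image
    Y = X - A                                   # Step 3: normalisation
    C = Y.T @ Y / len(Y)                        # Step 4: covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)
    E = eigvecs[:, np.argsort(eigvals)[::-1][:m_prime]]   # Step 5: top M'
    # Steps 7-8: mean weight set per class (one weight vector per sub-image)
    mean_weights = {}
    for pid, imgs in gallery.items():
        W = np.stack([(split(img, k) - A) @ E for img in imgs])  # (Γ, k*k, M')
        mean_weights[pid] = W.mean(axis=0)
    return {"A": A, "E": E, "k": k, "mean_weights": mean_weights}

def match_distance(model, test_image, pid):
    """Step 9: distance between a test image and one gallery class."""
    W_test = (split(test_image, model["k"]) - model["A"]) @ model["E"]
    return np.abs(W_test - model["mean_weights"][pid]).sum()

def identify(model, test_image):
    """Recognise the test image as the class with minimum distance D_p."""
    dists = {pid: match_distance(model, test_image, pid)
             for pid in model["mean_weights"]}
    return min(dists, key=dists.get), dists
```

The same sketch applies unchanged to depth images, since the depth channel is processed with the identical Modular PCA pipeline.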


C. FUSION OF TEXTURE AND DEPTH
The 2D and 3D face information is combined at the feature level: the Modular PCA outputs of the 2D and 3D information of the same face image are intelligently combined to get a unique entity. The combination of the 2D and 3D systems is performed by a weight factor w. D_2D and D_3D are the Euclidean norms (matching distances) resulting from the 2D and 3D systems, respectively, and the fused distance is

D_fused = D_2D + w · D_3D    (9)

The value of w is obtained by a training process: we start from a small value of w and gradually increase it so as to obtain the maximum recognition rate.
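A minimal sketch of this fusion and of the training of w is given below (Python, illustrative only). It assumes the fused distance has the weighted form written above as Eq. (9); the helper structure val_pairs holding per-probe texture and depth distances, and the function names, are assumptions made for the example.

```python
def fused_distance(d_texture, d_depth, w):
    # Assumed form of Eq. (9): weighted combination of the 2D and 3D distances.
    return d_texture + w * d_depth

def select_weight(val_pairs, true_ids, gallery_ids, candidate_ws):
    """Pick the w giving the highest recognition rate on a validation set.

    val_pairs: list of dicts mapping gallery_id -> (d_texture, d_depth)
               for each validation probe; true_ids: the correct identity
               of each probe. Both are assumed to come from the Modular
               PCA texture and depth matchers sketched above.
    """
    best_w, best_rr = None, -1.0
    for w in candidate_ws:                       # gradually increase w
        correct = 0
        for dists, true_id in zip(val_pairs, true_ids):
            fused = {pid: fused_distance(*dists[pid], w) for pid in gallery_ids}
            if min(fused, key=fused.get) == true_id:
                correct += 1
        rr = correct / len(true_ids)
        if rr > best_rr:
            best_w, best_rr = w, rr
    return best_w, best_rr

# Example: sweep the candidate weights used in Table 3.
# best_w, best_rr = select_weight(val_pairs, true_ids, gallery_ids, [1, 25, 75, 150])
```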

Table 1. Variation in recognition rates and false alarms using texture only, depth only and fusion of texture and depth for Modular PCA.

IV. RESULTS AND DISCUSSION

A. Experimental Results
The proposed algorithm is implemented using MATLAB and tested on the FRAV3D face database. As mentioned before, the database contains face images of 106 different persons in 16 different poses. We used one of the poses (frontal view) as the gallery and the other 15 poses for testing. The gallery therefore contains 106 face images, and the test set contains 100 x 15 = 1500 samples. We tested the proposed algorithm with different values of w. Table 1 shows the true acceptance rate (TAR), true rejection rate (TRR), false acceptance rate (FAR) and false rejection rate (FRR) for the proposed method. The evaluation parameters recognition rate (RR) and false alarm (FA) used in Table 1 are defined as follows:

RR = (number of correctly recognized faces) / (total number of test faces)
FA = (number of falsely recognized faces) / (total number of test images)
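These two rates can be computed directly from the identification outcomes; a small illustrative sketch (variable names assumed) is:

```python
def recognition_rate(predicted_ids, true_ids):
    correct = sum(p == t for p, t in zip(predicted_ids, true_ids))
    return correct / len(true_ids)      # RR: fraction correctly recognized

def false_alarm_rate(predicted_ids, true_ids):
    wrong = sum(p != t for p, t in zip(predicted_ids, true_ids))
    return wrong / len(true_ids)        # FA: fraction falsely recognized
```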

Table 2. Variation in recognition rates and false alarms using texture only, depth only and fusion of texture and depth for PCA.

Fig. 6. Recognition rate (%) for different face recognition algorithms. From left to right: 1 - proposed algorithm (fusion of texture and depth, 86%); 2 - proposed algorithm using only texture (82.13%); 3 - proposed algorithm using only depth (74.53%); 4 - PCA (71.45%).


Fig. 7. Variation in RR with number of test samples and number of training images, considering texture information, for PCA.

Fig. 8. Variation in RR with number of test samples and number of training images, considering depth information, for PCA.

Fig. 9. Variation in RR with number of test samples and number of training images, considering texture and depth information, for PCA.

Fig. 10. Variation in RR with number of test samples and number of training images, considering texture information, for Modular PCA.

Fig. 11. Variation in RR with number of test samples and number of training images, considering depth information, for Modular PCA.

Fig. 12. Variation in RR with number of test samples and number of training images, considering texture and depth information, for Modular PCA.


Table 3. Recognition rates for (a) PCA and (b) Modular PCA for various values of w.

(a) PCA recognition rate
w     Texture   Depth    Fusion of texture and depth
1     0.701     0.672    0.6982
25    0.724     0.689    0.7145
75    0.743     0.712    0.7356
150   0.689     0.641    0.6623

(b) Modular PCA recognition rate
w     Texture   Depth    Fusion of texture and depth
1     0.7615    0.712    0.7824
25    0.8213    0.745    0.8456
75    0.8434    0.762    0.8673
150   0.7314    0.701    0.7123

As shown in Table 3, the best results for the proposed algorithm are obtained for w = 75. To compare the results of the proposed algorithm with other methods, we implemented the face recognition algorithm using conventional PCA on texture, depth and the fusion of the two. Fig. 6 shows the recognition rate on the FRAV3D database for Modular PCA and PCA. For comparison, the results of recognition using the proposed algorithm with only texture or only depth information are also shown in this figure. As the figure shows, the results of the proposed algorithm are better than those of the existing methods.

B. Identification Results
In this step, the system was examined using faces that have pose variation in the right, left, up and down directions. The recognition rates for the multimodal system using Modular PCA and conventional PCA are compared in Figs. 6-11. As shown in these figures, it is obvious that the multimodal system using Modular PCA has better performance than the one using conventional PCA.

V. CONCLUSION
A multimodal face recognition system based on Modular PCA is proposed in this paper. The system combines a 2D FRS and a 3D FRS at the decision level and has shown higher performance than systems using either FRS separately. In the present study, a sufficient investigation is done on the pose variation problem in face recognition. We tested the proposed algorithm on the FRAV3D database and the results showed that the proposed algorithm gives promising results.

VI. REFERENCES
[1] A. F. Abate, M. Nappi, D. Riccio, G. Sabatino, "2D and 3D face recognition: A survey", Pattern Recognition Letters, pp. 1885-1906, 2007.
[2] D. J. Beymer, "Face recognition under varying pose", A.I. Memo No. 1461, December 1993.
[3] K. Okada, C. von der Malsburg, "Pose-invariant face recognition with parametric linear subspaces", in: Fifth IEEE Internat. Conf., 2002.
[4] J. C. Lee, E. Milios, "Matching range images of human faces", Int. Conf. on Computer Vision, pp. 722-726, 1990.
[5] G. Medioni, R. Waupotitsch, "Face recognition and modeling in 3D", in: Proceedings of IEEE International Workshop on Analysis and Modeling of Faces and Gestures, pp. 232-236, 2003.
[6] K. Chang, K. Bowyer, P. Flynn, "An evaluation of multimodal 2D + 3D face biometrics", IEEE Trans. Pattern Anal. Machine Intell., 2004.
[7] R. Gottumukkal, V. K. Asari, "An improved face recognition technique based on modular PCA approach", Pattern Recognition Letters 25 (2004) 429-436.
[8] D. Beymer, "Face recognition under varying pose", Proc. of 23rd Image Understanding Workshop, vol. 2, pp. 837-842, 1994.
[9] S. Aly, A. Sagheer, N. Tsuruta, R. Taniguchi, "Face recognition across illumination", Artif Life Robotics (2008), 12:33-37, DOI 10.1007/s10015-007-0437-9.
[10] http://www.frav.es/research/facerecognition/FRAV3D
