2010 International Conference on Educational and Information Technology (ICEIT 2010)
B. Face Detection of 2D Facial Image
The process of face detection effectively reflects low-level image features such as edges, peaks, valleys, and ridges, which is equivalent to enhancing the information of key facial elements such as the nose, eyes, and mouth, as well as local characteristics like dimples, melanotic nevi, and scars. These features not only preserve global facial information but also enhance local characteristics. When the pose, expression, and position of a
face change, the local changes are smaller than the global changes, resulting in very robust feature detection.
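As an illustration of the low-level edge features mentioned above, the sketch below computes an edge-magnitude map with Sobel gradients. This is a minimal NumPy-only example on a synthetic step image, not the paper's detector; a real system would run such filters on a face photograph.

```python
# Minimal sketch: low-level edge features via Sobel gradient magnitude.
import numpy as np

def sobel_edges(img):
    """Return the gradient-magnitude map of a 2D float image."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")   # replicate borders
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(win * kx)  # horizontal gradient
            gy[i, j] = np.sum(win * ky)  # vertical gradient
    return np.hypot(gx, gy)

# A vertical step edge produces strong responses along the boundary only.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = sobel_edges(img)
```

Flat regions give zero response while the step boundary gives a strong one, which is the sense in which such maps "enhance" facial element information.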
C. Feature Points Detection
Automatic facial feature detection is a difficult but key
task for many practical applications of face image analysis.
In this paper, invisible points under occlusions or undetected
points are estimated through the global shape and texture
constraints using Active Appearance Models [13]. We define
the facial feature points mostly located around the eyes, nose,
eyebrows, mouth, and the boundary of a face. These points provide general shape information about any face. First, we
use the General Whole Face Shape Template with open eyes (or the one with closed eyes) to initialize the whole face, and thus obtain the approximate locations of the two outer corners of the eyes. Then, we apply a local ASM to the mouth to estimate the mouth contour and obtain the true edge of the mouth with the Canny operator. If
the eyes are detected as open and the mouth is detected as an O-shaped mouth, the Whole Face Templates for open eyes and an O-shaped mouth are chosen to search the whole face contour, and so on. Taking advantage of multi-resolution searching [13], we obtain the whole face contour when the ASM converges or the maximum number of ASM iterations is reached. In total, 68 feature points are located automatically on the faces.
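The multi-resolution search of [13] proceeds coarse to fine over an image pyramid. A minimal sketch of such a pyramid, assuming simple 2x2 block averaging for downsampling (the smoothing filter actually used in [13] may differ); a real ASM would run its landmark search at the coarsest level first and propagate the estimate down:

```python
# Sketch: image pyramid for coarse-to-fine (multi-resolution) ASM search.
import numpy as np

def build_pyramid(img, levels=3):
    """Return a list of images, halved in each dimension per level."""
    pyramid = [img.astype(float)]
    for _ in range(levels - 1):
        prev = pyramid[-1]
        h, w = prev.shape[0] // 2 * 2, prev.shape[1] // 2 * 2
        prev = prev[:h, :w]              # crop to even size
        # Average each 2x2 block: simple anti-aliased downsampling.
        down = prev.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(down)
    return pyramid

img = np.arange(64.0).reshape(8, 8)
pyr = build_pyramid(img, levels=3)       # shapes (8, 8), (4, 4), (2, 2)
```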
Figure 4. Facial feature points of 2D facial image
The approach preserves the local neighborhood structure of a facial image and increases the global discriminant information. This has many benefits, such as compressing the data and thereby reducing storage requirements, removing unnecessary noise, and extracting effective features for visualizing higher-dimensional data.
III. 3D FACE SHAPE RECONSTRUCTION
First, for training, we use 200 laser-scanned 3D faces in
the USF Human-ID database [5]. Each face in the database
has 75972 vertices. The original images consist of
considerably dense point sets in 3D space. We exactly align the range images and approximate them with a simple and regular mesh by the multi-resolution fitting
scheme [14] for better performance. The geometry of a 3D face model is represented with a shape vector $S = (x_1, y_1, z_1, \ldots, x_n, y_n, z_n)^T \in \mathbb{R}^{3n}$ [15]. PCA
is conducted to get a more compact and regular shape representation of the face by the principal components. Here, $\bar{S}$ is the average shape and $P \in \mathbb{R}^{3n \times m}$ is the matrix of the first $m$ eigenvectors (in descending order according to their eigenvalues) [15]. A new face shape $S'$ can be expressed as

$$S' = \bar{S} + P a \qquad (4)$$
where $a = (a_1, a_2, \ldots, a_m)^T \in \mathbb{R}^m$ is the vector of coefficients of the shape eigenvectors.
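The shape model of Eq. (4) can be sketched as follows. The training shapes here are random stand-ins for the 200 scanned faces, and the toy sizes replace the real 75972 vertices; the principal components are obtained with a plain SVD of the centered data:

```python
# Sketch of the PCA shape model of Eq. (4): S' = S_bar + P a.
import numpy as np

rng = np.random.default_rng(0)
n_vertices, n_faces, m = 50, 20, 5           # toy sizes, not 75972 vertices
shapes = rng.normal(size=(n_faces, 3 * n_vertices))

S_bar = shapes.mean(axis=0)                  # average shape
centered = shapes - S_bar
# Right singular vectors = covariance eigenvectors, descending eigenvalue order.
_, sing, Vt = np.linalg.svd(centered, full_matrices=False)
P = Vt[:m].T                                 # first m shape eigenvectors (3n x m)
eigvals = sing[:m] ** 2 / (n_faces - 1)      # the corresponding eigenvalues

# A new shape is synthesized from a coefficient vector a:
a = rng.normal(size=m)
S_new = S_bar + P @ a                        # Eq. (4)
```

The columns of `P` are orthonormal, so projecting any centered shape onto them and reconstructing is a least-squares approximation in the span of the first $m$ eigenvectors.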
We selected $t$ 2D facial feature points for 3D reconstruction, as discussed in Section II.C. We denote by $S_f = (x_1, y_1, \ldots, x_t, y_t)^T \in \mathbb{R}^{2t}$ the set of $X, Y$ coordinates of the feature points on the surface, i.e., the sub-shape vector of $S$. A new face, based on the $X, Y$ coordinates of those feature points, can be expressed as
$$S_f' = \bar{S}_f + P_f a \qquad (5)$$

where $\bar{S}_f \in \mathbb{R}^{2t}$ and $P_f \in \mathbb{R}^{2t \times m}$ are the $X, Y$ coordinates of the feature points on $\bar{S}$ and $P$, respectively.
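Forming the sub-shape quantities $\bar{S}_f$ and $P_f$ of Eq. (5) amounts to selecting the $x, y$ rows of $\bar{S}$ and $P$ for the $t$ feature vertices. A minimal sketch, with random stand-ins for the model and hypothetical feature indices:

```python
# Sketch: extract the x,y rows of S_bar and P for t feature vertices (Eq. (5)).
import numpy as np

rng = np.random.default_rng(3)
n, m, t = 100, 5, 10
S_bar = rng.normal(size=3 * n)               # full mean shape: (x, y, z) per vertex
P = rng.normal(size=(3 * n, m))              # full eigenvector matrix
feat = rng.choice(n, size=t, replace=False)  # hypothetical feature vertex indices

# Vertex k occupies rows 3k, 3k+1, 3k+2; keep only the x and y rows.
rows = np.ravel([[3 * k, 3 * k + 1] for k in feat])
S_bar_f = S_bar[rows]                        # in R^{2t}
P_f = P[rows]                                # in R^{2t x m}
```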
In the reconstruction step, we transform the face coordinates to image coordinates to obtain the transformed shape $S_t$:

$$S_t = c S_f' + T \qquad (6)$$

where $T \in \mathbb{R}^{2t}$ is the translation vector and $c \in \mathbb{R}$ is the scale coefficient; a frontal view is assumed, so no rotation matrix is required.
We apply an iterative procedure to compute the face geometry coefficient $a$. Let $S_f$ be the initial value of $S_t$, and let $T_x$ and $T_y$ be the average offsets of all $t$ feature points of $S_t$ to the origin along the $X$ and $Y$ axes, respectively. Then

$$(T_x, T_y) = \frac{1}{t} \sum_{i=1}^{t} S_t^i \qquad (7)$$

$$c = \frac{\sum_{i=1}^{t} \left\langle S_t^i - (T_x, T_y)^T,\; S_f^i \right\rangle}{\sum_{i=1}^{t} \left\| S_f^i \right\|^2} \qquad (8)$$

where $S_t^i$ and $S_f^i$ denote the $(x, y)$ coordinates of the $i$-th feature point of $S_t$ and $S_f$, respectively.
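Eqs. (7) and (8) can be sketched as below. One assumption is made explicit here: Eq. (7) recovers the translation as the centroid of the image feature points only if the model sub-shape $S_f$ is centered at the origin, so the synthetic stand-in model points are centered first:

```python
# Sketch of Eqs. (7)-(8): estimate translation (T_x, T_y) and scale c
# from feature points. Points are synthetic stand-ins for detected landmarks.
import numpy as np

rng = np.random.default_rng(1)
t = 68
S_f = rng.normal(size=(t, 2))                # model feature points (x, y)
S_f -= S_f.mean(axis=0)                      # center the model at the origin
c_true, T_true = 2.5, np.array([30.0, 40.0])
S_t = c_true * S_f + T_true                  # image-plane points, as in Eq. (6)

T_est = S_t.mean(axis=0)                     # Eq. (7): centroid gives (T_x, T_y)
c_est = np.sum((S_t - T_est) * S_f) / np.sum(S_f ** 2)   # Eq. (8)
```

On this noise-free data the estimates recover the true translation and scale exactly.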
The face geometry coefficient $a$ can then be computed, following [15] and derived from (4) and (5), as the regularized least-squares solution

$$a = \left( P_f^T P_f + \lambda \Lambda^{-1} \right)^{-1} P_f^T \left( \frac{S_t - T}{c} - \bar{S}_f \right) \qquad (9)$$

where $\Lambda = \mathrm{diag}(\nu_1, \nu_2, \ldots, \nu_m)$ is applied to constrain $a$ and avoid outliers, $\lambda$ is the weighting factor, and $\nu_i$ is the $i$-th eigenvalue. Then a new shape can be obtained by applying $a$ to Eq. (5).
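A minimal sketch of the regularized solve for $a$, assuming the similarity transform of Eq. (6) has already been inverted so the observation is directly comparable with Eq. (5); sizes, eigenvalues, and data are synthetic:

```python
# Sketch of Eq. (9): eigenvalue-weighted regularized least squares for a,
# i.e. minimize ||S_obs - S_bar_f - P_f a||^2 + lam * a^T Lambda^{-1} a.
import numpy as np

rng = np.random.default_rng(2)
t, m = 68, 10
P_f = rng.normal(size=(2 * t, m))            # feature-point rows of P
S_bar_f = rng.normal(size=2 * t)             # feature-point rows of S_bar
eigvals = np.linspace(10.0, 1.0, m)          # descending eigenvalues nu_i
lam = 0.1                                    # weighting factor lambda

a_true = rng.normal(size=m)
S_obs = S_bar_f + P_f @ a_true               # noise-free observation, Eq. (5)

Lambda_inv = np.diag(1.0 / eigvals)
A = P_f.T @ P_f + lam * Lambda_inv           # normal equations + regularizer
a_hat = np.linalg.solve(A, P_f.T @ (S_obs - S_bar_f))
```

The regularizer shrinks coefficients in inverse proportion to their eigenvalues, so directions with small variance in the training set (the likely outlier directions) are penalized most; with noise-free data and small $\lambda$, `a_hat` stays close to `a_true`.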
IV. TEXTURE CORRESPONDENCE
In an analysis-by-synthesis loop, the morphable face model can be fitted to a novel face shown in an input image $I_{\mathrm{input}}(x, y)$, aiming at finding the model parameters $\rho$ for texture correspondence. For fitting the model to an image, we only consider the centers of the triangles, which are about $0.3\,\mathrm{mm}^2$ in size [16]. The Phong illumination model approximately describes the diffuse and specular reflection on a surface. At each vertex $k$, the red channel is
$$I_{r,\mathrm{model}}(x, y) = R_k \cdot L_{r,\mathrm{amb}} + R_k \cdot L_{r,\mathrm{dir}} \cdot \langle n_k, l \rangle + k_s \cdot L_{r,\mathrm{dir}} \cdot \langle r_k, v_k \rangle^{\nu} \qquad (10)$$

where $R_k$ is the red component of the diffuse reflection coefficient stored in the texture vector $T$, $L_{r,\mathrm{amb}}$ and $L_{r,\mathrm{dir}}$ are the red intensities of the ambient and direct light, $l$ is the direction of illumination, $k_s$ is the specular reflectance, $\nu$ defines the angular distribution of the specularities, $v_k$ is the viewing direction, and $r_k = 2 \langle n_k, l \rangle n_k - l$ is the direction of maximum specular reflection [16, 5]. The green and blue channels are
computed in the same way. The transformed $I_{r,\mathrm{model}}$, $I_{g,\mathrm{model}}$, and $I_{b,\mathrm{model}}$ are drawn at a position $(p_x, p_y)$ in the final image $I_{\mathrm{model}}$. The optimization
algorithm starts from the average face at a position and orientation roughly aligned with the face in the image. Gradient descent is applied to minimize the sum of squared differences over all color channels and all pixels between the input image and the synthetic reconstruction:
$$E = \sum_{k \in K} \left\| I_{\mathrm{input}}(x_k, y_k) - I_{\mathrm{model}}(x_k, y_k) \right\|^2 \qquad (11)$$

where $K$ is a stochastic point set and $(x_k, y_k)$ are the barycenters of the triangular faces projected onto the image plane. For each iteration of the optimization process,
the fitting algorithm analytically computes the gradient of the cost function and then updates the parameters:

$$\rho \leftarrow \rho - \lambda \frac{\partial E}{\partial \rho} \qquad (12)$$

If $|E - E_{\mathrm{last}}|$ is smaller than the given threshold $\varepsilon$, the iteration is complete and the parameters $\rho$ are taken as final.
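The loop of Eqs. (10)-(12) can be sketched on a toy problem: a single-pixel Phong renderer whose only unknown is a diffuse coefficient $\rho$, fitted by gradient descent until $|E - E_{\mathrm{last}}|$ falls below a threshold. All scene constants are illustrative assumptions, not values from the paper:

```python
# Sketch of Eqs. (10)-(12): fit an unknown diffuse coefficient rho of a
# one-pixel Phong renderer to a target intensity by gradient descent.
import numpy as np

L_AMB, L_DIR, K_S, NU = 0.2, 1.0, 0.3, 8.0   # illustrative scene constants
n = np.array([0.0, 0.0, 1.0])                # surface normal
l = np.array([0.6, 0.0, 0.8])                # light direction (unit length)
v = np.array([0.0, 0.0, 1.0])                # viewing direction
r = 2.0 * np.dot(n, l) * n - l               # mirror reflection direction r_k

def render(rho):
    """Eq. (10): ambient + diffuse + specular intensity for coefficient rho."""
    return (rho * L_AMB
            + rho * L_DIR * np.dot(n, l)
            + K_S * L_DIR * max(np.dot(r, v), 0.0) ** NU)

target = render(0.7)                         # synthetic target with rho = 0.7

rho, E_last = 0.0, np.inf
for _ in range(10000):
    resid = render(rho) - target
    E = resid ** 2                           # Eq. (11) on a single pixel
    if abs(E_last - E) < 1e-16:              # convergence test on |E - E_last|
        break
    grad = 2.0 * resid * (L_AMB + L_DIR * np.dot(n, l))  # analytic dE/drho
    rho -= 0.5 * grad                        # Eq. (12), step size lambda = 0.5
    E_last = E
```

Because the toy renderer is linear in $\rho$, the analytic gradient is exact and the loop converges quickly; the real system differentiates the full rendered image with respect to all model parameters.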
Figure 5. The flowchart of 3D face reconstruction
V. CONCLUSION
This paper presents a new attempt at a practical 3D face reconstruction system using a single 2D image. On the basis of thorough work on preprocessing, 3D face shape reconstruction, and texture correspondence, a lifelike 3D face model is obtained. Given the large variations in illumination and the changes in viewpoint from frontal to profile, the performance of our algorithm seems promising. The result clearly demonstrates the potential of creating a cost-effective, easy-to-use facial model acquisition system applicable to a wide range of 3D face reconstruction tasks. For further evaluation, the method needs to be applied to a larger database.
ACKNOWLEDGMENT
This work was supported partly by the National Natural Science Foundation of China (Grant No. 60973060), the Specialized Research Fund for the Doctoral Program of Higher Education (Grant No. 200800040008), the Doctoral Candidate Outstanding Innovation Foundation (Grant No. 141092522), and the Fundamental Research Funds for the Central Universities (Grant No. 2009YJS025).
REFERENCES
[1] Martin D. Levine, Yingfeng (Chris) Yu, "State-of-the-art of 3D facial reconstruction methods for face recognition based on a single 2D training image per person", Pattern Recognition Letters, vol. 30, no. 10, pp. 908-913, 15 July 2009.
[2] C. Bregler, A. Hertzmann, H. Biermann, "Recovering non-rigid 3D shape from image streams", in Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 690-696, 2000.
[3] G. Himaanshu, A. K. RoyChowdhury, R. Chellappa, "Contour-based 3D face modeling from a monocular video", in British Machine Vision Conference, BMVC 2004, Kingston University, London, September 7-9, 2004.
[4] B. Moghaddam, J. H. Lee, H. Pfister, R. Machiraju, "Model-based 3D face capture with shape-from-silhouettes", in IEEE Int. Workshop on Analysis and Modeling of Faces and Gestures (AMFG), Nice, France, pp. 20-27, 2003.
[5] V. Blanz, T. Vetter, "Face recognition based on fitting a 3D morphable model", IEEE Trans. Pattern Anal. Machine Intell., vol. 25, no. 9, pp. 1063-1074, 2003.
[6] S. Romdhani, V. Blanz, T. Vetter, "Face identification by fitting a 3D morphable model using linear shape and texture error functions", in Proc. ECCV, vol. 4, pp. 3-19, 2002.
[7] D. Jiang, Y. Hu, S. Yan, L. Zhang, H. Zhang, W. Gao, "Efficient 3D reconstruction for face recognition", Pattern Recognition, vol. 38, no. 6, pp. 787-798, 2005.
[8] Sung Won Park, Jingu Heo, Marios Savvides, "3D face reconstruction", in IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, pp. 1-8, 2008.
[9] S. Milborrow, F. Nicolls, "Locating facial features with an extended Active Shape Model", ECCV 2008.
[10] I. Sato, Y. Sato, K. Ikeuchi, "Acquiring a radiance distribution to superimpose virtual objects onto a real scene", IEEE Trans. Visualiza. Comput. Graph., vol. 5, no. 1, pp. 1-12, 1999.
[11] Sotiris Malassiotis, Michael G. Strintzis, "Robust face recognition using 2D and 3D data: Pose and illumination compensation", vol. 38, no. 12, pp. 2537-2548, 2005.
[12] Ying Li, J. H. Lai, P. C. Yuen, "Multi-template ASM method for feature points detection of facial image with diverse expressions", in 7th International Conference on Automatic Face and Gesture Recognition.
[13] N. Uchida, T. Shibahara, T. Aoki, H. Nakajima, K. Kobayashi, "3D face recognition using passive stereo vision", in IEEE International Conference on Image Processing, 2005.
[14] Gabriel Peyré, "Numerical Mesh Processing", Chapter 4.
[15] Dalong Jiang, Yuxiao Hu, Shuicheng Yan, Lei Zhang, Hongjiang Zhang, Wen Gao, "Efficient 3D reconstruction for face recognition", Pattern Recognition, vol. 38, no. 6, pp. 787-798, June 2005.
[16] V. Blanz, S. Romdhani, T. Vetter, "Face identification across different poses and illuminations with a 3D morphable model", in Fifth IEEE International Conference on Automatic Face and Gesture Recognition, pp. 192-197, 2002.