2. HUMAN FACE RECOGNITION

In order to identify the robot operator, we must locate the facial features. The recognition procedure can be divided into three steps, as shown in Fig. 2. The face region is located first; then the facial feature locations are estimated by projection analysis. Starting from the estimated locations, a genetic algorithm is applied to extract the accurate feature locations.

2.2 Estimating and refining the feature locations

The feature locations are estimated according to an anthropometric model. The eye and mouth regions are estimated based on the face model [1]. Then the nose position is estimated based on the face model and the refined positions of the eyes and mouth.
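To make the projection-analysis step concrete, the following is a minimal sketch in Python with NumPy. It is not the authors' code: face_region is an assumed grayscale array containing the located face, and the search ranges and band width are illustrative assumptions rather than values from the paper.

import numpy as np

def estimate_eye_positions(face_region):
    """Estimate eye positions by integral projection analysis.

    Dark feature bands (eyes, mouth) appear as valleys in the
    row/column intensity profiles. Sketch only: the paper further
    constrains the search with an anthropometric face model.
    """
    face = face_region.astype(float)

    # Y-projection: mean intensity of each row; the eye band is a
    # valley in the upper half of the face (assumed search range).
    y_proj = face.mean(axis=1)
    eye_row = int(np.argmin(y_proj[: face.shape[0] // 2]))

    # X-projection inside the eye band: one valley per eye.
    band = face[max(eye_row - 5, 0) : eye_row + 5, :]
    x_proj = band.mean(axis=0)
    half = face.shape[1] // 2
    left_eye_col = int(np.argmin(x_proj[:half]))
    right_eye_col = half + int(np.argmin(x_proj[half:]))
    return eye_row, left_eye_col, right_eye_col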
Then, the subregions are defined according to the refined positions of the features. However, in order to avoid mistaking the nose for the mouth, the mouth region should be defined only after both eye feature points have been found; this region is the shaded part in Fig. 8. (e_l is the central point of the left eye, e_r is the central point of the right eye, and e_m is the midpoint of e_l and e_r.)
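As an illustration of this ordering constraint, the sketch below derives the mouth search window only after both eye centers are known, so that the window starts below the nose region. The helper and all window proportions are hypothetical, not values from the paper.

def mouth_window(e_l, e_r):
    """Define the mouth search region from the two eye centers.

    e_l, e_r: (x, y) central points of the left and right eyes.
    Returns the window (x0, y0, x1, y1). The window hangs below
    the midpoint e_m of the two eyes; all proportions here are
    illustrative assumptions.
    """
    (xl, yl), (xr, yr) = e_l, e_r
    d = ((xr - xl) ** 2 + (yr - yl) ** 2) ** 0.5  # inter-eye distance
    xm, ym = (xl + xr) / 2.0, (yl + yr) / 2.0     # midpoint e_m

    x0, x1 = xm - 0.5 * d, xm + 0.5 * d
    y0, y1 = ym + 0.8 * d, ym + 1.4 * d           # start below the nose
    return x0, y0, x1, y1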
[Figure: (a) Y-projection profile over the Y-coordinate; (b) X-projection in the eye window.]
3. GESTURE RECOGNITION

From x, y, z, we can get
We simplify the problem to finding the gesture action by analyzing the index hand.
In addition, we use 13 test images T_j (j = 1, 2, ..., 13), also converted to the same square templates. We match each test image with the 18 training templates S_i (i = 1, 2, ..., 18). The matching error R_ij between each test image and training template is calculated by Eq. (4).
Figure 19. Square templates.

The action of the training template that provides the minimum matching error is used as the identified action. The results are perfect, with 100% correct decisions. This method provides an excellent recognition ratio if there are enough training templates. In addition, the computational complexity of this method is not high. Using a C program, gesture recognition is done in about 0.8 second for the whole recognition procedure.
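Equation (4) is not legible in this copy of the paper; a common choice of matching error for same-size square templates is the sum of squared differences, which the sketch below assumes in place of the paper's exact formula. The counts of 18 training templates and 13 test images follow the text; the function and label array are hypothetical.

import numpy as np

def identify_action(test_image, templates, actions):
    """Match one square test image against all training templates.

    templates: array of shape (18, N, N); test_image: (N, N);
    actions: the action label of each training template.
    R_ij is assumed here to be an SSD-style error, standing in
    for the illegible Eq. (4).
    """
    test = test_image.astype(float)
    errors = [np.sum((test - t) ** 2) for t in templates.astype(float)]
    best = int(np.argmin(errors))  # template with minimum matching error
    return actions[best], errors[best]

The identified action is simply that of the template with the minimum R_ij, matching the decision rule described above.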
4. CONCLUSIONS
In this paper, we have presented efficient techniques for face and gesture recognition for robot control. The face recognition algorithm is robust to variations in subject head shape, eye shape, age, and motion such as tilting and nodding of the head. Because we use projection analysis to estimate the feature locations before applying the genetic algorithm, the computational complexity is reduced significantly, while the accuracy is better than that of other algorithms. For gesture recognition, the optimal procedure is with HLS segmentation, morphological filtering, hand block labeling, geometrical transform and template matching. It provides a good correct recognition ratio, robustness and speed. The simulation results demonstrate that the proposed techniques provide a very good performance.

References:

[1] A. M. Alattar and S. A. Rajala, "Facial features localization in front view head and shoulders images," Proc. of ICASSP, Vol. 6, pp. 3557-3560, Phoenix, USA, 1999.
[2] C. H. Lin and J. L. Wu, "Automatic facial feature extraction by genetic algorithms," IEEE Trans. Image Processing, vol. 8, no. 6, pp. 834-845, June 1999.
[3] K. M. Lam and H. Yan, "Location and extraction of the eye in human face images," Pattern Recognition, vol. 29, no. 6, pp. 771-779, 1996.
[4] Olivetti Research Laboratory face database, http://www.uk.research.att.com/facedatabase.html
[5] K. S. Tang, K. F. Man, S. Kwong and Q. He, "Genetic algorithms and their applications," IEEE Signal Processing Magazine, vol. 13, pp. 22-37, Nov. 1996.
[6] M. Srinivas and L. M. Patnaik, "Genetic algorithms: A survey," IEEE Computer, Vol. 27, No. 6, pp. 17-26, June 1994.
[7] T. Fong, F. Conti, S. Grange and C. Baur, "Novel interfaces for remote driving: Gesture, haptic and PDA," Proc. of SPIE, Vol. 4195, pp. 300-311, Boston, 2001.
[8] M. C. Moy, "Gesture-based Interaction with a Pet Robot," Proc. of 16th National Conference on Artificial Intelligence, pp. 628-633, Orlando, USA, July 18-22, 1999.
[9] S. Iba, J. M. Vande Weghe, C. J. J. Paredis, and P. K. Khosla, "An Architecture for Gesture-based Control of Mobile Robots," Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vol. 2, pp. 851-857, Oct. 1999.