
A Hierarchical Palmprint Identification Method Using Hand Geometry and

Grayscale Distribution Features

Jie Wu, Zhengding Qiu
Institute of Information Science
Beijing Jiaotong University, Beijing, 100044, P.R. China

The 18th International Conference on Pattern Recognition (ICPR'06)
0-7695-2521-0/06 $20.00 © 2006

Abstract

Palmprint identification, as an emerging biometric technique, has been actively researched in recent years. In existing palmprint identification algorithms, ROI segmentation is always a necessary step. This paper presents a novel hierarchical palmprint identification method without ROI extraction, which measures hand geometry and angle values in coarse-level feature extraction, and calculates the unit information entropy of each subimage to describe the grayscale distribution as the fine-level feature. We utilize the grayscale distribution variance caused by the particular positions of principal lines, wrinkles and minutiae in primitive hand images as the palm descriptor instead of ROI-based features. Experiments were conducted on a database of 990 images from 99 individuals. Accuracy of up to 99.24% has been obtained when using 6 samples per class for training. A performance comparison between the proposed method and a ROI-based PCA method was also made.

1. Introduction

The last decade has witnessed great development in palmprint-based personal identification. As one of the developing biometric techniques, palmprint identification is becoming a popular and convincing solution for verifying a person's identity, since the adult palmprint has been proved to be a unique and stable personal physiological characteristic.

So far, studies on palmprint identification have mainly been based on an ROI district regularly cropped in the preprocessing stage. The commonly used ROI-based features can be generally grouped as follows: texture features [1,2], transformed space features [3,4], statistical features [5,6] and subspace features [7].

However, ROI-based feature extraction has some disadvantages. First, image quality is the key ingredient in ROI-based methods: blurriness caused by an unclean capture device or by hand shake during capture makes some texture information unavailable. Second, an ROI image takes up only a small proportion of the primitive hand image, so some useful and essential identification information is lost. Third, the time taken by ROI extraction greatly increases the system processing time (see Table 1).

To avoid these drawbacks, we present a new hierarchical palmprint identification method that uses the whole hand image instead of an ROI district. Under normal illumination, hand geometry and some angle values are extracted in the coarse-level stage; angle values are adopted as a complement to the geometrical information provided by line segments. During fine-level identification, we exploit the grayscale variance caused by the different positions of principal lines, wrinkles and minutiae, to which skin color also contributes substantially. The concept of unit information entropy is introduced as the description of the grayscale distribution at this level.

The paper is organized as follows: Section 2 presents the preprocessing algorithm and the steps of coarse-level feature extraction. Section 3 focuses on fine-level feature extraction. Experimental results are reported in Section 4. Section 5 summarizes the paper.

2. Preprocessing and Coarse-level Feature Extraction

2.1. Preprocessing

Hand images are captured at an original size of 1792 × 1200 pixels and stored in JPEG format. Previous studies always apply rotation and translation to
primitive images before ROI cropping at this stage, because the position and direction of a hand may vary from capture to capture. We do not correct all the samples to one fixed angle and spatial position; instead we manage this adjustment in the fine-level stage, as detailed in Section 3. The following steps are executed in our preprocessing:

Transform the JPEG images to BMP format, then crop the primitive images to derive the largest circumscribed rectangle of the hand. Resize these circumscribed images to 1/16 of their original size; the resulting images differ in size between individuals, owing to palm size differences caused by skeleton size differences. Finally, denoise the resized images with a median filter and apply histogram equalization for image enhancement. The preprocessed images are grayscale, with both length and width between 200 and 300 pixels; the palm size diversity, which reaches a maximum ratio of 1.32 in our database, should not be ignored. Figure 1 shows the preprocessing steps and results of the proposed method and the traditional method. For simplicity, Figure 1(a) is not strictly to scale.

Figure 1. (a) Our preprocessing result. (b) Traditional preprocessing steps and result.

2.2. Coarse-level feature extraction

Hand geometry [8,9], including hand width, hand length, finger width and finger length, has been utilized to discriminate between individuals. Rigorously, due to its vulnerability and inconsistency, hand geometry should not be relied on as a distinguishing feature alone, especially in identification, which requires much higher recognition ability than verification. Since hand geometry does have some recognition ability, just not enough to identify on its own, we use these geometrical features only for coarse-level classification, where the system does not require such precision.

The coarse-level stage can greatly reduce the processing time and is widely adopted. Dai [10] computed the number of principal lines and cross points on a palm as coarse-level features, and categorized all samples into 7 groups. This method only works well on the precondition that most samples have high quality, which is hard to achieve because of commonly encountered influences such as illumination, hand peculiarities or unexpected movement during capture.

In this paper, we measure some line segments, and the angle values constructed by these line segments, as coarse-level features. This method remains unaffected even when the primitive image happens to be blurred. The line segments used here involve the lengths of the middle finger and ring finger; the edges of the triangles formed by lines connecting the finger valleys are also included. For convenience, the four finger valleys from thumb to little finger are named A, B, C, D, respectively. E is defined as the peak of the middle finger and F as the peak of the ring finger. The palm-connected ends of the middle finger (G) and ring finger (H) are defined as the mid-points of BC and CD, as shown in Figure 2. We pick the line segments AB, AC, AD, BC, BD, CD and the finger lengths FH, EG as the description of hand geometry. The angles ∠CBD, ∠DBA, ∠BAC, ∠CAD, ∠BDA, ∠CDB, ∠DCA, ∠BCA contained in the triangles made by the line segments above serve as a complement to the traditional geometrical information. Angle values are calculated through the law of cosines, for example:

∠CBD = arccos((BC² + BD² − CD²) / (2·BC·BD))    (1)

The remaining angle values are calculated in the same way.

Figure 2. Coarse-level feature extraction.

The result of coarse-level feature extraction is a 16-dimensional vector V = [AB, AC, AD, BC, BD, CD, FH, EG, ∠CBD, ∠DBA, ∠BAC, ∠CAD, ∠BDA, ∠CDB, ∠DCA, ∠BCA]. Each test sample generates such a feature vector and is matched against the sample templates of all W training classes by Euclidean distance. The w (1 < w ≤ W/10) nearest neighbours are taken as the coarse-level identification result. The number of classes taking part in fine-level processing is thus significantly reduced at the coarse level. The elimination of non-qualified classes not only limits the matching scope but also lowers the probability of misclassification at the fine level.
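As a concrete illustration of this coarse-level stage, the sketch below (hypothetical code, not the authors' implementation) builds the 16-dimensional vector and performs the w-nearest-neighbour match. The landmark coordinates for the valleys A-D and the fingertips E, F are assumed to come from an upstream hand-contour detector, which the paper does not specify; G and H are derived as the mid-points of BC and CD.

```python
import math

def dist(p, q):
    """Euclidean distance between two landmark points (x, y)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def angle(vertex, p, q):
    """Angle at `vertex` in triangle (vertex, p, q), law of cosines (Eq. 1)."""
    a, b, c = dist(vertex, p), dist(vertex, q), dist(p, q)
    cos_v = (a * a + b * b - c * c) / (2 * a * b)
    cos_v = max(-1.0, min(1.0, cos_v))  # guard against rounding drift
    return math.degrees(math.acos(cos_v))

def coarse_feature(A, B, C, D, E, F):
    """16-dimensional coarse-level vector V: 8 lengths, then 8 angles."""
    G = ((B[0] + C[0]) / 2, (B[1] + C[1]) / 2)  # mid-point of BC
    H = ((C[0] + D[0]) / 2, (C[1] + D[1]) / 2)  # mid-point of CD
    lengths = [dist(A, B), dist(A, C), dist(A, D),
               dist(B, C), dist(B, D), dist(C, D),
               dist(F, H), dist(E, G)]
    angles = [angle(B, C, D), angle(B, D, A),   # ∠CBD, ∠DBA
              angle(A, B, C), angle(A, C, D),   # ∠BAC, ∠CAD
              angle(D, B, A), angle(D, C, B),   # ∠BDA, ∠CDB
              angle(C, D, A), angle(C, B, A)]   # ∠DCA, ∠BCA
    return lengths + angles

def coarse_match(v, templates, w):
    """Return the w classes nearest to v by Euclidean distance.

    `templates` is a list of (class_id, feature_vector) pairs.
    """
    def d(item):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(v, item[1])))
    return [cls for cls, _ in sorted(templates, key=d)[:w]]
```

In the paper's setting, w is at most W/10 of the W enrolled classes; only the w surviving classes are passed on to the fine-level stage.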

3. Fine-level Feature Extraction

We obtain w classes after coarse identification as the input of the fine-level process. The similarities between a test sample and the w corresponding classes are computed at this stage. First, according to the stored geometrical information, we adjust the test sample to the same position and direction as each of the w classes in turn. Relying on the fact that the grayscale distribution is consistent within a class and clearly distinct between classes, we present an entropy-based method to measure the grayscale distribution as the fine-level feature.

For an M × N sized, 256-grayscale digital image A, with f(x,y) (0 ≤ f(x,y) ≤ 255) the grayscale value at (x,y), the global information entropy of A is defined as:

H = −Σ_{i=1..M} Σ_{j=1..N} p_ij log p_ij,   p_ij = f(i,j) / Σ_{m=1..M} Σ_{n=1..N} f(m,n)    (2)

Global entropy describes the global grayscale statistical characteristic of an image. It relates only to the occurrence probability of pixels at each gray level and does not concern the positions of those pixels. Different images will share the same global entropy when they coincidentally have the same probability for each gray level, so global entropy cannot be adopted as a distinguishing feature of images.

In contrast to global entropy, local entropy can describe the grayscale distribution of any given area in an image. Sun [11] combined a grid descriptor (GD) with image information entropy to describe each GD area's grayscale distribution. We follow this idea of local entropy, calculating each subimage's grayscale distribution as a feature that incorporates spatial information.

The M × N sized primitive image is divided into K subimages in total, each called a unit, with size M′ × N′. K is defined as follows:

K = P × Q,   P = ⌊M/M′⌋,   Q = ⌊N/N′⌋    (3)

The entropy of each unit U_st (s = 1,2,…,P; t = 1,2,…,Q) is defined as:

H_st = −Σ_{i=1..M′} Σ_{j=1..N′} p′_ij log p′_ij,   p′_ij = f′(i,j) / Σ_{m=1..M′} Σ_{n=1..N′} f′(m,n)

By calculating the unit information entropy values of the K subimages, we obtain the unit entropy matrix of image A shown in Figure 3. By rearranging the unit entropy matrix into a vector, we derive the fine-level feature H = (H_11, H_12, …, H_PQ). We then measure the similarity between the fine-level feature vector H of the test sample and H′_i (i = 1,2,…,w) of the corresponding w classes by Euclidean distance:

d_i = ( Σ_{m=1..P, n=1..Q} (H_mn − H′_i,mn)² )^(1/2)

The nearest neighbour is taken as the system's final identification result.

    H_11  H_12  …  H_1Q
    H_21  H_22  …  H_2Q
     …     …        …
    H_P1  H_P2  …  H_PQ

Figure 3. Matrix of unit information entropy.

4. Experimental Setup and Results

An experimental database was established with hand images captured by our own capture device: a camera kept at a fixed distance from a table, without any peg or other constraint, and connected to a computer. Under normal daylight illumination without fierce changes, users were asked to place their right hands on a flat soft surface with the palm side facing upward, fingers stretched to the best extent, and no obvious dirt on the hands. Users were also asked to replace their hands after each capture, and none of these users had received any training beforehand. The database includes 99 individuals of both genders, aged from 20 to 36, with 10 images from each person at a resolution of 1792 × 1200 pixels and 8-bit color. Five of the 10 images of each class were captured in one session; the rest were captured 2 months later.

4.1. Coarse-level experimental result

The coarse-level identification result is clearly shown in Figure 4, for the example where the training set includes only 1 sample from each class. Figures 4(a) and (b) show the results of the method using only line segments as coarse-level features and of the method with angle values added as a complement, respectively. The x-axis values in the figure are the test sample indices from 1 to 891. For each test sample, we find the w nearest neighbours in the similarity measurement and obtain a w-sized array, with an initial w value of 50; the ith element is the class index of the ith nearest neighbour. Each y-axis value indicates the position of that test sample's class index in the array. The proposed method's result, displayed in Figure 4(b), performs significantly better than the previously used algorithm shown in Figure 4(a), which implies that the addition of angle information increases the dispersion between classes and the convergence within a class. In Figure 4(b), only 10 of the 891 samples have a y-axis value bigger than 8, and a total of 708 of the 891 images have a y-axis value of 1, which means the coarse-level method alone achieves an accuracy of up to 79.46%.
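The selection of w from these rank statistics can be sketched as follows (hypothetical helper code; each rank is the position of a test sample's true class in the w-sized candidate array, as plotted in Figure 4):

```python
def coverage(ranks, w):
    """Fraction of test samples whose true class sits within the top w
    coarse-level candidates; None means the class missed the array."""
    hits = sum(1 for r in ranks if r is not None and r <= w)
    return hits / len(ranks)

def smallest_w(ranks, target, w_max=50):
    """Smallest w whose coverage reaches `target`, or None if unreachable."""
    for w in range(1, w_max + 1):
        if coverage(ranks, w) >= target:
            return w
    return None
```

With synthetic ranks [1, 1, 1, 2, 8, 40], for example, coverage(ranks, 8) is 5/6 and smallest_w(ranks, 0.8) returns 8.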
According to the data analysis, we choose 8 as the

final w value, which is much smaller than the total class number of 99. Making the w value ultimately 8 ensures that 98.9% of the test samples are correctly processed at the coarse level, and this rises to a maximum of 99.5% when 6 samples per class are used for training.

[Figure 4: two scatter plots of "Coarse-level identification result" (y-axis, 1-50) against "Test sample No." (x-axis, 0-900)]

Figure 4. Results of coarse-level classification. (a) Line segments only. (b) Angle information added.

4.2. Fine-level experimental result

A performance comparison of our method and the traditional PCA algorithm is made in Table 1; the latter needs to extract the ROI district during preprocessing.

Table 1. Performance comparison between methods

  Train/Test samples per class (total)   Our method      PCA method
  1/9 (99/891)                           93.15%          67.34%
  2/8 (198/792)                          97.47%          80.18%
  3/7 (297/693)                          98.85%          90.04%
  4/6 (396/594)                          98.99%          92.25%
  5/5 (495/495)                          99.19%          93.74%
  6/4 (594/396)                          99.24%          95.96%
  Average time per sample:
  Preprocessing                          1.654 s         11.781 s
  Training/Matching                      0.337/2.833 s   0.860/1.294 s

Through the analysis of our fine-level experimental results, the following observations can be made:

(1) The traditional ROI extraction employed in preprocessing takes much longer than any other stage in the system; it is avoided entirely by our preprocessing method.

(2) The accuracy of the PCA method varies dramatically when the number of training samples changes.

(3) Setting M′ = 28, N′ = 25 in the proposed method, we obtained an accuracy of 93.15% when only 1 sample per class was used for training, and a highest accuracy of 99.24% when 6 training samples per class were used.

(4) The deficiency of our method is that the time taken by matching is longer than for PCA, but the obvious performance enhancement may compensate for the extra time. This disadvantage will be addressed in our future research.

(5) Our method copes well with the deficient texture legibility of images such as those in our database, since color features can make up for the lack of texture.

5. Conclusion

This paper presents a novel hierarchical method for palmprint analysis without ROI extraction. At the coarse level, the proposed method adds angle value information as a complement to the geometrical line segments traditionally used in recognition. Unit information entropy is then introduced to evaluate the local grayscale distribution as the fine-level feature. Results are promising, with accuracy of up to 99.24% when 6 of the 10 samples of each class were used for training, and 93.15% when only 1 sample per class was used.

6. References

[1] D. Zhang, W. Shu. Two novel characteristics in palmprint verification: datum point invariance and line feature matching. Pattern Recognition, 1999, vol. 33, no. 4, pp. 691-702.
[2] N. Duta, A.K. Jain, K.V. Mardia. Matching of palmprints. Pattern Recognition Letters, 2001, vol. 23, no. 4, pp. 477-485.
[3] W.X. Li, D. Zhang, Z.Q. Xu. Palmprint identification by Fourier transform. Int'l Journal of Pattern Recognition and Artificial Intelligence, 2002, vol. 16, no. 4, pp. 417-432.
[4] W.K. Kong, D. Zhang, W.X. Li. Palmprint feature extraction using 2-D Gabor filters. Pattern Recognition, 2003, vol. 36, no. 10, pp. 2339-2347.
[5] Y.H. Pang, C. Tee, A.T.B. Jin, et al. Palmprint authentication with Zernike moment invariants. Proc. of the 3rd IEEE Int'l Symposium on Signal Processing and Information Technology, Germany, 2003, pp. 199-202.
[6] X.Q. Wu, K.Q. Wang, D. Zhang. Palmprint recognition using directional line energy feature. Proc. of the 17th Int'l Conference on Pattern Recognition, England, 2004, pp. 475-478.
[7] G.M. Lu, D. Zhang, K.Q. Wang. Palmprint recognition using eigenpalm features. Pattern Recognition Letters, 2003, vol. 24, no. 9-10, pp. 1463-1467.
[8] R. Sanchez-Reillo, C. Sanchez-Avila, A. Gonzalez-Marcos. Biometric identification through hand geometry measurements. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, vol. 22, no. 10, pp. 1168-1171.
[9] A. Kumar, C.M. Wong, C. Shen, A.K. Jain. Personal verification using palmprint and hand geometry biometric. Proc. of the 4th Int'l Conf. on Audio- and Video-Based Biometric Person Authentication, Guildford, UK, 2003, pp. 668-678.
[10] Q.Y. Dai, Y.L. Yu, D. Zhang. A palmprint classification method based on structure features. Pattern Recognition and Artificial Intelligence, 2002, vol. 15, no. 1, pp. 112-116.
[11] J.D. Sun, X.S. Wu, L.H. Zhou. Entropy-based image retrieval. Journal of Xidian University, 2004, vol. 31, no. 2, pp. 223-228.
