Ω̄_i = (1/M) · Σ_{x=1}^{M} Ω_x    (10)
This is calculated for each class, and the averages are stored in the face database.
The above-mentioned procedures are performed periodically whenever there is excess computational capacity. The stored data is then utilized in the identification mode, decreasing execution time and thereby increasing the performance of the entire system.
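As an illustration, the per-class averaging of stored feature vectors (Eq. 10) might be sketched as follows; this is a minimal Python sketch, and the class names and two-dimensional weight vectors are hypothetical placeholders, not data from the actual system:

```python
def class_average(weight_vectors):
    """Element-wise average of a list of weight (feature) vectors."""
    m = len(weight_vectors)
    dim = len(weight_vectors[0])
    return [sum(v[j] for v in weight_vectors) / m for j in range(dim)]

# hypothetical face DB: class id -> enrolled weight vectors for that class
face_db = {
    "class_1": [[1.0, 2.0], [3.0, 4.0]],
    "class_2": [[0.0, 0.0], [2.0, 2.0]],
}

# face keys (Eq. 10): one average vector per class, stored for later matching
face_keys = {cid: class_average(vs) for cid, vs in face_db.items()}
```

Precomputing these keys during idle periods is what allows the identification mode to run a single distance computation per class instead of touching every enrolled image.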
Iris enrolment: Daugman's algorithm for iris recognition is utilized (9). The iris enrolment mode was conducted via the Matlab program implemented by Libor Masek (10, 11). Part of the CASIA DB is utilized as a training set that is stored in the iris DB. The iris enrolment mode consists of four steps:
1. Iris image acquisition: In our proposed system, this phase is skipped and is not implemented. Instead, it is replaced by a large database of iris images (CASIA) (7), part of which is utilized as a training set.
2. Iris segmentation: The first step after acquisition is to extract the iris region from the input eye image. The iris area is considered a circular crown bounded by two circles: the inner (pupillary) and outer (scleral) circles of the iris are detected.
14 | www.ijar.lit.az
INTERNATIONAL JOURNAL Of ACADEMIC RESEARCH Vol. 3. No. 1. January, 2011, Part I
3. Iris normalization: Iris segmentation results may appear at different positions and scales and thus require normalization. The normalization process maps the circular iris image into a rectangular representation.
4. Feature extraction and encoding: Once the iris texture is available, features are extracted to
generate the biometric template.
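The normalization step (Daugman's rubber-sheet model) can be sketched roughly as below. This is a simplified nearest-neighbour sampler that assumes concentric pupil and iris circles; a full implementation such as Masek's handles non-concentric boundaries and interpolation:

```python
import math

def rubber_sheet(image, pupil, iris, radial_res=8, angular_res=16):
    """Map the annular iris region to a fixed-size rectangular grid
    (rubber-sheet model, nearest-neighbour sampling).
    `image` is a 2-D list of grey levels; `pupil` and `iris` are
    (cx, cy, radius) circles, assumed concentric for simplicity."""
    cx, cy, r_p = pupil
    _, _, r_i = iris
    out = []
    for i in range(radial_res):
        # sample radii between the pupillary and outer boundaries
        r = r_p + (r_i - r_p) * (i + 0.5) / radial_res
        row = []
        for j in range(angular_res):
            theta = 2.0 * math.pi * j / angular_res
            x = int(round(cx + r * math.cos(theta)))
            y = int(round(cy + r * math.sin(theta)))
            row.append(image[y][x])
        out.append(row)
    return out
```

The fixed radial-by-angular output size is what makes templates from eyes with different pupil dilations directly comparable in the matching stage.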
Identification Mode
Face Subsystem: This process can be detailed in three phases:
Phase I: Preprocessing: When a healthcare user face image (T) is given as input to check the face identity, a preprocessing phase is performed, as in the face enrolment mode, to produce the weight vector Ω.
Phase II: Similarity Matching: The image weight vector Ω is compared with each face class average Ω̄_i, i.e. the face keys previously obtained for each class in the face enrolment mode and stored in the face database. The Euclidean distance ε_i (8) between two face key vectors is given by:
ε_i = sqrt( Σ_{k=1}^{M_t} (Ω_k − Ω̄_{i,k})² )    (11)
where ε_i is the Euclidean distance between the image vector and the i-th face class, Ω is the image vector, Ω̄_i is the average vector of the i-th face class, and M_t is the number of face images. The smallest distance is considered to be the face match (FM) score result.
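A minimal sketch of this nearest-class search, assuming the face keys are stored as plain Python lists (the class labels in the usage line are hypothetical):

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length vectors (Eq. 11)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def face_match(omega, face_keys):
    """Return (class_id, distance) of the closest stored face key;
    the smallest distance is the face match (FM) score."""
    return min(((cid, euclidean(omega, key)) for cid, key in face_keys.items()),
               key=lambda t: t[1])

# e.g. face_match([3.0, 3.0], {"a": [0.0, 0.0], "b": [3.0, 4.0]})
```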
Phase III: Score Normalization: To ensure a meaningful combination of the face and iris scores, the scores must be transformed to a common domain. Various techniques have been proposed in the literature to normalize the matching scores of biometric systems, and it has been found that the Min-Max and z-score normalization techniques generally outperform the others (12). Accordingly, in this thesis we experimented with the Min-Max normalization formula, given by:
SFn = (FM − min(FMt)) / (max(FMt) − min(FMt))    (12)
where SFn is the normalized score, FMt is the set of all face matching scores of the training set, FM is the face match score prior to normalization, and max(FMt) and min(FMt) specify the end points of FMt.
This matching score SFn is used as input for the fusion module where the final matching score is generated.
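Eq. 12 reduces to a few lines of code; the training score set in the usage line is a hypothetical example, not data from the experiments:

```python
def min_max_normalize(fm, fm_train):
    """Min-Max normalization (Eq. 12): map a face matching score into
    [0, 1] using the range of scores observed on the training set."""
    lo, hi = min(fm_train), max(fm_train)
    return (fm - lo) / (hi - lo)

# e.g. min_max_normalize(5.0, [0.0, 10.0, 4.0]) maps 5.0 into [0, 1]
```

Note that a test score outside the training range would fall outside [0, 1]; Min-Max normalization is known to be sensitive to such outliers, which is one reason the training set must be representative.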
Iris subsystem: Matching: The identification procedure for the iris follows the same algorithm as the enrolment procedure. The requesting healthcare user's template is then compared with all templates in the iris database to obtain the least dissimilarity score, which is considered the iris matching (IM) score. This comparison is performed using the Hamming Distance (HD). The iris matching score is used as the second input for the fusion module, where the final matching score is generated.
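A sketch of the fractional Hamming distance, assuming bit lists with accompanying noise masks as in Masek's implementation (a production implementation would operate on packed bit codes and also test a range of rotational shifts):

```python
def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two binary iris codes,
    counting only bits that are valid (unmasked) in both templates."""
    disagree = valid = 0
    for a, b, ma, mb in zip(code_a, code_b, mask_a, mask_b):
        if ma and mb:
            valid += 1
            disagree += (a != b)
    # if no bits are comparable, report maximal dissimilarity
    return disagree / valid if valid else 1.0
```

The template in the iris DB with the smallest HD to the query template supplies the IM score.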
Multimodal Fusion System: Multimodal systems can perform fusion in one of three different modes: serial, parallel, or hierarchical (13). The presented multimodal system works in parallel mode, where all the traits are input at the same time and the multimodal system uses all the scores together to make a decision. When a healthcare user tries to access the highest level of the database, his or her face and iris images are acquired. The processing of these images, up until fusion, is carried out separately in the face recognition and iris recognition subsystems. After normalization, the face and iris matching scores are combined into a single matching score (scalar). This score is compared to a threshold to make the final decision about whether to accept or reject the healthcare user. The threshold can be adjusted so that the performance of the system meets the requirements of the domain. If the healthcare user is identified, the system allows him or her to access this highest level of the database. The steps involved in the proposed multimodal fusion system are shown in Figure 1.
Match Scores Fusion
Fusion at the matching-score level is implemented, as it offers a better trade-off between information content and ease of fusion than the other levels. The normalized face score SFn is combined with the iris match score (IM) using the matcher weighting fusion scheme (13, 14), given by:

F = (1 − W)·SFn + W·IM    (13)

where F is the total weighted fusion score; W is a weighting factor with 0 < W < 1, chosen here between 0.1 and 0.9; SFn is the normalized face matching score; and IM is the iris matching score.
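Eq. 13 can be expressed directly; the scores and weight in the usage line are arbitrary illustrative values:

```python
def fuse(sfn, im, w):
    """Matcher weighting fusion (Eq. 13): F = (1 - W)*SFn + W*IM."""
    if not 0.0 < w < 1.0:
        raise ValueError("W must lie strictly between 0 and 1")
    return (1.0 - w) * sfn + w * im

# e.g. fuse(0.4, 0.2, 0.8): with W = 0.8 the iris score is weighted
# four times as heavily as the face score
```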
The performance of the classifiers differs, so different weights must be applied when combining the individual classifiers, i.e. face and iris. To calculate the optimum fusion weighting factor W, we performed several trials in which W was varied from 0.1 to 0.9 with a step length of 0.1. When the face weight changes from 0.1 to 0.9,
the iris weight changes from 0.9 to 0.1, so that the weights always sum to 1.0. To calculate the optimum threshold for each weight, we performed several trials with different thresholds. The overall performance at each step is then evaluated using the Receiver Operating Characteristic (ROC) curve (15), and the optimum weighting factor is defined as the one that gives the highest performance.

Fig. 1.
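The weight sweep described above might be sketched as follows, where `evaluate(w)` is a stand-in for the ROC-based performance measure (e.g. GAR at 0% FAR for that weight); it is a hypothetical callback, not a function from the actual system:

```python
def best_weight(evaluate, weights=None):
    """Sweep the iris weight W from 0.1 to 0.9 in steps of 0.1 and
    return the weight whose evaluated performance is highest."""
    if weights is None:
        weights = [round(0.1 * k, 1) for k in range(1, 10)]
    return max(weights, key=evaluate)

# e.g. best_weight(lambda w: -(w - 0.8) ** 2) picks 0.8 when the
# (toy) performance measure peaks at an iris weight of 0.8
```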
Threshold comparison and decision
The acceptance or rejection of a healthcare user depends on whether the match score (weighted fusion score F) falls above or below the threshold. The threshold is adjustable so that the biometric system can be more or less strict, depending on the requirements of the biometric application. The optimum threshold for each weight, giving minimum FAR with an acceptable FRR, is calculated. Several trials with 0.02 increments from 0.15 to 0.50 were performed; this interval contains the most pronounced changes in the FAR and FRR rates. The overall performance at each step is then evaluated using the ROC curve. This process is repeated for each weight, and the overall performance of each weight is evaluated to choose the best one. When the weighted fusion score (F) is less than the preset threshold, the healthcare user is identified and thus authorized to use the system; otherwise, access is denied.
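A sketch of the FAR/FRR computation behind this sweep, assuming the fused score behaves like a distance (smaller = better match), as the decision rule above implies; the score lists in the usage line are illustrative:

```python
def far_frr(genuine, impostor, threshold):
    """For distance-like fused scores F:
    FRR = fraction of genuine scores at or above the threshold (rejected),
    FAR = fraction of impostor scores below the threshold (accepted)."""
    frr = sum(s >= threshold for s in genuine) / len(genuine)
    far = sum(s < threshold for s in impostor) / len(impostor)
    return far, frr

# thresholds in 0.02 increments starting at 0.15, as in the text
thresholds = [round(0.15 + 0.02 * k, 2) for k in range(18)]
```

Evaluating `far_frr` at every threshold for every weight yields the operating points from which the ROC curve, and hence the optimum threshold, is read off.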
3. EXPERIMENTS
Database Description
The ORL face set contains 10 different images of each of 40 distinct persons. For some persons, the images were taken at different times, with slight variations in lighting and facial expressions. Each image is 92 x 112 pixels (width x height) with 8-bit grey levels. The second DB is the CASIA iris database, which originates from the National Laboratory of Pattern Recognition (NLPR). This database (Version 1.0) includes 756 iris images from 108 eyes. The images are grayscale with a resolution of 320 x 280 pixels (a photograph of the eye taken from 4 centimeters away using a near-infrared camera).
Seven face images for each class in the ORL face database are chosen randomly out of the 10; hence 280 face images are used. From the CASIA iris database, 40 eye classes are selected, with 7 iris images each; hence 280 iris images are used. The databases are divided into two parts: clients and impostors. A total of 30 persons were selected to act as clients and the remaining ten persons as impostors. When a person acts as a client, several samples of that person are chosen as training samples and three samples as testing samples. When a person acts as an impostor, 14 samples (7 for face and 7 for iris) are used for testing.
Experiment Description
Experiments are conducted on three biometric systems: face, iris, and a multimodal system using face-iris fusion. Several experiments are conducted on each individual unimodal system (face and iris separately) to decide on the optimum number of training and test images and the best training sets for each unimodal system. Based on these results, the best settings from each unimodal subsystem are used for the multimodal fusion system applied at the highest security level of the EMR security system. System performance is tested for face and iris separately, and for the proposed multimodal fusion system. Results from the multimodal fusion system are compared with the corresponding results from the individual unimodal subsystems to validate its effect on the security performance.
To perform this study, two groups (with different images), with two experiments each (with different numbers of images), making a total of four combination sets, are conducted on the face and iris subsystems and on the fusion system separately. The following details these experiments.
Changing the Number of Images in Training Sets
To demonstrate the effect of changing the number of images in the training sets on the performance of the system, two combination sets (CS) in group 1 are formulated:
CS1 (1-2, 5-7): The first 2 samples are used in the enrollment phase; while the last 3 samples are used for
testing.
CS2 (1-4, 5-7): The first 4 samples are used in the enrollment phase; while the last 3 samples are used for
testing.
Changing the Quality of Images in Training and Testing Sets
This forms group 2. The same experiments as in group 1 are repeated, but with different images in the training and testing sets, to study the effect of image quality in relation to the number of training images on the overall performance of the system.
CS3 (6-7, 1-3): A different 2 samples (the last 2) are used in the enrolment phase, while a different 3 samples (the first 3) are used for testing.
CS4 (4-7, 1-3): A different 4 samples (the last 4) are used in the enrolment phase, while a different 3 samples (the first 3) are used for testing.
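The four combination sets amount to fixed index splits of the 7 samples per class; a sketch (the sample values in the usage line are placeholders):

```python
# combination sets as (training indices, testing indices), 1-based as in the text
combination_sets = {
    "CS1": ([1, 2],       [5, 6, 7]),
    "CS2": ([1, 2, 3, 4], [5, 6, 7]),
    "CS3": ([6, 7],       [1, 2, 3]),
    "CS4": ([4, 5, 6, 7], [1, 2, 3]),
}

def split(samples, cs):
    """Split the 7 samples of one class into (training, testing) lists."""
    train_idx, test_idx = combination_sets[cs]
    return ([samples[i - 1] for i in train_idx],
            [samples[i - 1] for i in test_idx])

# e.g. split(list("abcdefg"), "CS1") -> (["a", "b"], ["e", "f", "g"])
```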
4. RESULTS
Experiments were done at two levels: in level I, face and iris are tested individually; in level II, the proposed multimodal system is tested.
Performance Analysis of Unimodal Subsystems
Face unimodal: Figure 2 shows the ROC curves of CS1 and CS2 for the face when operated as a unimodal subsystem in group 1. On an ROC curve, the higher the line, the greater the GAR and therefore the more accurate the system. Each FAR interval has a corresponding GAR value. CS2 (1-4, 5-7) has the highest value, 44% GAR at 0% FAR.
Figure 3 shows the ROC curve of group 2. CS4 (4-7, 1-3) has the highest value, 65% GAR at 0% FAR.
Fig.2.
Fig.3.
Iris unimodal: Figure 4 shows the ROC curves of the iris for CS1 and CS2. On this graph, CS2 (1-4, 5-7) has the highest value, 50% GAR at 0% FAR. Figure 5 shows the ROC curve of group 2, on which CS4 (4-7, 1-3) is just 1% higher than CS3.
These two combination sets (CS2 and CS4), which have higher performance than the others, are used in the fusion multimodal biometric identification system. The higher performer of CS2 and CS4 will then be taken as our fusion multimodal biometric identification system.
Fig. 4.
Fig. 5.
In all experiments, an improvement in RR was noticed when increasing the number of training-set images, with varying percentages, as shown in Table 1 and Table 2: in group 1, iris improved by 3.75% (95% to 98.75%) and face by 16.25% (52.5% to 68.75%); in group 2, iris improved by 0.63% (95% to 95.63%) and face by 3.75% (71.25% to 75%). However, this improvement is not practically significant: for the face unimodal, the performance in group 2 with 2 images (CS3) is better than in group 1 with 4 images (CS2). Furthermore, for the iris unimodal, the performance in group 2 with 4 images (CS4) shows no significant improvement from the larger training set (only 0.63%) over CS3. This indicates that the number of images alone, without including various image qualities, will not ensure performance improvement; the overall group of images should span different image qualities (from poor to best) to match the actual circumstances of the real user. The results of this study showed that the iris performs better than the face.
Table 1. Performance Comparison between Face and Iris for Group 1

Combination Sets    d' (Face / Iris)   GAR at 0% FAR (Face / Iris)   RR (Face / Iris)
CS1 (1-2, 5-7)      0.67 / 3.13        16% / 91%                     52.50% / 95.0%
CS2 (1-4, 5-7)      0.92 / 4.22        44% / 98%                     68.75% / 98.75%
Table 2. Performance Comparison between Face and Iris for Group 2

Combination Sets    d' (Face / Iris)   GAR at 0% FAR (Face / Iris)   RR (Face / Iris)
CS3 (6-7, 1-3)      1.25 / 3.05        49% / 91%                     71.25% / 95%
CS4 (4-7, 1-3)      1.69 / 3.58        56% / 92%                     75% / 95.63%
Performance Analysis of Multimodal Fusion System
In CS2 the highest performance is achieved with an iris weight (IW) of 0.8 and a face weight (FW) of 0.2, reaching a 99.38% RR at 0% FAR, while in CS4 the highest performance is likewise achieved with IW 0.8 and FW 0.2, reaching a 98.75% RR at 0% FAR, as shown in Table 3. Figure 6 shows the ROC curves for CS2 and CS4. From the graph we notice that CS2 (1-4, 5-7) performs better than CS4.
Table 3. Performance Comparison of CS2 and CS4 in the Fusion System

Combination Set         d'     GAR at 0% FAR   RR
CS2 (IW 0.8, FW 0.2)    4.62   99%             99.38%
CS4 (IW 0.8, FW 0.2)    4.14   98%             98.75%
Fig. 6.
Comparison between the Three Security Systems
1. Fusion of multiple biometrics at the score level produces new scores whose distributions for genuine and impostor users exhibit a higher degree of separation than those produced by the individual matchers. The multimodal system has a higher d' value than the unimodal systems.
2. The best CS for fusion is found to be CS2 (1-4, 5-7), with a GAR of 99%, an RR of 99.38% at 0% FAR, and d' equal to 4.62.
3. Fusion of face and iris improved the system performance, as shown in Table 4. In CS2 the recognition rate (RR) increases from 68.75% (face biometric) and 98.75% (iris biometric) to 99.38% (fusion system with CS2). In CS4 the recognition rate increases from 75% (face biometric) and 95.63% (iris biometric) to 98.75% (fusion system with CS4). This is represented in Figure 7.
4. We observed that multimodal biometrics is a way to reduce the image quality requirements.
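The d' (decidability) values in the tables measure the separation between the genuine and impostor score distributions. One common definition, which we assume here (it is the one used, e.g., by Daugman), is d' = |μ_gen − μ_imp| / sqrt((σ²_gen + σ²_imp) / 2); the score lists in the usage line are illustrative:

```python
import math

def d_prime(genuine, impostor):
    """Decidability index d' between genuine and impostor score
    distributions: larger d' means better-separated distributions."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs, m):
        return sum((x - m) ** 2 for x in xs) / len(xs)
    mg, mi = mean(genuine), mean(impostor)
    return abs(mg - mi) / math.sqrt((var(genuine, mg) + var(impostor, mi)) / 2)

# e.g. d_prime([0.0, 2.0], [4.0, 6.0]) separates two toy distributions
```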
Table 4. Performance Comparison between Face, Iris and Fusion

                    d'                      GAR at 0% FAR            RR
Combination Sets    Face   Iris   Fusion    Face   Iris   Fusion     Face     Iris     Fusion
CS2 (1-4, 5-7)      0.92   4.22   4.62      44%    98%    99%        68.75%   98.75%   99.38%
CS4 (4-7, 1-3)      1.69   3.58   4.14      56%    92%    98%        75%      95.63%   98.75%
The best performance rates are highlighted
Fig. 7.
Critical Recognition Rate
Recognition rate is the function of balancing between false acceptance and rejection rates, and recognition
reaches the highest when FAR and FRR are almost equal. This is acceptable for civil applications, but when
dealing with high security systems FAR should be as low as possible with acceptable FRR.
5. CONCLUSION
The presented Master's thesis proposes the application of a multimodal biometric security system to the highest level of security in the hierarchical architecture of the electronic medical record. It is shown that combining the iris and face unimodal systems into our proposed multimodal fusion system yields higher performance than each unimodal system separately. We observed that multimodal biometrics is a way to reduce the image quality requirements.
REFERENCES
1. Barrows Randolph and Clayton Paul (1996) Privacy, Confidentiality, and Electronic Medical
Records, Journal of the American medical informatics association, vol. 3(2): 139-148.
2. Delac, K. and Grgic, M. (2004) A survey of biometric recognition methods, In Proceedings of the 46th International Symposium Electronics in Marine, ELMAR-2004, Zadar, Croatia: 184-193
3. Ross, Arun and Jain, Anil (2003) Information Fusion in Biometrics, Pattern Recognition Letters, vol. 24(13): 2115-2125
4. Nandakumar K. (2008) Multibiometric Systems: Fusion Strategies and Template Security, Ph.D.
Thesis, Michigan State University, USA
5. Matlab, Version 7.0.0.19920 (R14), the MathWorks. Inc, Access date, June 22, 2010, from:
http://www.mathworks.com/products/matlab/
6. Olivetti & Oracle Research Laboratory (2002) The Olivetti & Oracle Research Laboratory Face
Database of Faces, Access date, June 21, 2010, from:
http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html.
7. Chinese Academy of Sciences Institute of Automation (2003) Database of 756 Greyscale Eye
Images, Version 1.0, Access date, June 21, 2010, from:
http://www.cbsr.ia.ac.cn/english/Databases.asp
8. Turk, M. and Pentland, A. (1991) Eigenfaces for recognition, Journal of Cognitive Neuroscience, vol. 3(1): 71-86
9. Daugman, J. (2009) How iris recognition works, Chapter 25 in: The essential guide to image
processing, by Bovik, A., Edited by: Elsevier BV
10. Masek, L. (2003) Recognition of Human Iris Patterns for Biometric Identification, Bachelor Thesis,
University of Western Australia, Australia
11. Masek, L. and Kovesi, P. (2003) MATLAB Source Code for a Biometric Identification System
Based on Iris Patterns, the School of Computer Science and Software Engineering, University of
Western Australia
12. Jain, A.; Nandakumar, K. and Ross, A. (2005) Score Normalization in Multimodal Biometric Systems, Pattern Recognition, vol. 38(12): 2270-2285
13. Jain, A., Ross, A. and Prabhakar, S. (2004) An Introduction to Biometric Recognition, IEEE,
Transactions on Circuits and Systems for Video Technology, Special Issue on Image- and Video-
Based Biometrics, Section 8. Multimodal Biometric Systems, vol. 14(1): 4-20
14. Ramli, D.; Samad, S. and Hussain, A. (2009) An Adaptive Fusion using SVM based Audio
Reliability Estimation for Biometric Systems, in Proceedings of the World Congress on Engineering,
London, U.K, Vol. I: 99-104
15. Fawcett, T. (2006) An introduction to ROC analysis, Pattern Recognition Letters, vol. 27(8): 861-874