
B a k u , A z e r b a i j a n | 11

INTERNATIONAL JOURNAL Of ACADEMIC RESEARCH Vol. 3. No. 1. January, 2011, Part I


BIOMETRICS IN HEALTH CARE SECURITY SYSTEM,
IRIS-FACE FUSION SYSTEM

Shoa'a JadAllah Al-Hijaili 1,*, Manal AbdulAziz 2

1 Department of Computer, College of Sciences, Dammam University
2 Faculty of Computing and Information Technology, King Abdulaziz University

(KINGDOM OF SAUDI ARABIA)
*Corresponding author: Shoaa.jad@gmail.com

ABSTRACT

Background objectives: Security is considered a cornerstone for health-care information systems as they
contain extremely sensitive information. The aim is to provide healthcare personnel access to the right information
at the right time while ensuring high patient privacy. Biometrics play an important role in healthcare applications,
especially when there is a need to control access through identification of authorized users. Face and iris
biometrics have been used separately for access security. Although face recognition is user friendly and non-
invasive, it has low distinctiveness. On the other hand, iris recognition is one of the most accurate biometrics, but
must meet stringent quality criteria. The significance of fusing these two biometrics is more than the improvement
in verification accuracy. Enlarging user population coverage and reducing enrollment failure are additional reasons
for combining face and iris for identification.
Methods: We propose to apply a multimodal biometric fusion system to the highest level of security in the
hierarchical architecture of the electronic medical record (EMR). A multimodal biometric identification system is
built by combining information from the face and iris unimodal systems. After suitable normalization of scores,
fusion is performed at the matching score level using weighted scores. The effect of different numbers and qualities
of training and testing image combinations is tested on four combination sets (CS1-CS4). The system performance is evaluated on the
Olivetti Research Laboratory (ORL) face database and Chinese Academy of Sciences: Institute of Automation
(CASIA) version-1 iris database.
Results: 1) Increasing the number of images alone, without including various image qualities, will not ensure
performance improvement; that is, the overall set of images used should span different image qualities (from poor
to high) to match the actual circumstances of the real user. 2) As a unimodal biometric, iris performs better than
face, and the best performance was observed with CS2 and CS4. The overall system
performance improved with the proposed multimodal biometric system. 3) In CS2 the Recognition rate (RR) of the
proposed multimodal biometric system improved by 31% and 1% over face and iris unimodal, respectively. In CS4
the RR of the proposed multimodal biometric system improved by 22% and 3% over face and iris unimodal,
respectively.
Conclusion: The presented Master's thesis proposes the application of a multimodal biometric security system
to the highest level of security in the hierarchical architecture of the electronic medical record. It is shown that the
combination of the iris and face unimodal systems into our proposed multimodal fusion system yields higher
performance than each unimodal system separately. We observed that multimodal biometrics are a way to reduce
the quality requirement of images.

Key words: hierarchical architecture, electronic medical record, EMR, multimodal biometric, unimodal
biometric, authentication security system

1. INTRODUCTION

Medical information is one of the most sensitive types of information. Its misuse could have a very serious
effect on an individual's life. Increasing access to data through the Electronic Medical Record (EMR) opens a
window of opportunity for misuse of this information, and such systems also bring new risks to the privacy and
security of health records. The goal of data availability raises issues of access control, system reliability, and
backup mechanisms (system and data redundancy) (1). Security is a key concern for healthcare systems that contain sensitive data, like
the EMR. Ensuring the security of medical records is becoming an increasingly important problem as modern
technology is integrated into existing medical services. In order to protect the patient's privacy, a secure
authentication system to access patient records must be used. Biometric-based access is capable of providing the
necessary security. Security is compromised when the traditional authentication methods such as passwords and
access cards are lost, stolen or shared. Further, misplaced access cards or forgotten passwords require costly
interruptions in already busy days, adding additional costs to an overburdened system. Biometrics play an
important role in healthcare applications, especially when there is a need to control access through identification of
authorized users. Biometrics can help ensure that only authorized personnel gain access to those records. The
term biometric comes from the Greek words bios (life) and metrikos (measure). It is the use of physiological or
behavioral characteristics to determine or verify an individual's identity. Physiological biometrics are based on data
derived from direct measurements of a part of the human body. Fingerprints, iris-scans, retina-scans, hand
geometry, and facial recognition are all leading physiological biometrics. Behavioral characteristics are based on an
action taken by a person. Behavioral biometrics, in turn, are based on measurements and data derived from an
action, and indirectly measure characteristics of the human body. Voice recognition, keystroke-scans, and
signature-scans are leading behavioral biometric technologies. All biometrics follow the same general process of
enrollment, comparison and identification. The particular biometric component must first be enrolled in order to
extract the unique identifying features which in turn are used in the creation of a biometric template. The biometric
template will then be used during a real-time comparison of a presented biometric for determining the degree of
similarity or dissimilarity between the two biometrics (2). There are various biometric traits available, like face,
fingerprint, iris, palmprint, hand geometry, and ear, etc. Among the available biometric traits some of the traits
outperform others. Although biometric systems provide more security over traditional methods, it should be
mentioned that these systems also have their own limitations. Some of the challenges commonly encountered by
biometric systems are: noisy sensor data, intra-class variations, lack of distinctiveness, non-universality, and spoof
attacks. These limitations can be either overcome or reduced by using multiple biometric traits. A system that
consolidates multiple sources of biometric information is known as a multimodal biometric system. This can be
accomplished by fusing multiple traits of an individual, or multiple feature extraction and matching algorithms
operating on the same biometric. Biometric fusion can be performed at four possible levels: fusion at the sensor
level, fusion at the feature level, fusion at the matching score level, and fusion at the decision level. Sensor level
fusion is quite rare because fusion at this level requires that the data obtained from the different biometric sensors
must be compatible. Fusion at the feature level is also not always possible because the feature sets used by
different biometric modalities may either be inaccessible or incompatible. Fusion at the decision level is too rigid
since only a limited amount of information is available. Therefore, integration at the matching score level is
generally preferred due to the presence of sufficient information content and the ease in accessing and combining
matching scores (3, 4). Fusion at the matching score level is utilized using combination-based schemes. Face
and iris biometrics were used to generate multimodal fusion system. Scores generated from individual traits are
combined at matching score level using matcher weighting fusion scheme. False Accept Rate (FAR) and False
Reject Rate (FRR) are standard metrics used to determine the performance of a biometric system. These two
criteria are used together and can be set with different tolerances. For example, in low-security environments one
would tolerate a higher false acceptance rate while minimizing the false rejection rate; however, such a relaxed
FAR brings a higher risk of false acceptances. For critical security applications, on the other hand, the FAR must
be minimized. As we
are dealing with the highest level of security in EMR system our purpose is to get the best performance that gives
the lowest FAR with acceptable FRR. The performance of face recognition is affected by illumination, pose, or
facial expression; while the performance of iris recognition is affected by occlusion, motion, or poor focus; all of
which have a negative effect on image quality. To reach our goal, we have to reduce the effect of the above-
mentioned factors by choosing the optimum number of images in the training set, and by choosing the proper
training images that give the best performance in each unimodal subsystem. MATLAB is used as the
development tool (5), and emphasis will only be on the software for performing recognition, and not hardware for
capturing face and iris images.
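For illustration, the FAR/FRR trade-off against a decision threshold can be sketched in a few lines. The paper's implementation is in MATLAB; the following NumPy fragment, with hypothetical score values, is only an illustrative sketch in which scores are treated as distances (a claim is accepted when its score falls below the threshold):

```python
import numpy as np

def far_frr(genuine_scores, impostor_scores, threshold):
    """Compute FAR and FRR at one threshold.

    Scores are distances, so a claim is accepted when score < threshold:
    FAR = fraction of impostor attempts wrongly accepted,
    FRR = fraction of genuine attempts wrongly rejected.
    """
    genuine = np.asarray(genuine_scores, dtype=float)
    impostor = np.asarray(impostor_scores, dtype=float)
    far = float(np.mean(impostor < threshold))
    frr = float(np.mean(genuine >= threshold))
    return far, frr

# Lowering the threshold lowers FAR but raises FRR, and vice versa.
far, frr = far_frr([0.10, 0.15, 0.20], [0.18, 0.40, 0.45], threshold=0.30)
# far = 1/3 (one impostor score below 0.30), frr = 0.0
```

Sweeping the threshold and plotting the resulting rate pairs yields the ROC curves used later in the paper.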

2. METHODOLOGY

We choose face and iris biometrics as the unimodal biometric systems. First, the iris, as one of the most accurate
biometrics, is chosen because of its advantages over other biometrics for verification systems. Second, the face is
chosen as the most natural, friendly, easy-to-acquire, and acceptable trait for identity authentication. At present,
there is no public multimodal database that includes both face and iris images for the same person. Because the
face and iris biometric traits are independent, we assign an arbitrary (but fixed) iris image to each face image.
Experiments have been done on Olivetti Research Laboratory (ORL) database (6) for face acquisition and Chinese
Academy of Sciences Institute of Automation (CASIA) database (7) for iris. The multimodal system design is based
on fusion at matching score level. Before fusion, Min-Max normalization technique is performed to transform scores
from different biometrics into a common domain. The Matcher Weighting Fusion Scheme is used to combine iris
and face matching scores after normalization. Several experiments are conducted to determine the best parameters
that may be used in the multimodal system, and included: 1) Changing the number of images in training sets, and
2) Changing the image quality of each individual used in the training and testing sets. The performance evaluation
of unimodal and multimodal biometric identification systems is described by: 1) Receiver Operating Characteristic
(ROC) curve is drawn for visual depiction of the performance, 2) Recognition Rate calculation for numerical
depiction of the performance as a single value, and 3) the decidability index (d') to measure how well a system can
discriminate between the genuine and impostor distributions. The larger the d', the better the performance.
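The decidability index can be computed directly from the two score distributions. A minimal sketch of Daugman's formula follows; the sample scores are hypothetical and the function is illustrative, not the paper's MATLAB code:

```python
import numpy as np

def decidability(genuine, impostor):
    """Decidability index d' = |mu_g - mu_i| / sqrt((var_g + var_i) / 2).

    Measures the separation between the genuine and impostor score
    distributions; a larger d' means better discrimination.
    """
    g = np.asarray(genuine, dtype=float)
    i = np.asarray(impostor, dtype=float)
    return abs(g.mean() - i.mean()) / np.sqrt((g.var() + i.var()) / 2.0)

# Well-separated distributions give a large d'.
d = decidability([0.1, 0.3], [0.7, 0.9])  # d' = 6.0
```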

Hierarchical Architecture EMR Security System
Users of healthcare systems differ according to their responsibility. Each user is restricted to accessing a
particular level of data according to the role assigned to him, and this role is dictated by his job description.
Health institutions have a pre-set job description for each health worker depending on qualification, and it is this
job description that decides at which level of security each health worker fits. Our EMR security system
distinguishes different levels of security: the basic level with the lowest security, the medium level with medium
security, and the advanced level with the highest security. As the advanced level is the highest security level,
where sensitive information is stored, it should only be accessed by authorized personnel. Our proposal is to apply
the multimodal biometric identification system to this advanced level of security. As the level of security increases,
the authentication approach increases in sensitivity and complexity too. The presented model aims to improve the
performance of a biometric-based authentication system, using a multimodal biometric system to overcome some
of the limitations of unimodal biometric-based authentication. By applying multimodal biometric-based
authentication to the highest authentication level of the electronic medical record, the system will: decrease the
total error rate (the false accept rate and the false reject rate), reduce spoof attacks on the biometric system, and
increase population coverage while reducing the enrollment failure rate. Choosing the face and iris recognition
systems is based on the advantages of each system. However, either of them used alone still has disadvantages
and limitations. Face accuracy is affected by expression, pose, and
them used alone still has disadvantages and limitations. Face accuracy is affected by expression, pose, and
illumination. Likewise, iris accuracy may be affected by eye disease, poor quality images at the time of acquisition,
and by an uncooperative client. Fusion of face and iris is a good method to augment the advantages of both while
reducing their limitations.

Unimodal subsystems

Enrolment mode: The use of biometrics for identification starts with user enrollment, where all users'
biometric features (face and iris) are acquired and stored in a database as template files for future use.
Face enrolment: The eigenfaces approach as described by Turk and Pentland for face recognition is
utilized (8). Face images are decomposed into a small set of characteristic feature images called eigenfaces, which
are used to represent both existing and new faces. The Matlab software development tool (5) is used to program the face
enrolment mode where part of ORL DB (6) is utilized as training set that is normalized and stored in the face DB.
The following are the steps conducted in the face enrolment mode:
1. The acquired initial set of face images (the training set) consists of Mt images. Each image is grayscale,
of size 112 x 92 pixels.
2. The images are normalized by converting each image matrix of size (112 x 92) pixels to an image
vector Ii of size (10304 x 1), where 10304 = 112 x 92. The training set matrix I is the set of image vectors, of
size (10304 x Mt), as given by:
Training set I = [I1, I2, ..., IMt] (1)
3. The mean face (Ψ) is the arithmetic average vector, of size (10304 x 1), as given by:
Ψ = (1/Mt) Σ_{i=1}^{Mt} Ii (2)
4. The deviation vector Φi for each image i, of size (10304 x 1), is given by:
Φi = Ii - Ψ, i = 1, 2, ..., Mt (3)
The difference matrix A = [Φ1, Φ2, ..., ΦMt] is of size (10304 x Mt).
5. To calculate the eigenfaces, we would first find the covariance matrix C of the training image vectors, as
given by:
C(10304 x 10304) = A(10304 x Mt) A^T(Mt x 10304) (4)
The dimension of this matrix is very large, which causes computational complexity, so we consider the matrix L of
size (Mt x Mt), which gives the same effect with reduced dimension, as given by:
L = A^T A (5)
6. The eigenvectors of C (matrix U) can be obtained from the eigenvectors of L (matrix V), as given by:
U(10304 x Mt) = A V(Mt x Mt) (6)
The columns Ui of U constitute the eigenfaces:
Eigenfaces = [U1, U2, U3, ..., UMt] (7)
7. The eigenvectors are ordered by their eigenvalues in descending order. The eigenvectors with the largest
eigenvalues capture more of the face variation than those with smaller eigenvalues.
8. Instead of using all Mt eigenfaces, we choose the highest m <= Mt as the eigenspace. The value of m was
set to 50% of the total number of eigenvectors.
9. All the images from the training set are projected into this eigenface space, obtaining the weight ωi of
each eigenvector Ui used to represent the image in the eigenface space, as given by:
ωi = Ui^T (I - Ψ), i = 1, 2, ..., m (8)
This is the projection of a training image onto each eigenvector.
As the projection onto the eigenface space describes the variation of the face distribution, it is possible to use
these new face descriptors for classification. Equation 9 is the representation of a training image in the eigenface
space; it gives the weight vector Ω, whose size is (m x 1):
Ω = [ω1, ω2, ..., ωm]^T (9)
In order to increase robustness to minor changes in expression, illumination, and slight variations of viewing
angle, we take more than one image per individual. We form a class of images for each individual, and the average
projection of this class is considered the representative vector of that class. In our system we have 30 persons,
and each person has x (either 2 or 4) training images in the database. The average projection Ω̄ of each person is
the mean of all the projected image vectors in that class, as given by:
Ω̄ = (1/x) Σ_{j=1}^{x} Ωj (10)
This is calculated for each class, and the averages are stored in the face database.
The above-mentioned procedures are performed periodically whenever there is free computational capacity.
The stored data is then utilized in the identification mode, decreasing execution time and thereby increasing
the performance of the entire system.
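The enrolment steps above can be sketched compactly in NumPy. The paper's implementation is in MATLAB, so the following is only an illustrative re-implementation; the function name and generic array shapes (Mt images of N pixels each, rather than the fixed 10304 x Mt sizes above) are assumptions for the sketch:

```python
import numpy as np

def train_eigenfaces(training_images, keep_ratio=0.5):
    """Eigenfaces enrolment (Turk & Pentland) using the small-matrix trick.

    training_images: (Mt, N) array, one flattened grayscale image per row.
    Returns the mean face, the eigenface basis U, and the per-image weights.
    """
    I = np.asarray(training_images, dtype=float).T  # (N, Mt), images as columns, Eq. (1)
    psi = I.mean(axis=1, keepdims=True)             # mean face, Eq. (2)
    A = I - psi                                     # deviation matrix, Eq. (3)
    L = A.T @ A                                     # (Mt, Mt) surrogate of C, Eq. (5)
    eigvals, V = np.linalg.eigh(L)
    order = np.argsort(eigvals)[::-1]               # largest eigenvalues first (step 7)
    m = max(1, int(keep_ratio * I.shape[1]))        # keep the top 50% (step 8)
    U = A @ V[:, order[:m]]                         # eigenfaces, Eq. (6)
    U /= np.linalg.norm(U, axis=0)                  # unit-length columns
    weights = U.T @ A                               # projections, Eq. (8); column i is Eq. (9)
    return psi, U, weights
```

The class averages of Equation 10 are then simply the column means of `weights` taken over each person's training images.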

Iris enrolment: The approach described by Daugman's algorithm for iris recognition is utilized (9). The iris
enrolment mode was conducted via the Matlab program implemented by Libor Masek (10, 11). Part of the CASIA
DB is utilized as a training set that is stored in the iris DB. The procedure for the iris enrolment mode consists of
four steps:
1. Iris image acquisition: In our proposed system, this phase is skipped and has not been
implemented. Instead, it is replaced by a large database of iris images (CASIA) (7), part of which is
utilized as a training set.
2. Iris segmentation: The first step after acquisition is to extract the iris image from the input eye
image. The iris area is considered as a circular crown limited by two circles. The iris inner (pupillary) and outer
(scleric) circles are detected.
3. Iris normalization: Iris segmentation results may appear at different positions and scales and
thus require normalization. The normalization process maps the circular iris image into a rectangular
representation.
4. Feature extraction and encoding: Once the iris texture is available, features are extracted to
generate the biometric template.

Identification Mode

Face Subsystem: This process could be detailed via three phases:
Phase I: Preprocessing: When a healthcare user's face image (T) is given as input to check for face identity, a
preprocessing phase is performed, as described in the face enrolment mode, to produce the weight vector Ω.
Phase II: Similarity Matching: The weight vector Ω is compared with each face class average Ω̄i, which
represents the face key for each class of images previously obtained in the face enrolment mode and stored in the
face database. The Euclidean distance εi (8) is used to find the distance between two face key vectors, and is given
by:
εi = sqrt( Σ_{k=1}^{m} (ωk - ω̄i,k)^2 ) (11)
where εi is the Euclidean distance between the probe weight vector and the i-th face class, Ω = [ω1, ..., ωm] is the
probe weight vector, Ω̄i is the average weight vector of the i-th face class, and m is the number of eigenfaces. The
smallest distance is considered to be the face match (FM) score result.
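Phase II amounts to a nearest-class search over the stored class averages. A minimal illustrative sketch (names and sample vectors are hypothetical, not the paper's MATLAB code):

```python
import numpy as np

def face_match(probe_weights, class_averages):
    """Return the face match (FM) score: the smallest Euclidean distance
    (Eq. 11) between the probe weight vector and any stored class average.

    probe_weights: (m,) projection of the probe face in eigenface space.
    class_averages: (C, m), one average weight vector per enrolled class.
    """
    diffs = np.asarray(class_averages, dtype=float) - np.asarray(probe_weights, dtype=float)
    distances = np.linalg.norm(diffs, axis=1)   # Euclidean distance to each class
    best = int(np.argmin(distances))
    return float(distances[best]), best

score, cls = face_match([0.0, 0.0], [[3.0, 4.0], [1.0, 0.0]])
# score = 1.0, cls = 1 (the second class is nearest)
```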
Phase III: Score Normalization: To ensure a meaningful combination between face and iris scores, the
scores must be transformed to a common domain. Various techniques have been proposed in the literature to
normalize the matching scores of biometric systems. It has been found that the Min-Max and z-score normalization
techniques generally outperform other techniques (12). Accordingly, in this thesis, we experimented with the
Min-Max normalization formula, as given by:
SFn = (FM - min(FMt)) / (max(FMt) - min(FMt)) (12)
where SFn is the normalized score, FMt is the set of all face matching scores of the training set, FM is the face
match score prior to normalization, and max(FMt) and min(FMt) specify the end points of FMt.
This matching score SFn is used as input for the fusion module where the final matching score is generated.
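Equation 12 amounts to a one-line rescaling into [0, 1]. A minimal sketch, assuming the full set of training scores is available (the function name and sample values are illustrative):

```python
def min_max_normalize(score, training_scores):
    """Min-Max normalization (Eq. 12): map a raw face match score into
    [0, 1] using the range of the training-set scores."""
    lo, hi = min(training_scores), max(training_scores)
    return (score - lo) / (hi - lo)

sfn = min_max_normalize(5.0, [0.0, 2.0, 10.0])  # 0.5
```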

Iris subsystem: Matching: The identification procedure for iris will follow the same algorithm as that of the
enrollment procedure. Then a comparison between the requesting healthcare user template and all templates in
the iris database will be performed to obtain the least dissimilarity score which will be considered as the iris
matching (IM) score. This computational step is performed using Hamming Distance (HD). The iris matching score
is used as the second input for the fusion module where the final matching score is generated.
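The Hamming distance between two binary iris codes can be sketched as follows. Following Masek's approach, bits flagged as noise (eyelids, eyelashes, reflections) in either template's mask are excluded from the comparison; the exact masking convention here is an illustrative assumption, not a transcription of the cited implementation:

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two binary iris codes.

    Only bits valid in both noise masks are compared; the result is the
    fraction of those bits that disagree (0 = identical, ~0.5 = unrelated).
    """
    valid = np.logical_and(mask_a, mask_b)
    if not valid.any():
        return 1.0  # no usable bits: treat as a complete mismatch
    disagree = np.logical_xor(code_a, code_b) & valid
    return float(disagree.sum()) / float(valid.sum())

hd = hamming_distance([1, 0, 1, 1], [1, 1, 1, 0],
                      [True] * 4, [True] * 4)  # 0.5
```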

Multimodal Fusion System: Multimodal systems can perform fusion in one of three different modes: serial,
parallel, or hierarchical (13). The presented multimodal system works in parallel mode, where all the traits are input
at the same time and the multimodal system uses all the scores together to make a decision. When a healthcare
user tries to access the highest level of the database, his or her face and iris images are acquired. The processing
of these images, up until fusion, is carried out separately in the face recognition and the iris recognition
subsystems. After normalization, the face and iris matching scores are combined into a single matching score
(scalar). This score is compared to a threshold to make the final decision about whether to accept or reject the
healthcare user. The threshold can be adjusted so that the performance of the system meets the requirements of
the domain. If the healthcare user is identified, the system will allow him to access this highest level of the
database. The steps involved in the proposed multimodal fusion system are shown in Figure 1.

Match Scores Fusion
Fusion at the matching score level is implemented, as it offers the best trade-off between information
content and ease of fusion compared with the other levels. The normalized face score SFn is combined with the
iris match score (IM) using the matcher weighting fusion scheme (13, 14), as given by:
F = (1 - W) SFn + W·IM (13)
where F is the total weighted fusion score, W is a weighting factor (0 < W < 1; we choose it between 0.1 and 0.9),
SFn is the normalized face matching score, and IM is the iris matching score.
The performance of the classifiers differs, so it is necessary to apply different weights when combining the
individual classifiers, i.e. face and iris. In order to calculate the optimum fusion weighting factor W, we performed
several trials where W is varied from 0.1 to 0.9 in steps of 0.1. When the face weight changes from 0.1 to 0.9,
the iris weight changes from 0.9 to 0.1, so that the two weights always sum to 1.0.

Fig. 1.

To calculate the optimum threshold for each weight, we perform several trials with different thresholds. The overall
performance at each step is then evaluated using the Receiver Operating Characteristic (ROC) curve (15), and the
optimum weighting factor is defined as the weight that gives the highest performance.
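The weighted fusion of Equation 13 and the sweep over W can be sketched as follows; the face and iris score values are hypothetical, and in the real system the best W is chosen from the ROC evaluation described above:

```python
import numpy as np

def fuse(face_score, iris_score, w):
    """Matcher-weighting fusion (Eq. 13): F = (1 - W)*SFn + W*IM,
    where w is the iris weight and (1 - w) the face weight."""
    return (1.0 - w) * face_score + w * iris_score

# Sweep W from 0.1 to 0.9 in steps of 0.1; the two weights always sum to 1.
weights = np.round(np.arange(0.1, 1.0, 0.1), 1)
fused = [fuse(0.40, 0.20, w) for w in weights]
# e.g. at the reported optimum iris weight of 0.8: fuse(0.40, 0.20, 0.8) = 0.24
```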

Threshold comparison and decision
The acceptance or rejection of a healthcare user is dependent on the match score (weighted fusion score F)
falling above or below the threshold. The threshold is adjustable so that the biometric system can be more or less
strict, depending on the requirements of the biometric application. The optimum threshold for each weight that
gives the minimum FAR with an acceptable FRR is calculated. Several trials with 0.02 increments from 0.15 to 0.50
were performed; this interval contains the most effective changes in the FAR and FRR rates. The overall
performance at each step is then evaluated using the ROC curve. This process is repeated for each weight, and
the overall performance for each weight is evaluated to choose the best weight. When the weighted fusion score
(F) is less than the preset threshold, the healthcare user is identified and thus authorized to use the system;
otherwise, user access is denied.
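The final decision step reduces to a single comparison. In this sketch scores are distances (lower means more similar), matching the Hamming- and Euclidean-distance scores above; the default threshold value is hypothetical:

```python
def decide(fusion_score, threshold=0.31):
    """Accept the healthcare user when the weighted fusion score F falls
    below the preset threshold; otherwise deny access (distance scores:
    lower = more similar)."""
    return "accepted" if fusion_score < threshold else "denied"

decide(0.20)  # "accepted"
decide(0.45)  # "denied"
```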

3. EXPERIMENTS

Database Description
The ORL face set contains 10 different images of each of 40 distinct persons. For some persons, the images
were taken at different times, with slightly varying lighting and facial expressions. The size of each image is
92 x 112 (width x height) with 8-bit gray levels. The second DB is the CASIA iris database, originated from the
National Laboratory of Pattern Recognition (NLPR). This database (Version 1.0) includes 756 iris images from 108
eyes. The images are grayscale with a pixel resolution of 320 x 280 (a photograph of the eye taken from
4 centimeters away using a near-infrared camera).
7 face images for each class in ORL face database are chosen randomly out of 10; hence 280 face images
are used. From CASIA iris database, 40 eye classes are selected, 7 iris images for each, hence 280 iris images are
used. The databases are divided into two parts: clients and impostors. A total of 30 persons were selected to act as
clients and the remaining ten persons as impostors. When a person acted as a client, several samples of that
person were chosen as training samples and three samples were chosen as testing samples. When a person
acted as an impostor, 14 samples (7 for face and 7 for iris) were used for testing.

Experiment Description
Experiments are conducted on three biometric systems, namely face, iris, and a multimodal system using
face and iris fusion. Several experiments are conducted for each individual unimodal system (face and iris,
separately) to decide on the optimum number of training and test images to be used and on the best training sets
for each individual unimodal system. Based on the results of these experiments, the best settings from each
unimodal subsystem are utilized for the multimodal fusion system that is applied at the highest security level of the
EMR security system. The system performance is tested for face and iris separately, and for the proposed
multimodal fusion system. Results from the multimodal fusion system are compared with the corresponding results
from the individual unimodal subsystems to validate its effect on the security performance.
To perform this study, two groups (with different images) of two experiments each (with different numbers of
images), making a total of four combination sets, are conducted on the face and iris subsystems and on the fusion
system separately. The following details these experiments.

Changing the Number of Images in Training Sets
To demonstrate the effect of changing the number of images in training sets on the performance of the
system two combination sets (CS) on group 1 are formulated:
CS1 (1-2, 5-7): The first 2 samples are used in the enrollment phase; while the last 3 samples are used for
testing.
CS2 (1-4, 5-7): The first 4 samples are used in the enrollment phase; while the last 3 samples are used for
testing.

Changing the Quality of Images in Training and Testing Sets
This forms group 2. The same experiments as in group 1 are repeated, but with different images for the training
and testing sets, to study the effect of image quality in relation to the number of training images on the overall
performance of the system.
CS3 (6-7, 1-3): Other 2 samples (the last 2) are used in the enrollment phase; while other 3 samples (the
first 3) are used for testing.
CS4 (4-7, 1-3): Other 4 samples (the last 4) are used in the enrollment phase; while other 3 samples (the
first 3) are used for testing.

4. RESULTS

Experiments were done at two levels. In level I, face and iris are tested individually. In level II,
the proposed multimodal system is tested.

Performance Analysis of Unimodal Subsystems
Face unimodal: Figure 2 is the ROC curve showing the performance of CS1 and CS2 for the face when
operated as a unimodal subsystem in group 1. On the ROC curve, the higher the line is drawn, the greater the
GAR is and, therefore, the more accurate the system. There are different FAR intervals, each of which has a
corresponding GAR value. CS2 (1-4, 5-7) has the highest value of 44% GAR at 0% FAR.
Figure 3 shows the ROC curve of group 2. CS4 (4-7,1-3) has the highest value of 65% GAR at 0% FAR.

Fig.2.

Fig.3.

Iris unimodal: Figure 4 shows the ROC curve of the iris for CS1 and CS2. On this graph, CS2 (1-4, 5-7) has the
highest value of 50% GAR at 0% FAR. Figure 5 shows the ROC curve of group 2. On this graph, CS4 (4-7, 1-3) is
just 1% higher than CS3.
These two CSs (CS2 and CS4), which have higher performance than the other CSs, are used in the fusion
multimodal biometric identification system. The higher-performing of CS2 and CS4 will then be taken as
representative of our fusion multimodal biometric identification system.


Fig. 4.


Fig. 5.

In all experiments, an improvement in RR performance was noticed when increasing the number of training
set images, with different percentages, as shown in Table 1 and Table 2. In group 1: iris 3.75% (95% to 98.75%),
and face 16.25% (52.5% to 68.75%). In group 2: iris 0.63% (95% to 95.63%), and face 3.75% (71.25% to 75%).
However, this improvement is not practically significant, as for the face unimodal the performance in group 2 with 2
images (CS3) is better than in group 1 with 4 images (CS2). Furthermore, for the iris unimodal the performance in
group 2 with 4 images (CS4) does not show significant improvement from increasing the number of images in the
training set (only 0.63% over CS3). This indicates that the number of images alone, without including various image
qualities, will not ensure performance improvement; that is, the overall group of images should span different
image qualities (from poor to best) to match the actual circumstances of the real user. The results of this study
showed that iris has better performance than face.

Table 1. Performance Comparison between Face and Iris for Group 1

                  |      d'       | GAR at 0% FAR |       RR
Combination Sets  | Face  | Iris  | Face  | Iris  | Face   | Iris
CS1 (1-2, 5-7)    | 0.67  | 3.13  | 16%   | 91%   | 52.50% | 95.00%
CS2 (1-4, 5-7)    | 0.92  | 4.22  | 44%   | 98%   | 68.75% | 98.75%


Table 2. Performance Comparison between Face and Iris for Group 2

                  |      d'       | GAR at 0% FAR |       RR
Combination Sets  | Face  | Iris  | Face  | Iris  | Face   | Iris
CS3 (6-7, 1-3)    | 1.25  | 3.05  | 49%   | 91%   | 71.25% | 95.00%
CS4 (4-7, 1-3)    | 1.69  | 3.58  | 56%   | 92%   | 75.00% | 95.63%

Performance Analysis of Multimodal Fusion System
In CS2 the highest performance is achieved with an iris weight (IW) of 0.8 and a face weight (FW) of 0.2, giving an
RR of 99.38% at 0% FAR; while in CS4 the highest performance is achieved when the iris weight (IW) is 0.8 and
the face weight is 0.2, giving an RR of 98.75% at 0% FAR, as shown in Table 3. Figure 6 shows the ROC curve for
the performance of CS2 and CS4. From the graph we notice that CS2 (1-4, 5-7) has better performance than CS4.


Table 3. Performance Comparison of CS2 and CS4 in the fusion system

Combination set        | d'    | GAR at 0% FAR | RR
CS2 (IW 0.8, FW 0.2)   | 4.62  | 99%           | 99.38%
CS4 (IW 0.8, FW 0.2)   | 4.14  | 98%           | 98.75%



Fig. 6.

Comparison between the Three Security Systems
1. Fusion of multiple biometrics at the score level produces new scores whose genuine and impostor distributions exhibit a higher degree of separation than those produced by the individual matchers: the multimodal system has a higher d' value than either unimodal system.
2. The best combination set for fusion is CS2 (1-4,5-7), with a GAR of 99%, an RR of 99.38% at 0% FAR, and a d' of 4.62.
3. Fusion of face and iris improved system performance, as shown in Table 4. In CS2 the recognition rate (RR) increases from 68.75% (face biometric) and 98.75% (iris biometric) to 99.38% (fusion system). In CS4 the recognition rate increases from 75% (face biometric) and 95.63% (iris biometric) to 98.75% (fusion system). This is illustrated in Figure 7.
4. We observed that multimodal biometrics offers a way to relax the image-quality requirements.
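The d' (decidability) values compared in point 1 measure how far apart the genuine and impostor score distributions lie relative to their spread. A minimal sketch, in Python rather than the paper's MATLAB, using the standard decidability formula d' = |mu_gen - mu_imp| / sqrt((var_gen + var_imp) / 2) on hypothetical score samples:

```python
import statistics as st

def d_prime(genuine, impostor):
    """Decidability index: separation of the genuine and impostor score
    distributions, normalized by their pooled standard deviation."""
    mu_g, mu_i = st.mean(genuine), st.mean(impostor)
    var_g, var_i = st.pvariance(genuine), st.pvariance(impostor)
    return abs(mu_g - mu_i) / ((var_g + var_i) / 2) ** 0.5

genuine = [0.80, 0.85, 0.90, 0.95]   # hypothetical fused genuine scores
impostor = [0.10, 0.20, 0.15, 0.25]  # hypothetical fused impostor scores
print(round(d_prime(genuine, impostor), 2))
```

A larger d' means less overlap between the two distributions, so a decision threshold can be placed between them with fewer errors, which is why the fusion system's higher d' translates into better GAR and RR.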


Table 4. Performance Comparison between Face, Iris and Fusion

                        d'                        GAR at 0% FAR             RR
  Combination Sets    Face   Iris   Fusion      Face   Iris   Fusion      Face     Iris     Fusion
  CS2 (1-4,5-7)       0.92   4.22   4.62        44%    98%    99%         68.75%   98.75%   99.38%
  CS4 (4-7,1-3)       1.69   3.58   4.14        56%    92%    98%         75.00%   95.63%   98.75%

The best performance rates are highlighted.


Fig. 7. Recognition rates of face, iris, and fusion for CS2 and CS4.

Critical Recognition Rate
The recognition rate is a function of the balance between the false acceptance rate (FAR) and the false rejection rate (FRR); recognition is highest when FAR and FRR are approximately equal. That operating point is acceptable for civil applications, but in high-security systems FAR should be kept as low as possible while maintaining an acceptable FRR.
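The FAR/FRR trade-off described above can be illustrated with a simple threshold sweep. The scores and thresholds below are hypothetical, and the accept-if-score-at-or-above-threshold rule is our own illustrative convention:

```python
def far_frr(genuine, impostor, threshold):
    """FAR: fraction of impostor scores accepted; FRR: fraction of genuine
    scores rejected, under an accept-if-score >= threshold rule."""
    far = sum(s >= threshold for s in impostor) / len(impostor)
    frr = sum(s < threshold for s in genuine) / len(genuine)
    return far, frr

genuine = [0.60, 0.70, 0.80, 0.90]   # hypothetical genuine match scores
impostor = [0.20, 0.40, 0.55, 0.65]  # hypothetical impostor match scores

# Sweeping the threshold upward drives FAR down at the cost of FRR.
# A high-security system picks the lowest threshold giving FAR = 0,
# then reads off the FRR it must tolerate at that operating point.
for t in [0.5, 0.6, 0.7]:
    print(t, far_frr(genuine, impostor, t))
```

Raising the threshold from 0.5 to 0.7 in this toy example takes FAR from 50% to 0% while FRR climbs from 0% to 25%, which is exactly the trade-off a high-security deployment accepts.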

5. CONCLUSION

This work, based on a Master's thesis, proposes applying a multimodal biometric security system at the highest level of security in the hierarchical architecture of the electronic medical record. The combination of the iris and face unimodal systems into the proposed multimodal fusion system is shown to perform better than either unimodal system separately. We also observed that multimodal biometrics offers a way to relax the image-quality requirements.

