


A New Human Identification Method: Sclera Recognition

Zhi Zhou, Student Member, IEEE, Eliza Yingzi Du, Senior Member, IEEE, N. Luke Thomas, and Edward J. Delp, Fellow, IEEE
Abstract: The blood vessel structure of the sclera is unique to each person, and it can be obtained remotely and nonintrusively in the visible wavelengths. Therefore, it is well suited for human identification (ID). In this paper, we propose a new concept for human ID: sclera recognition. This is a challenging research problem because images of sclera vessel patterns are often defocused and/or saturated and, most importantly, the vessel structure in the sclera is multilayered and has complex nonlinear deformations. This paper has several contributions. First, we propose a new approach for human ID: sclera recognition. Second, we develop a new method for sclera segmentation which works for both color and grayscale images. Third, we design a Gabor wavelet-based sclera pattern enhancement method to emphasize and binarize the sclera vessel patterns. Finally, we propose a line-descriptor-based feature extraction, registration, and matching method that is illumination, scale, orientation, and deformation invariant and can mitigate the multilayered deformation effects and tolerate segmentation error. The experimental results show that sclera recognition is a promising new biometric for positive human ID.

Index Terms: Biometrics, line descriptor, multilayered vessel pattern recognition, sclera recognition, sclera segmentation.
I. INTRODUCTION

BIOMETRICS is the use of physical, biological, and behavioral traits to identify and verify a person's identity automatically. There are many different traits that can be used as biometrics, including fingerprint, face, iris, retina, gait, and voice [1]-[13]. Each biometric has its own advantages and disadvantages [2], [3], [13]-[15]. Table I compares the different biometrics using the following objective measures: accuracy [2], [16], reliability [17], stability [3], [18], identification (ID) [19], ID capability at a distance [19], user cooperation [18], and scalability to a large population [16]. For instance, face recognition is the natural way that humans identify a person, but people's faces can change dramatically over the years, and this change can affect recognition accuracy [4]-[7]. The fingerprint pattern is very stable over a person's
Manuscript received June 16, 2010; revised October 11, 2010 and March 22, 2011; accepted August 6, 2011. Date of publication November 1, 2011; date of current version April 13, 2012. This work was supported by the Office of Naval Research under Award N00014-07-1-0788. This paper was recommended by Associate Editor R. van Paassen.
Z. Zhou, E. Y. Du, and N. L. Thomas are with the Biometrics and Pattern Recognition Laboratory, Department of Electrical and Computer Engineering, Indiana University-Purdue University Indianapolis, Indianapolis, IN 46202 USA (e-mail: zhizhou@iupui.edu; yidu@iupui.edu; nlulthom@iupui.edu).
E. J. Delp is with the School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47907 USA (e-mail: ace@ecn.purdue.edu).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TSMCA.2011.2170416
life, and its recognition accuracy is high. However, fingerprint recognition cannot be applied for ID at a distance [8], [9], [20]. Aside from these measures, different people may object to certain methods for various reasons, including culture [21], religion [22], hygiene [23], medical condition [24], personal preference [25], etc. For example, in some cultures or religions, acquiring facial image(s) may make some users uncomfortable [26]. Fingerprints may raise hygiene issues and public health concerns since fingerprinting is a contact-based biometric [27]. In addition, in real-life applications, some biometrics may be more applicable than others in certain scenarios. For example, in general, the accuracy of iris or fingerprint recognition is higher than that of facial recognition. However, in a video surveillance application, facial recognition may be preferable since it can be integrated into existing surveillance systems. To achieve high accuracy, iris recognition needs to be performed in the near-infrared (NIR) spectrum [10], [11], which requires additional NIR illuminators. This makes it very challenging to perform remote iris recognition in real-life scenarios [11].
Overall, no biometric is perfect or can be applied universally. In order to increase population coverage, extend the range of environmental conditions, improve resilience to spoofing, and achieve higher recognition accuracy, multimodal biometrics has been used to combine the advantages of multiple biometrics [28]-[30]. Researchers are also trying to find new biometrics to provide more options for human ID [31]. The sclera can be acquired at a distance under visible wavelength illumination. In this paper, we propose a new human ID method: sclera recognition. Our experimental results show that sclera recognition can achieve recognition accuracy comparable to iris recognition in the visible wavelengths.
This paper is organized as follows. Section II covers the background of the sclera and sclera vessel patterns. In Section III, we propose an automatic segmentation approach for both color and grayscale images. In Section IV, Gabor filter and adaptive thresholding methods are used to remove illumination effects, which ensures that the proposed method is illumination invariant. In Section V, we design the line descriptor method that can extract patterns at different orientations, making orientation-invariant matching possible. In addition, we use the iris center as a reference when generating the line descriptors, allowing for translation-invariant matching. We design the sclera template registration step (Section VI-A) to ensure global translation, orientation, and scaling invariance and the sclera template matching step (Section VI-B) to ensure local translation, rotation, and scaling invariance. In Section VII, we present our experimental results, and we draw our conclusions in Section VIII.
TABLE I
MAIN BIOMETRICS PROPERTIES

Fig. 1. Structures of the eye and sclera region.
II. BACKGROUND

The sclera is the white and opaque outer protective covering of the eye (Fig. 1). The sclera completely surrounds the eye and is made up of four layers of tissue: the episclera, stroma, lamina fusca, and endothelium [32], [33]. The structure of the blood vessels is visible and stable over time, and it is formed randomly for each person [34], [35]. With increasing age, collagen and elastic fibers deteriorate, glycosaminoglycan loss and sclera dehydration occur, and lipids and calcium salts accumulate, but the blood vessels do not deteriorate [34]-[37]. As a result, the blood vessel patterns are unique, with both genetic and developmental components determining their structure [32]-[37]. Moreover, sclera vessel patterns can be obtained nonintrusively using visible wavelength illumination. Therefore, the vessel patterns in the sclera can be used for human ID at a distance under visible wavelength illumination. However, how do we perform sclera recognition? Can it achieve high recognition accuracy?
A few researchers have used the first layer of the sclera blood vessel patterns for recognition, i.e., conjunctival vascular pattern recognition [38]-[41]. The conjunctiva is a clear mucous membrane, made up of epithelial tissue, and consists of cells and an underlying basement membrane that cover the sclera and line the inside of the eyelids [42]. However, in an image, it is very hard to separate the conjunctival vasculature from the remaining sclera vessel patterns. As a result, treating the multilayered pattern as a single-layered pattern did not generate satisfying results (Table II) [38]-[41].
Sclera vessel patterns are formed from several layers [32], [33] and move nonlinearly as the eye moves (Fig. 2). Fig. 2(a) and (b) were acquired in a sequence of video imagery of an eye within 1 s. The upper eyelid in Fig. 2(b) is slightly more open, as compared with that in Fig. 2(a). In the zoomed-in view of the sclera patterns, the vascular pattern is made up of multiple layers that move relatively independently.
TABLE II
CONJUNCTIVAL VASCULAR PATTERN RECOGNITION
Fig. 2. Example of layered nonlinear deformations in multiple images of the
same eye. In particular, note the areas as denoted by the arrows. Both images
were acquired in a video sequence within 1 s.
Specifically, the curvy vessel Y (with points A1, B1, and C1) and the smooth vessels X1 (with points B2 and C2) and X2 (with point A2) have different relative positions in the two images. The smooth vessels X1 and X2 stay relatively similar in position, but their orientation shifts toward the lower right portion of the image, whereas the curvy vessel Y shifts down in Fig. 2(b) but retains its orientation. Also, the vessel structure (where the vessels cross and relate to each other) around points A1 and A2 is significantly different between the two images. Points B1 and B2 are separate in the top image but are touching in the bottom image. Again, this is due to the transition of the curvy vessel Y with respect to the straight vessel X1.
As a result, traditional vessel recognition methods would not
work well for sclera recognition, and it is more challenging to
perform sclera vascular pattern recognition.
To further discuss how multitudes of sclera vessel patterns emerge from the interaction of two independent vessel layers, we synthesize two representative vessel patterns m and n from vessel layers X and Y, respectively. Then, in the middle of Fig. 3, we present some of the myriad individual layer deformations that these patterns can exhibit. The combination of the two patterns m and n under these individual layer deformations can result in many different observable nonlinear deformations. Note particularly that the overall structure of the emergent pattern (crossing points, relations between vessel landmarks, etc.) can change significantly with the multiple layers' interactions.
Fig. 3. Example of how different patterns can emerge from multiple independent layers.

Fig. 4. Proposed sclera recognition system.

In this paper, we propose an illumination-, orientation-, translation-, and deformation-invariant line-descriptor-based sclera recognition method. It is composed of four modules: sclera segmentation, sclera vessel feature extraction, sclera vessel feature matching, and matching decision (Fig. 4).
III. SCLERA SEGMENTATION

Segmentation is the first step in sclera recognition. Many researchers have worked on the segmentation of the pupil and iris boundaries for iris recognition [10], [11], [43]-[62] in the NIR wavelengths. However, in these approaches, the sclera information is often discarded. In [63]-[65], Proenca et al. proposed segmentation algorithms for iris images in the visible wavelengths using the UBIRIS database. However, these approaches are designed for iris segmentation and have therefore not been verified as suitable for sclera recognition. In [38], Derakhshani et al. applied contrast-limited adaptive histogram equalization to enhance the green color plane of the RGB image and a multiscale region growing approach to identify the sclera vessels from the image background, but they used manual segmentation and registration. In [41], Crihalmeanu et al. presented a semiautomated system for sclera segmentation.
In this paper, we propose a fully automatic sclera segmentation method for both color and grayscale images. The block diagram of the segmentation algorithm is shown in Fig. 5; it includes estimation of the glare area, iris boundary detection, estimation of the sclera region in color or grayscale images, and eyelid and iris boundary detection and refinement. The color and grayscale paths differ in one step: the sclera region in a color image is estimated using the best representation between two color-based techniques, whereas the sclera region in a grayscale image is extracted by Otsu's threshold method [66]-[69].
A. Estimation of Glare Area

The glare area is usually a small bright area of the iris image. Glare inside or near the pupil can be modeled as a bright object with sharp edges on a much darker background. However, in some situations, there could be multiple areas with very bright illumination and unwanted glare areas (glare that is not inside the iris or pupil). For example, in Fig. 6(a) and (b), there are glares on the surface of the cornea which create challenges for glare detection. A Sobel filter is first applied to highlight the desired glare areas (Fig. 6). For glare in the sclera or skin areas, the local background is often brighter than the pupil or iris, so after Sobel filtering it does not stand out as much as glare in the desired area. Note that the glare detection method is applied to grayscale images; if the original image is a color image, a grayscale transformation is applied first (Fig. 6).
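As an illustration of this step, the following minimal Python sketch (NumPy/SciPy) highlights candidate glare regions with a Sobel filter; the normalization and the 0.5 threshold are assumptions for illustration, not values from the paper.

```python
import numpy as np
from scipy import ndimage

def glare_map(gray):
    """Highlight small, bright glare regions with sharp edges (a sketch;
    the paper's exact filtering and threshold are not specified)."""
    g = gray.astype(float)
    gx = ndimage.sobel(g, axis=1)           # horizontal gradient
    gy = ndimage.sobel(g, axis=0)           # vertical gradient
    mag = np.hypot(gx, gy)                  # edge magnitude
    # Glare on the dark pupil/iris produces the strongest edges; glare on
    # brighter sclera or skin backgrounds responds more weakly.
    return mag > 0.5 * mag.max()            # hypothetical threshold
```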
B. Iris Boundary Detection

In this paper, we focus on sclera recognition using frontal-looking eyes. To improve the segmentation speed, the pupil and iris regions are modeled as circular boundaries, and typical circular iris segmentation methods were used [10], [44], [48], [51], [52], [56], [59], [70], [71]. Here, the pupil and iris regions are segmented using a greedy angular search [72], which is performed on the edge-detected image and can accurately detect the pupil boundaries regardless of gaze direction and eyelid/eyelash occlusion. The algorithm searches along the radial direction at a predefined set of angles to estimate the pupil boundaries and then iteratively follows the highest edge value along the angular direction for π/2 radians from each of these starting angles (Fig. 7).
Starting at the estimated center of the pupil, the algorithm searches along a radial direction for the highest edge value within some radial length range:
$$(u, v) = \arg\left\{(x, y) \,\middle|\, \max S(x, y),\ \arctan\!\left(\frac{y - y_0}{x - x_0}\right) = \theta\right\} \tag{3.1}$$
where S(x, y) is the edge-detected image, (x_0, y_0) is the estimated pupil center, and θ is the angular search direction. Then, using this detected point as the start of the search, the algorithm iteratively searches for the highest edge value along the angular direction, constraining the possible outcomes to the next pixel in the defined angular direction and its two nearest neighbors along the radial dimension:
$$(u, v) = \arg\left\{(x, y) \,\middle|\, \max S(x, y),\ x = x_0 + r\cos\theta,\ y = y_0 + r\sin\theta,\ r' - 1 \le r \le r' + 1\right\} \tag{3.2}$$
where r' is the previous iteration's radius. The search continues for π/2 radians and combines the aggregate results for all initialization orientations. The final result will be an image with each pixel's value equal to the number of individual radial searches that include that particular pixel.
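A rough Python sketch of the search in (3.1) and (3.2) follows. The angular step, the radial range, bounds checking, and the accumulation over all starting angles are simplified or omitted; treat it as a sketch of the idea rather than the authors' implementation.

```python
import numpy as np

def radial_edge_point(S, x0, y0, theta, r_min, r_max):
    """Eq. (3.1): strongest edge along direction theta within [r_min, r_max)."""
    radii = np.arange(r_min, r_max)
    xs = np.round(x0 + radii * np.cos(theta)).astype(int)
    ys = np.round(y0 + radii * np.sin(theta)).astype(int)
    return radii[np.argmax(S[ys, xs])]

def greedy_angular_search(S, x0, y0, theta0, r_min, r_max,
                          step=np.deg2rad(1.0)):
    """Eq. (3.2): from the point found by (3.1), follow the highest edge
    value along the angular direction for pi/2 radians, constraining the
    radius to r' - 1 <= r <= r' + 1 at every step."""
    r = radial_edge_point(S, x0, y0, theta0, r_min, r_max)
    path = [(theta0, r)]
    for theta in theta0 + np.arange(step, np.pi / 2, step):
        candidates = np.array([r - 1, r, r + 1])   # nearest radial neighbors
        vals = S[np.round(y0 + candidates * np.sin(theta)).astype(int),
                 np.round(x0 + candidates * np.cos(theta)).astype(int)]
        r = candidates[np.argmax(vals)]
        path.append((theta, r))
    return path
```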
Fig. 5. Proposed sclera segmentation process.
Fig. 6. Glare detection approach. (a) Color. (b) Grayscale. (c) Convolved images.
Fig. 7. Iris boundary detection. (a) Finding the start point. (b) Searching along
the radial direction.
C. Estimation of Sclera Area

Our sclera detection approach operates on either color or grayscale images.
1) Estimation of Sclera Area in Color Images: The sclera is a nonskin white area of the eye, and two approaches were used to find potential sclera areas.
1) Nonskin area: The sclera area is the nonskin area of the eye region. This allows simple heuristics to be used to classify areas in the image as skin or not skin, as described in [72] and [73]; a binary map of the sclera is then assumed to be the inverse of the skin map. The first color distance map, for natural illumination, is calculated as
$$\mathrm{CDM}_1(x,y) = \begin{cases} 1, & R > 95,\ G > 40,\ B > 20,\\ & \max(R,G,B) - \min(R,G,B) > 15,\\ & |R - G| > 15,\ R > G,\ R > B \\ 0, & \text{else.} \end{cases} \tag{3.3}$$
The second color distance map, for flash illuminators, is calculated as
$$\mathrm{CDM}_2(x,y) = \begin{cases} 1, & R > 220,\ G > 210,\ B > 170,\\ & \max(R,G,B) - \min(R,G,B) > 15,\\ & |R - G| \le 15,\ R > B,\ B > G \\ 0, & \text{else.} \end{cases} \tag{3.4}$$
Then, the sclera map is calculated using the two color distance maps:
$$S_1(x,y) = \begin{cases} 1, & \mathrm{CDM}_1(x,y)\ \mathrm{OR}\ \mathrm{CDM}_2(x,y) = 0 \\ 0, & \text{else.} \end{cases} \tag{3.5}$$
The parameters in (3.3) and (3.4) are implemented as
described in [72] and [73].
2) White area of the eye: The sclera area is white and usually brighter than the remaining parts of the eye in an image. In other words, the sclera area should have low hue (about the bottom 1/3), low saturation (the bottom 2/5), and high intensity (the top 2/3) in the HSV color space. Therefore, the following heuristic is developed:
$$S_2(x,y) = \begin{cases} 1, & \text{if } H(x,y) \le th_h \text{ and } S(x,y) \le th_s \text{ and } V(x,y) \ge th_v \\ 0, & \text{else,} \end{cases} \tag{3.6}$$
with the thresholds calculated as
$$th_h = \arg\min_t \left\{ \sum_{x=1}^{t} p_h(x) \ge T_h \right\},\qquad th_s = \arg\min_t \left\{ \sum_{x=1}^{t} p_s(x) \ge T_s \right\},\qquad th_v = \arg\min_t \left\{ \sum_{x=1}^{t} p_v(x) \ge T_v \right\}. \tag{3.7}$$
Here, p_h(x) is the normalized histogram of the hue image, p_s(x) is the normalized histogram of the saturation image, p_v(x) is the normalized histogram of the value image, and S_2(x, y) is the binary sclera map. In this way, we generate two binary maps, S_1(x, y) and S_2(x, y). The thresholds T_h, T_s, and T_v are 1/3, 2/5, and 2/3, respectively. Morphological operations are applied to the two binary maps to remove isolated pixels and small regions of contiguous pixels. Both maps are sketched in code below.
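A minimal sketch of both estimates is given below; the convex-hull computation and the fusion of (3.8) are omitted. Channel scaling to [0, 1] and the histogram bin count are assumptions. Note that, with the cumulative rule of (3.7), keeping the top 2/3 of intensities corresponds to cutting at the bottom-1/3 percentile, which is how T_v is applied here.

```python
import numpy as np

def sclera_map_color(rgb):
    """S1 of (3.5): the inverse of the union of the skin maps (3.3)-(3.4)."""
    R, G, B = (rgb[..., i].astype(int) for i in range(3))
    spread = rgb.max(axis=2).astype(int) - rgb.min(axis=2).astype(int)
    cdm1 = ((R > 95) & (G > 40) & (B > 20) & (spread > 15) &
            (np.abs(R - G) > 15) & (R > G) & (R > B))     # natural light, (3.3)
    cdm2 = ((R > 220) & (G > 210) & (B > 170) & (spread > 15) &
            (np.abs(R - G) <= 15) & (R > B) & (B > G))    # flash, (3.4)
    return ~(cdm1 | cdm2)

def percentile_threshold(channel, T, bins=256):
    """Eq. (3.7): smallest t whose cumulative normalized histogram reaches T."""
    p, edges = np.histogram(channel, bins=bins, range=(0.0, 1.0))
    c = np.cumsum(p) / p.sum()
    return edges[min(np.searchsorted(c, T) + 1, bins)]

def sclera_map_hsv(H, S, V):
    """S2 of (3.6): low hue, low saturation, high value (channels in [0, 1])."""
    th_h = percentile_threshold(H, 1 / 3)   # T_h = 1/3
    th_s = percentile_threshold(S, 2 / 5)   # T_s = 2/5
    th_v = percentile_threshold(V, 1 / 3)   # top 2/3 = above the bottom third
    return (H <= th_h) & (S <= th_s) & (V >= th_v)
```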
The convex hull of each of these representations is calculated; the convex hull is the minimal convex set of points that contains the entire original set. The best estimate of the sclera is determined by dividing each individual mask into two sections around the detected pupil. The final representation is created using the individual portions that are the most homogeneous, by minimizing the standard deviation of the pixels in the region:
$$r = \arg\min_i \left\{ \sum_{(x,y)\in S_i} \left(I(x,y) - m_i\right)^2 \right\} \tag{3.8}$$
where r is the region to be retained, S_i is the ith region, I(x, y) is the intensity image, and m_i is the mean intensity of the ith region. This process is shown in Fig. 8. Then, the convex hull of the estimated region is calculated.
Fig. 8. Sclera representation fusion.

Fig. 9. Process of sclera area detection.

2) Estimation of Sclera Area in Grayscale Images: In grayscale images, the skin-tone approach [(3.3)-(3.8)] for color images would not work. We propose a sclera segmentation method based on Otsu's method. Otsu's method [66] is a linear-discriminant-analysis-based thresholding method [74]. It assumes that an image contains two classes, foreground (object) and background, which can be separated by intensity. Otsu's method automatically searches for the optimum threshold that minimizes the intraclass variance while maximizing the between-class distance.
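For reference, a compact NumPy sketch of Otsu's threshold search follows; the bin count is an assumption.

```python
import numpy as np

def otsu_threshold(gray, bins=256):
    """Otsu's method [66]: choose the threshold maximizing the between-class
    variance (equivalently, minimizing the intraclass variance)."""
    hist, edges = np.histogram(gray, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                   # probability of the background class
    w1 = 1.0 - w0                       # probability of the foreground class
    mu = np.cumsum(p * centers)         # cumulative first moment
    mu_T = mu[-1]                       # global mean
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_T * w0 - mu) ** 2 / (w0 * w1)   # between-class variance
    return centers[np.nanargmax(sigma_b)]
```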
The process of sclera area detection has the following steps (Fig. 9): region of interest (ROI) selection, Otsu's-method-based thresholding [66], [69], and sclera area detection. The left and right ROIs are selected based on the iris center and boundaries. The height of each ROI is the diameter of the iris, and its length is the distance between the limbic boundary and the margin of the image. Otsu's method is applied to the ROIs to obtain potential sclera areas. The correct left sclera area should be located on the right and center sides of its ROI, and the correct right sclera area on the left and center sides. In this way, we eliminate nonsclera areas. Fig. 9 shows the process for detecting the left sclera area. The same approach is applied to detect the right sclera area.
D. Iris and Eyelid Detection and Refinement

The top and bottom boundaries of the sclera region are used as initial estimates of the sclera boundaries, and a polynomial is fit to each boundary. Using the top and bottom portions of the estimated sclera region as guidelines, the upper eyelid, lower eyelid, and iris boundaries are then refined using the Fourier active contour method [44]. Fig. 10 shows an example of two segmented sclera images; note that some areas are not perfectly segmented. In reality, perfect segmentation of all images is impossible. Therefore, the feature extraction and matching steps of the system need to be tolerant of segmentation error.
Fig. 10. Example of segmented sclera images. (a) Segmented sclera color
image. (b) Segmented sclera grayscale image.
Fig. 11. Example image of a Gabor filter bank with four directions. The top image is an even filter bank, and the bottom is an odd filter bank.
IV. SCLERA VESSEL PATTERN ENHANCEMENT

The segmented sclera area is highly reflective. As a result, the sclera vascular patterns are often blurry and/or have very low contrast. To mitigate the illumination effect and achieve an illumination-invariant process, it is important to enhance the vascular patterns. In [75], Daugman shows that the family of Gabor filters is a good approximation of the vision processes of the primary visual cortex. Because the vascular patterns can have multiple orientations, in this paper, a bank of directional Gabor filters (Fig. 11) is used for vascular pattern enhancement:
$$G(x, y, \theta, s) = \exp\!\left(-\frac{(x - x_0)^2 + (y - y_0)^2}{s^2}\right) \exp\!\left(2\pi i\left(\cos\theta\,(x - x_0) + \sin\theta\,(y - y_0)\right)\right) \tag{4.1}$$
where (x_0, y_0) is the center frequency of the filter, s is the variance of the Gaussian, and θ is the angle of the sinusoidal modulation. In this paper, only the even filter was used for feature extraction of the vessels, since the even filter is symmetric and its response was determined to identify the locations of vessels adequately.
The image is first filtered with Gabor filters at different orientations and scales:
$$I_F(x, y, \theta, s) = I(x, y) * G(x, y, \theta, s) \tag{4.2}$$
Fig. 12. Vessel patterns before and after Gabor enhancement. (a) Segmented sclera region. (b) After Gabor enhancement (vessel-boosted image). (c) After thresholding (binary vessel image). (d) After morphological operations.
where I(x, y) is the original intensity image, G(x, y, θ, s) is the Gabor filter, and I_F(x, y, θ, s) is the Gabor-filtered image at orientation θ and scale s. Both θ and s are determined by the desired features to be extracted in the database being used. All the filtered images are fused together to generate the vessel-boosted image F(x, y):
$$F(x, y) = \sqrt{\sum_{\theta \in \Theta} \sum_{s \in S} \left(I_F(x, y, \theta, s)\right)^2}. \tag{4.3}$$
In Fig. 12(a), the vessel structure in the sclera region is very difficult to see; however, in Fig. 12(b), after Gabor enhancement but before thresholding, the vessel structure is clearly visible.
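The filtering and fusion of (4.1)-(4.3) can be sketched as follows. The window size, the four orientations, and the two scales are assumptions (the paper tunes θ and s to the database), and the modulation frequency is taken literally from (4.1); a practical implementation would tune it with s.

```python
import numpy as np
from scipy.signal import fftconvolve

def even_gabor(size, theta, s):
    """Real (even) part of the Gabor filter in (4.1), centered in the window."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    envelope = np.exp(-(x ** 2 + y ** 2) / s ** 2)
    carrier = np.cos(2 * np.pi * (np.cos(theta) * x + np.sin(theta) * y))
    return envelope * carrier

def vessel_boost(I, thetas=np.deg2rad([0, 45, 90, 135]), scales=(2, 4)):
    """Eqs. (4.2)-(4.3): filter at each orientation and scale, then fuse by
    the root of the summed squared responses."""
    acc = np.zeros(I.shape, dtype=float)
    size = 4 * max(scales)              # assumed window size
    for theta in thetas:
        for s in scales:
            IF = fftconvolve(I.astype(float), even_gabor(size, theta, s),
                             mode='same')
            acc += IF ** 2
    return np.sqrt(acc)
```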
An adaptive threshold, based on the distribution of filtered pixel values, is used to binarize the Gabor-filtered image:
$$B(x, y) = \begin{cases} 1, & F(x, y) > th_b \\ 0, & \text{else} \end{cases} \tag{4.4}$$
$$th_b = \arg\min_t \left\{ \sum_{x=1}^{t} p_{\mathrm{edge}}(x) \ge T_B \right\} \tag{4.5}$$
where B(x, y) is the binary vessel mask image, F(x, y) is the vessel-boosted image, and p_edge(x) is the normalized histogram of the nonzero elements of F(x, y). In practice, the zero elements of the filtered image make up a significant portion of the image, and in general, the vascular patterns have higher magnitude than the background. Therefore, T_B is selected to be 1/3. Fig. 12(c) shows a representative result after thresholding. Then, small simply connected regions in the binary mask image are removed. In this way, we ensure that the proposed method is illumination invariant.
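The thresholding of (4.4)-(4.5) can be sketched as follows (the bin count is an assumption):

```python
import numpy as np

def binarize_vessels(F, T_B=1 / 3, bins=256):
    """Eqs. (4.4)-(4.5): pick th_b so that the bottom T_B of the nonzero
    filter responses falls below it; zeros are excluded because they
    dominate the filtered image."""
    nz = F[F > 0]
    p, edges = np.histogram(nz, bins=bins)
    c = np.cumsum(p) / p.sum()
    th_b = edges[min(np.searchsorted(c, T_B) + 1, bins)]
    return F > th_b
```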
V. SCLERA FEATURE EXTRACTION

Depending on the physiological status of a person (for example, fatigued or not), the vascular patterns can have different thicknesses at different times because of the dilation and constriction of the vessels. Therefore, vessel thickness is not a stable pattern for recognition. In addition, some very thin vascular patterns may not be visible at all times. In this paper, binary morphological operations are used to thin the detected vessel structure down to a single-pixel-wide skeleton and to remove the branch points. This leaves a set of single-pixel-wide lines that represents the vessel structure. Fig. 12(d) shows the vessel skeleton after binary morphology. These lines are then recursively parsed into smaller segments until each segment is nearly linear, subject to a maximum segment size. A least squares line is then fit to each segment.
These line segments are then used to create a template for the vessel structure. Each segment is described by three quantities: the segment angle relative to a reference angle at the iris center, the segment distance to the iris center, and the dominant angular orientation of the line segment. The template for the sclera vessel structure is the set of all individual segment descriptors. This implies that, while each segment descriptor is of a fixed length, the overall template size for a sclera vessel structure varies with the number of individual segments. Fig. 13 shows a visual description of the line descriptor.
A descriptor is S = (θ, r, φ)^T. The individual components of the line descriptor are calculated as
$$\theta = \tan^{-1}\!\left(\frac{y_l - y_i}{x_l - x_i}\right),\qquad r = \sqrt{(y_l - y_i)^2 + (x_l - x_i)^2},\qquad \phi = \tan^{-1}\!\left(\frac{d}{dx} f_{\mathrm{line}}(x)\right). \tag{5.1}$$
Here, f_line(x) is the polynomial approximation of the line segment, (x_l, y_l) is the center point of the line segment, (x_i, y_i) is the center of the detected iris, and S is the line descriptor.
Fig. 13. Sketch of parameters of segment descriptor.
Additionally, the iris center (x_i, y_i) is stored with all of the individual line descriptors. Using the iris center as a reference makes the line descriptor translation invariant. The line descriptor can encode patterns in different orientations, which makes it possible to achieve orientation-invariant matching.
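A single-segment version of (5.1) in Python, where slope is the derivative of the fitted least-squares line at the segment center:

```python
import numpy as np

def line_descriptor(xl, yl, slope, xi, yi):
    """Eq. (5.1): S = (theta, r, phi)^T for one segment, with (xl, yl) the
    segment center and (xi, yi) the iris center used as the
    translation-invariant reference."""
    theta = np.arctan2(yl - yi, xl - xi)   # angle of the center about the iris
    r = np.hypot(yl - yi, xl - xi)         # distance to the iris center
    phi = np.arctan(slope)                 # dominant orientation of the segment
    return np.array([theta, r, phi])
```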
VI. SCLERA MATCHING
A. Sclera Template Registration
When acquiring the eye images, the eyelids can have dif-
ferent shapes, the iris location can vary, the pupil size can be
different, and the eye may be tilted with respect to the camera.
The camera-to-object distance and camera zoom can also vary.
All of these could affect the size, location, and patterns of the
acquired sclera region in the image. It is important to take these
variances into account in sclera matching. Therefore, the first step is to perform sclera ROI registration to achieve global
translation, orientation, and scaling invariances. In addition,
due to the complex deformation that can occur in the vessel
patterns, it is desirable to have a registration scheme that is ro-
bust and exhaustive but does not unduly introduce false accepts.
Most importantly, as we discussed in Section I, the sclera vas-
cular patterns deform nonlinearly with the movement of the eye
and eyelids and the contraction/dilation of the pupil. As a result,
the segments of the vascular patterns could move individually,
and this must be accounted for in the registration scheme.
We developed a new method based on a random sample consensus (RANSAC)-type algorithm to estimate the best-fit parameters for registration between two sclera vascular patterns. RANSAC is an iterative model-fitting method that can robustly fit a model, even given noise [76]. To limit potential false accepts due to overfitting, the patterns are registered as a set of points: the centers of the line segments that make up the template. The optimal registration is the one that minimizes the minimum distance between the templates. This reduces artificially introduced false accepts because it does not register the patterns using the same parameters used for matching; therefore, the optimal registration and the optimal matching can, and probably will, be different for templates that should not match. The registration algorithm randomly chooses two points: one from the test template and one from the target template. It also randomly chooses a scaling factor and a rotation value, based on a priori knowledge of the database. Using these values, it calculates a fitness value for the registration under these parameters. The two descriptors S_{x_i} and S_{y_j} are
$$S_{x_i} = (\theta_{x_i},\ r_{x_i},\ \phi_{x_i})^T \quad\text{and}\quad S_{y_j} = (\theta_{y_j},\ r_{y_j},\ \phi_{y_j})^T. \tag{6.1}$$
First, an offset vector is created using the shift offset and the randomly determined scale and angular offset values:
$$\Phi_0 = (x_o,\ y_o,\ s_o,\ \theta_o)^T \tag{6.2}$$
where $x_o = r_{x_i}\cos\theta_{x_i} - r_{y_j}\cos\theta_{y_j}$ and $y_o = r_{x_i}\sin\theta_{x_i} - r_{y_j}\sin\theta_{y_j}$.
The fitness of two descriptors is the minimal summed pairwise distance between the two descriptors given some offset vector Φ_0:
$$D(S_x, S_y) = \arg\min_{\Phi_0} \tilde{D}(S_x, S_y, \Phi_0) \tag{6.3}$$
where
$$\tilde{D}(S_x, S_y, \Phi_0) = \sum_{S_{x_i} \in \mathrm{Test}} \mathrm{minDist}\left(f(S_{x_i}, \Phi_0),\ S_y\right). \tag{6.4}$$
Here, f(S_{x_i}, Φ_0) is the function that applies the registration given the offset vector to a sclera line descriptor:
$$f(S_{x_i}, \Phi_0) = \begin{pmatrix} \cos^{-1}\!\left(\dfrac{r_{x_i}\cos\theta_{x_i} + x_o}{s_o\, r_{x_i}}\right) \\[1.5ex] \dfrac{r_{x_i}\cos\theta_{x_i} + x_o}{\cos(\theta_{x_i} + \theta_o)} \\[1.5ex] \phi_{x_i} \end{pmatrix}. \tag{6.5}$$
The minimum pairwise distance is calculated using
$$\mathrm{minDist}(S_{x_i}, S_y) = \arg\min_j \left\{ d(S_{x_i}, S_{y_j}) \right\} \tag{6.6}$$
with the distance between two points calculated using
$$d(S_{x_i}, S_{y_j}) = \sqrt{x_o^2 + y_o^2} \tag{6.7}$$
where S_{x_i} is the first descriptor used for registration, S_{y_j} is the second descriptor, Φ_0 is the set of offset parameter values, f(S_{x_i}, Φ_0) is a function that modifies the descriptor with the given offset values, s_o is the scaling factor, and θ_o is the rotation value. The algorithm performs a number of iterations, recording the values of Φ_0 that are minimal in D(S_x, S_y). In this way, we ensure that the registration process is globally scale, orientation, and deformation invariant.
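The registration search can be sketched as follows. The iteration count, the scale and rotation ranges, and the sign convention of the proposed shift are all assumptions standing in for the paper's a priori, database-specific choices; descriptors are handled through their segment-center points, as in the text.

```python
import numpy as np

def register(test, target, n_iter=2000, s_range=(0.9, 1.1),
             a_range=(-np.pi / 12, np.pi / 12), seed=0):
    """RANSAC-style search of Sec. VI-A (a sketch). Each iteration pairs one
    random test descriptor with one random target descriptor to propose a
    shift (x_o, y_o), draws a scale s_o and rotation theta_o, and keeps the
    offset minimizing the fitness (6.4). Descriptors are rows (theta, r, phi)."""
    rng = np.random.default_rng(seed)

    def centers(T, xo=0.0, yo=0.0, so=1.0, ao=0.0):
        # Segment centers in Cartesian coordinates, after applying the offset.
        x, y = T[:, 1] * np.cos(T[:, 0]), T[:, 1] * np.sin(T[:, 0])
        c, s = np.cos(ao), np.sin(ao)
        return np.stack([so * (c * x - s * y) + xo,
                         so * (s * x + c * y) + yo], axis=1)

    tgt = centers(target)
    best_off, best_fit = None, np.inf
    for _ in range(n_iter):
        i, j = rng.integers(len(test)), rng.integers(len(target))
        # Shift that maps the chosen test center onto the chosen target center.
        xo = target[j, 1] * np.cos(target[j, 0]) - test[i, 1] * np.cos(test[i, 0])
        yo = target[j, 1] * np.sin(target[j, 0]) - test[i, 1] * np.sin(test[i, 0])
        off = (xo, yo, rng.uniform(*s_range), rng.uniform(*a_range))
        moved = centers(test, *off)
        # Fitness (6.4): sum over test points of the nearest-target distance.
        fit = sum(np.hypot(tgt[:, 0] - p[0], tgt[:, 1] - p[1]).min()
                  for p in moved)
        if fit < best_fit:
            best_off, best_fit = off, fit
    return best_off
```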
B. Sclera Template Matching

Fig. 14. Weighting image.

As discussed previously, it is important to design the matching algorithm such that it is tolerant of segmentation errors. In general, the edge areas of the sclera may not be segmented accurately; therefore, a weighting image (Fig. 14) is created from the sclera mask by setting interior pixels in the sclera mask to 1, pixels within some distance of the boundary of the mask to 0.5, and pixels outside the mask to 0. This allows a matching value between two segments to be between 0 and 1 and allows for weighting the matching results based on the segments that are near the mask's boundaries. This reduces the effect of segmentation errors, particularly for undersegmentation of the boundary between the sclera and eyelids.
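A sketch of the weighting image follows; the border width is an assumption, since the paper specifies only "some distance" from the mask boundary.

```python
import numpy as np
from scipy import ndimage

def weighting_image(mask, border=5):
    """Fig. 14 (a sketch): 1 for interior sclera pixels, 0.5 within `border`
    pixels of the mask boundary, and 0 outside the mask."""
    depth = ndimage.distance_transform_edt(mask)   # distance to the background
    W = np.zeros(mask.shape)
    W[mask.astype(bool)] = 0.5                     # near-boundary band
    W[depth > border] = 1.0                        # deep interior
    return W
```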
After the templates are registered, each line segment in the test template is compared with the line segments in the target template for matches:
$$m(S_i, S_j) = \begin{cases} w(S_i)\, w(S_j), & d(S_i, S_j) \le D_{\mathrm{match}} \ \text{and}\ |\phi_i - \phi_j| \le \phi_{\mathrm{match}} \\ 0, & \text{else,} \end{cases} \tag{6.8}$$
where S_i and S_j are two segment descriptors, m(S_i, S_j) is the matching score between segments S_i and S_j, d(S_i, S_j) is the Euclidean distance between the segment descriptor center points [from (6.7)], D_match is the matching distance threshold, and φ_match is the matching angle threshold. The matching thresholds D_match and φ_match were determined empirically to be 5 pixels and 10°, respectively. w(S_n) is the weight of the nth segment and is equal to 1, 0.5, or 0 if S_n is in the white, gray, or black area of the mask, respectively.
If there is a nonzero matching score, the matched segments (one from the test and one from the target template) are removed from future comparisons, and the matching result is recorded. The total matching score M is the sum of the individual matching scores divided by the maximum attainable matching score of the minimal set between the test and target templates; i.e., whichever of the test or target templates has fewer points, the sum of its descriptor weights sets the maximum score that can be attained:
$$M = \frac{\displaystyle\sum_{(i,j)\in \mathrm{Matches}} m(S_i, S_j)}{\min\left(\displaystyle\sum_{i\in \mathrm{Test}} w(S_i),\ \displaystyle\sum_{j\in \mathrm{Target}} w(S_j)\right)}. \tag{6.9}$$
Here, Matches is the set of all pairs that are matching, Test is
the set of descriptors in the test template, and Target is the set
of descriptors in the target template.
The proposed matching scheme allows for a multitude of
potential changes in the vascular pattern and allows for multiple
independent vessel patterns to be matched. Additionally, it al-
lows for overlapping vessel patterns to be matched even as they
change independently, where matching schemes that retain and
use the crossing points of the patterns could be problematic in
this type of situation. In this way, we ensure that the matching
step is locally scale, orientation, and deformation invariant.
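A greedy sketch of (6.8)-(6.9) over registered templates follows; the (x, y, phi) row layout, the nearest-first pairing order, and the neglect of angle wrap-around are bookkeeping assumptions the paper leaves open.

```python
import numpy as np

def match_score(test, target, w_test, w_target, D_match=5.0,
                phi_match=np.deg2rad(10)):
    """Eqs. (6.8)-(6.9): pair segments whose centers are within D_match
    pixels and whose orientations differ by at most phi_match, remove each
    matched pair from further comparison, and normalize by the smaller
    total weight. Rows of `test`/`target` are (x, y, phi) after registration."""
    used = np.zeros(len(target), dtype=bool)
    total = 0.0
    for i, (x, y, phi) in enumerate(test):
        d = np.hypot(target[:, 0] - x, target[:, 1] - y)
        ok = (~used) & (d <= D_match) & (np.abs(target[:, 2] - phi) <= phi_match)
        if ok.any():
            j = int(np.argmin(np.where(ok, d, np.inf)))   # nearest admissible
            used[j] = True
            total += w_test[i] * w_target[j]              # m(S_i, S_j), (6.8)
    return total / min(w_test.sum(), w_target.sum())      # M, (6.9)
```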
Fig. 15. Proposed sclera feature matching protocol.
VII. EXPERIMENTAL RESULTS
A. Experimental Methodology
In this paper, we adopted the Iris Challenge Evaluation matching protocol [12] (proposed by the National Institute of Standards and Technology) (Fig. 15). The proposed system can only generate four possible recognition results: correctly matching (true positive: TP), correctly not matching (true negative: TN), incorrectly matching (false positive: FP), and incorrectly not matching (false negative: FN) [11]. The False Accept Rate (FAR), False Reject Rate (FRR), and Genuine Acceptance Rate (GAR) are calculated by
$$\mathrm{FAR} = \frac{\mathrm{FP}}{\mathrm{TN} + \mathrm{FP}} \times 100\% \tag{7.1}$$
$$\mathrm{FRR} = \frac{\mathrm{FN}}{\mathrm{TP} + \mathrm{FN}} \times 100\% \tag{7.2}$$
$$\mathrm{GAR} = 1 - \mathrm{FRR}. \tag{7.3}$$
The receiver operating characteristic (ROC) curve, a plot of FAR against GAR (or FAR against FRR), can be used to evaluate the performance of the proposed system [77]. Moreover, since FAR and FRR move in opposition to each other, the operating point where FAR = FRR, referred to as the equal error rate (EER), is widely used to compare the accuracy of two ROC curves.
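For reference, (7.1)-(7.3) on raw match counts:

```python
def rates(TP, TN, FP, FN):
    """FAR, FRR, and GAR of (7.1)-(7.3), as percentages."""
    FAR = FP / (TN + FP) * 100.0   # false accept rate
    FRR = FN / (TP + FN) * 100.0   # false reject rate
    GAR = 100.0 - FRR              # genuine accept rate (GAR = 1 - FRR)
    return FAR, FRR, GAR
```

Sweeping the decision threshold and recomputing these rates traces the ROC curve; the EER is the operating point where FAR equals FRR.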
B. Experimental Results Using the UBIRIS Database

The UBIRIS database [78] is a publicly available database with iris images acquired in color, in contrast with most iris databases, which are acquired using NIR illumination. The database consists of 1877 images (1214 in Session 1 and 663 in Session 2) from 241 users in two distinct sessions. In Session 1, the collectors tried to minimize noise factors, particularly those related to reflections, luminosity, and contrast, having installed the framework inside a dark room [78]. In Session 2, however, they changed the capture location to introduce a natural luminosity factor. This enabled the appearance of heterogeneous images with respect to reflections, contrast, luminosity, and focus problems. Images collected at this stage tend to simulate the ones captured by a vision system without or with minimal active collaboration from the subjects [78]. In other words, a significant number of the images in Session 2 have very poor quality. In both sessions, the images are generally cropped such that the eye is predominately centered and the eye region is well cropped in the images. Figs. 16 and 17 show example images from the UBIRIS database Sessions 1 and 2, respectively. The top row shows good-quality images, and the bottom row shows poor-quality images.
Fig. 16. Example images from the UBIRIS database, Session 1.
Fig. 17. Example images from the UBIRIS database, Session 2.
Fig. 18. Example matching results. (a) The sclera patterns from two images of
the same person are well matched. (b) The sclera patterns from two images of
different persons are not matched. Note that the red and blue patterns represent
sclera templates from two images. The green lines indicate matched pairs.
1) Example Matching Results: Two sample matching re-
sults are shown in Fig. 18. Two vessel templates are represented
by the blue and red lines, respectively, and the matching pairs
are connected with green lines. The color of the green lines indicates the strength of the match, i.e., a brighter tone is a stronger or more confident match, and duller tones indicate weaker matches.
Fig. 18(a) shows a matching result from two sclera images
of the same eye (which should match), and the bottom shows
the matching result from two sclera images from two different
eyes (which should not match). As the matching results on the
right of the image show, the matching results correspond with
the ground-truth results.
Fig. 19(a)-(c) show the original images used in Fig. 18. The images in Fig. 19(a) and (b) are from the same user, and the image in Fig. 19(c) is from a different user.
Fig. 19. Original images. (From left to right) (a) User 1, image 1. (b) User 1,
image 2. (c) User 2, image 1.
Fig. 20. Examples of poor-quality images.
TABLE III
COMPARISON OF EERS AND GARS FOR TWO SEGMENTATION METHODS
2) Overall Matching Results: In the UBIRIS database, 72 of 1877 images (3.84% of the total) are removed because of very poor quality (e.g., blurred, blinking, or no-sclera-area images); hence, they cannot be segmented by our proposed segmentation method. Examples of these 72 images are shown in Fig. 20. In order to test our sclera segmentation method, we compared the sclera recognition accuracy using automatic segmentation with that using manual segmentation.
The algorithms in the proposed method are implemented in Matlab (version 2010a) on a PC with an Intel Core 2 Duo 2.4-GHz processor and 4-GB DRAM. For the 800 x 600 color images in the UBIRIS database, it takes 4.84 s for sclera segmentation, 6.73 s for feature extraction, and 2.2 s for one-to-one matching. We are using a GPU to reduce the processing time. By using an Nvidia GeForce GT440 GPU with 96 cores, the one-to-one matching time can be reduced to 0.02 s (i.e., over 100 times faster).
Table III and Fig. 21 show the comparison results using both the automatic and manual segmentation methods. In Session 1, the EER with automatic segmentation (4.09%) is only slightly higher than that with manual segmentation (3.70%). In addition, the GAR with automatic segmentation (90.71% at FAR = 0.1% and 83% at FAR = 0.01%) is just a little lower than that with manual segmentation (92.55% at FAR = 0.1% and 89.22% at FAR = 0.01%). In Session 2, since the quality of the images is worse than in Session 1, the EER with automatic segmentation (9.98%) is higher than that with manual segmentation (7.48%). However,
Fig. 21. ROC curves of two segmentation methods.
TABLE IV
COMPARISON OF EERS FOR DIFFERENT MATCHING MODALITIES
AND METHODS ON THE UBIRIS DATABASE
the GAR with automatic segmentation (85.59% at FAR = 0.1% and 82.85% at FAR = 0.01%) achieves accuracy similar to that with manual segmentation (85.49% at FAR = 0.1% and 82.58% at FAR = 0.01%). Some bad-quality images (Fig. 20) could not be automatically segmented and were eliminated in the automatic segmentation step. As a result, these images were not used for recognition; they had to be excluded even when manual segmentation was used.
3) Comparison of Sclera Recognition With Iris Recognition in the Visible Wavelength: The UBIRIS database was acquired for iris recognition in the visible wavelengths. Therefore, it is fair to use this database to compare our sclera recognition results to the visible-wavelength iris recognition results reported by Proenca and Alexandre in [65]. For their system, they manually selected 800 UBIRIS images, partitioned the iris into six separate regions, encoded the individual regions using Daugman's method as outlined in [49], and utilized a score fusion scheme to minimize the effect of noise (due to the noncompliant nature of the data).
Table IV compares the EERs between two of their reported methods and the proposed method to show the validity of sclera vessel recognition in relation to a more established biometric modality, iris recognition. To compare our proposed system with their method fairly, we selected 800 images from the UBIRIS database both manually and randomly. For the random selection, we took the average over ten random selections. Table IV shows that our proposed sclera recognition method achieves accuracy (EER = 1.34% and 3.83%) comparable to that of the two iris recognition methods using visible-light-acquired images (EER = 2.38% and 3.72%). In particular, note that iris patterns in dark eyes are hard to extract under visible-light illumination. Therefore, these results show that sclera recognition could have some advantage over iris recognition in the visible wavelengths.
Fig. 22. Some sample images in the IUPUI green-wavelength database.
C. IUPUI Multiwavelength Database

The IUPUI multiwavelength database is acquired under eight different wavelength illuminations: purple (420 nm), blue (470 nm), green (525 nm), yellow (590 nm), orange (610 nm), red (630 nm), deep red (660 nm), and infrared (820 nm), at a distance of 1 ft. For each wavelength, there is a total of 352 images from 44 subjects with five different eye colors: blue, dark brown, light brown, green, and hazel. For each human subject, we obtained image videos from both eyes on two separate occasions, with at least one week between acquisitions. From our observation, the sclera pattern in the green wavelength can be extracted better than in any other wavelength. Therefore, the green-wavelength database was used for this work.
Fig. 22 shows sample images from our IUPUI green-wavelength database. The top row shows good-quality images, the middle row shows images of mid to poor quality, and the bottom row shows poor-quality images.
To benchmark the proposed method against iris recognition methods on this database, the following recognition approaches were used: the 2-D Gabor wavelet method with log-polar coordinates [10] and the 1-D log-Gabor wavelet method with polar coordinates [79]. Manual segmentation has also been performed on this database to compare the recognition accuracies of the respective approaches. The computational complexities of sclera segmentation, feature extraction, and matching are O(n^2). For the 1280 x 1024 grayscale images in the IUPUI multiwavelength database, it takes 0.34 s for sclera segmentation, 8.02 s for feature extraction, and 2.2 s for one-to-one matching. The processing time can be greatly reduced if a GPU is used. Table V and Fig. 23 show the comparison results on the IUPUI multiwavelength database.
From Table V, with manual segmentation, the EER of the proposed system (8.49%) is better than that of log-Gabor iris recognition (13.64%) and equal to that of 2-D Gabor iris recognition (8.49%). Moreover, at this similar EER, the GAR of the proposed system is 32.1% higher than that of 2-D Gabor iris recognition at FAR = 0.1% and 35.4% higher at FAR = 0.01%. Even though the EER of the proposed system using automatic segmentation (11.89%) is higher than that using manual segmentation (8.49%) and that of 2-D Gabor iris recognition using manual segmentation (8.49%), it is still lower than that of log-Gabor iris recognition using manual segmentation (13.64%).

TABLE V
COMPARISON OF EERS AND GARS IN THE IUPUI MULTIWAVELENGTH DATABASE

Fig. 23. ROC curve in the IUPUI multiwavelength database.
VIII. CONCLUSION AND DISCUSSION

In this paper, we have proposed a new biometric: sclera recognition. Our research results show that sclera recognition is very promising for positive human ID and provides a new option for human ID. In this paper, we focused on frontal-looking sclera image processing and recognition. As in iris recognition, where off-angle iris image segmentation and recognition is still a challenging research topic, off-angle sclera image segmentation and recognition will be an interesting and challenging research topic. In addition, sclera recognition can be combined with other biometrics, such as iris recognition or face recognition (e.g., 2-D face recognition), to perform multimodal biometrics. Moreover, the effect of template aging in sclera recognition will be studied in the future. Currently, the proposed system is implemented in Matlab. The processing time can be dramatically reduced by parallel computing approaches.
ACKNOWLEDGMENT

The authors would like to thank M. P. Beale for helping with the paper review, the Department of Computer Science, University of Beira Interior, for providing the UBIRIS database, and the people who contributed their sclera data for the IUPUI multiwavelength database. This paper is a revision of our conference paper [80].
REFERENCES

[1] J. Woodward, N. Orlans, and P. Higgins, Biometrics: Identity Assurance in the Information Age. New York: McGraw-Hill, 2003.
[2] Y. Du, "Biometrics: Technologies and trend," in Encyclopedia of Optical Engineering. New York: Marcel Dekker, 2006.
[3] Y. Du, "Biometrics," in Handbook of Digital Human Modeling. Mahwah, NJ: Lawrence Erlbaum, 2008.
[4] D. Pearce and H. Hirsch, "The Aurora experimental framework for the performance evaluation of speech recognition systems under noisy conditions," in Proc. ISCA ITRW ASR: Challenges for the Next Millennium, Paris, France, 2000.
[5] M. Turk and A. Pentland, "Eigenfaces for recognition," J. Cognitive Neurosci., vol. 3, no. 1, pp. 71-86, 1991.
[6] G. Medioni, J. Choi, C.-H. Kuo, and D. Fidaleo, "Identifying noncooperative subjects at a distance using face images and inferred three-dimensional face models," IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 39, no. 1, pp. 12-24, Jan. 2009.
[7] M. De Marsico, M. Nappi, and D. Riccio, "FARO: Face recognition against occlusions and expression variations," IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 40, no. 1, pp. 121-132, Jan. 2010.
[8] D. Maltoni, D. Maio, A. Jain, and S. Prabhakar, Handbook of Fingerprint Recognition. New York: Springer-Verlag, 2009.
[9] M. Vatsa, R. Singh, and A. Noore, "Unification of evidence-theoretic fusion algorithms: A case study in level-2 and level-3 fingerprint features," IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 39, no. 1, pp. 47-56, Jan. 2009.
[10] J. Daugman, "How iris recognition works," IEEE Trans. Circuits Syst. Video Technol., vol. 14, no. 1, pp. 21-30, Jan. 2004.
[11] Y. Du, "Review of iris recognition: Cameras, systems, and their applications," Sens. Rev., vol. 26, no. 1, pp. 66-69, 2006.
[12] P. J. Phillips, W. T. Scruggs, A. J. O'Toole, P. J. Flynn, K. W. Bowyer, C. L. Schott, and M. Sharpe, "FRVT 2006 and ICE 2006 large-scale experimental results," IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 5, pp. 831-846, May 2010.
[13] F. Alonso-Fernandez, J. Fierrez, D. Ramos, and J. Gonzalez-Rodriguez, "Quality-based conditional processing in multi-biometrics: Application to sensor interoperability," IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 40, no. 6, pp. 1168-1179, Nov. 2010.
[14] K. W. Bowyer, "Introduction to the special issue on recent advances in biometrics," IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 40, no. 3, pp. 434-436, May 2010.
[15] N. Poh, J. Kittler, and T. Bourlai, "Quality-based score normalization with device qualitative information for multimodal biometric fusion," IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 40, no. 3, pp. 539-554, May 2010.
[16] A. K. Jain, A. Ross, and S. Pankanti, "Biometrics: A tool for information security," IEEE Trans. Inf. Forensics Security, vol. 1, no. 2, pp. 125-143, Jun. 2006.
[17] K. Mahadevan, "Estimating reliability impact of biometric devices in large scale applications," M.S. thesis, West Virginia Univ., Morgantown, WV, 2003.
[18] A. K. Jain, A. Ross, and S. Prabhakar, "An introduction to biometric recognition," IEEE Trans. Circuits Syst. Video Technol., vol. 14, no. 1, pp. 4-20, Jan. 2004.
[19] S. Crihalmeanu, A. Ross, S. Schuckers, and L. A. Hornak, "A protocol for multibiometric data acquisition, storage and dissemination," WVU, Lane Dept. Comput. Sci. Elect. Eng., Morgantown, WV, Tech. Rep., 2007.
[20] G. Bhatnagar, Q. M. J. Wu, and B. Raman, "A new fractional random wavelet transform for fingerprint security," IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 42, no. 1, pp. 262-275, Jan. 2012.
[21] C. Riley, K. Buckner, G. Johnson, and D. Benyon, "Culture & biometrics: Regional differences in the perception of biometric authentication technologies," AI & Soc., vol. 24, no. 3, pp. 295-306, Aug. 2009.
[22] F. H. Cleaver, "A contribution to the biometric study of the human mandible," Biometrika, vol. 29, no. 1/2, pp. 80-112, 1937.
[23] M. Choras, "Emerging methods of biometrics human identification," in Proc. 2nd Int. Conf. ICICIC, Sep. 5-7, 2007, p. 365.
[24] W. Lingyu and G. Leedham, "Near- and far-infrared imaging for vein pattern biometrics," in Proc. IEEE Int. Conf. AVSS, Nov. 2006, p. 52.
[25] M. I. D. Kotzin, "Method and apparatus using biometric sensors for controlling access to a wireless communication device," U.S. Patent 7088220, Aug. 8, 2006.
[26] Y. Sidani, "Women, work, and Islam in Arab societies," Women Manage. Rev., vol. 20, no. 7, pp. 498-512, 2005.
[27] P. Airey and J. Verran, "A method for monitoring substratum hygiene using a complex soil: The human fingerprint," in Fouling, Cleaning and Disinfection in Food Processing. Cambridge, U.K.: Dept. Chem. Eng., Univ. Cambridge, Mar. 20-22, 2006.
[28] C. Kyong, K. W. Bowyer, S. Sarkar, and B. Victor, "Comparison and combination of ear and face images in appearance-based biometrics," IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 9, pp. 1160-1165, Sep. 2003.
[29] J. Ortega-Garcia, J. Fierrez, F. Alonso-Fernandez, J. Galbally, M. R. Freire, J. Gonzalez-Rodriguez, C. Garcia-Mateo, J.-L. Alba-Castro, E. Gonzalez-Agulla, E. Otero-Muras, S. Garcia-Salicetti, L. Allano, B. Ly-Van, B. Dorizzi, J. Kittler, T. Bourlai, N. Poh, F. Deravi, M. Ng, M. Fairhurst, J. Hennebert, A. Humm, M. Tistarelli, L. Brodo, J. Richiardi, A. Drygajlo, H. Ganster, F. M. Sukno, S.-K. Pavani, A. Frangi, L. Akarun, and A. Savran, "The multiscenario multienvironment BioSecure Multimodal Database (BMDB)," IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 6, pp. 1097-1111, Jun. 2010.
[30] S. Ribaric and I. Fratric, "A biometric identification system based on eigenpalm and eigenfinger features," IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 11, pp. 1698-1709, Nov. 2005.
[31] N. K. Ratha and V. Govindaraju, Advances in Biometrics: Sensors, Algorithms and Systems. New York: Springer-Verlag, 2008.
[32] C. W. Oyster, The Human Eye: Structure and Function. Sunderland, MA: Sinauer Assoc., 1999.
[33] P. Kaufman and A. Alm, Adler's Physiology of the Eye: Clinical Application. St. Louis, MO: Mosby, 2003.
[34] R. Broekhuyse, "The lipid composition of aging sclera and cornea," Int. J. Ophthalmol., vol. 171, no. 1, pp. 82-85, 1975.
[35] A. Kanai and H. Kaufman, "Electron microscopic studies of the elastic fiber in human sclera," Investigative Ophthalmol. Vis. Sci., vol. 11, no. 10, pp. 816-821, Oct. 1972.
[36] S. Vannas and H. Teir, "Observations on structures and age changes in the human sclera," Acta Ophthalmol., vol. 38, no. 3, pp. 268-279, Jun. 1960.
[37] R. Weale, The Aging Eye. New York: Harper & Row, Hoeber Medical Division, 1963.
[38] R. Derakhshani, A. Ross, and S. Crihalmeanu, "A new biometric modality based on conjunctival vasculature," in Proc. ANNIE, 2006, pp. 1-8.
[39] M.-K. Hu, "Visual pattern recognition by moment invariants," IRE Trans. Inf. Theory, vol. 8, no. 2, pp. 179-187, Feb. 1962.
[40] R. Derakhshani and A. Ross, "A texture-based neural network classifier for biometric identification using ocular surface vasculature," in Proc. IJCNN, 2007, pp. 2982-2987.
[41] S. Crihalmeanu, A. Ross, and R. Derakhshani, "Enhancement and registration schemes for matching conjunctival vasculature," in Proc. 3rd Int. Conf. Adv. Biometrics, 2009, vol. 1568121, pp. 1240-1249.
[42] [Online]. Available: http://en.wikipedia.org/wiki/Conjunctiva
[43] E. M. Arvacheh and H. R. Tizhoosh, "Iris segmentation: Detecting pupil, limbus and eyelids," in Proc. IEEE Int. Conf. Image Process., 2006, pp. 2453-2456.
[44] J. Daugman, "New methods in iris recognition," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 37, no. 5, pp. 1167-1175, Oct. 2007.
[45] G. Xu, Z. Zhang, and Y. Ma, "Automatic iris segmentation based on local areas," in Proc. ICPR, 2006, pp. 505-508.
[46] P. Li and X. Liu, "An incremental method for accurate iris segmentation," in Proc. ICPR, 2008, pp. 1-4.
[47] C. Belcher and Y. Du, "Region-based SIFT approach to iris recognition," Opt. Lasers Eng., vol. 47, no. 1, pp. 139-147, Jan. 2009.
[48] W. W. Boles and B. Boashash, "A human identification technique using images of the iris and wavelet transform," IEEE Trans. Signal Process., vol. 46, no. 4, pp. 1185-1188, Apr. 1998.
[49] J. Daugman, "High confidence visual recognition of persons by a test of statistical independence," IEEE Trans. Pattern Anal. Mach. Intell., vol. 15, no. 11, pp. 1148-1161, Nov. 1993.
[50] Y. Du and E. Arslanturk, "Video based non-cooperative iris segmentation," Proc. SPIE, vol. 6982, p. 69820Q, 2008.
[51] L. Ma, T. Tan, Y. Wang, and D. Zhang, "Efficient iris recognition by characterizing key local variations," IEEE Trans. Image Process., vol. 13, no. 6, pp. 739-750, Jun. 2004.
[52] X. Liu, K. W. Bowyer, and P. J. Flynn, "Experiments with an improved iris segmentation algorithm," in Proc. AutoID, 2005, pp. 118-123.
[53] Z. Luo and T. Lin, "Detection of non-iris region in the iris recognition," in Proc. ISCSCT, 2008, pp. 45-48.
[54] J. R. Matey, R. Broussard, and L. Kennell, "Iris image segmentation and sub-optimal images," Image Vis. Comput., vol. 28, no. 2, pp. 215-222, Feb. 2010.
[55] J. R. Matey, O. Naroditsky, K. Hanna, R. Kolczynski, D. J. LoIacono, S. Mangru, M. Tinker, T. M. Zappia, and W. Y. Zhao, "Iris on the move: Acquisition of images for iris recognition in less constrained environments," Proc. IEEE, vol. 94, no. 11, pp. 1936-1947, Nov. 2006.
[56] Q.-C. Tian, Q. Pan, Y.-M. Cheng, and Q.-X. Gao, "Fast algorithm and application of Hough transform in iris segmentation," in Proc. Int. Conf. Mach. Learn. Cybern., 2004, vol. 7, pp. 3977-3980.
[57] T. Tan, Z. He, and Z. Sun, "Efficient and robust segmentation of noisy iris images for non-cooperative iris recognition," Image Vis. Comput., vol. 28, no. 2, pp. 223-230, Feb. 2010.
[58] M. Vatsa, R. Singh, and P. Gupta, "Comparison of iris recognition algorithms," in Proc. Int. Conf. Intell. Sens. Inf. Process., 2004, pp. 354-358.
[59] R. P. Wildes, "Iris recognition: An emerging biometric technology," Proc. IEEE, vol. 85, no. 9, pp. 1348-1363, Sep. 1997.
[60] Z. He, T. Tan, Z. Sun, and X. Qiu, "Toward accurate and fast iris segmentation for iris biometrics," IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 9, pp. 1670-1684, Sep. 2009.
[61] Z. Zhou, Y. Du, and C. Belcher, "Transforming traditional iris recognition systems to work in nonideal situations," IEEE Trans. Ind. Electron., vol. 56, no. 8, pp. 3203-3213, Aug. 2009.
[62] C. Belcher and Y. Du, "A selective feature information approach for iris image-quality measure," IEEE Trans. Inf. Forensics Security, vol. 3, no. 3, pp. 572-577, Sep. 2008.
[63] H. Proenca and L. A. Alexandre, "Iris segmentation methodology for non-cooperative recognition," Proc. Inst. Elect. Eng. Vision, Image Signal Process., vol. 153, no. 2, pp. 199-205, Apr. 2006.
[64] H. Proenca, "Iris recognition: On the segmentation of degraded images acquired in the visible wavelength," IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 8, pp. 1502-1516, Aug. 2010.
[65] H. Proenca and L. A. Alexandre, "Toward noncooperative iris recognition: A classification approach using multiple signatures," IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 4, pp. 607-612, Apr. 2007.
[66] N. Otsu, "A threshold selection method from gray-level histograms," Automatica, vol. 11, pp. 285-296, 1975.
[67] C. Chang, Y. Du, J. Wang, S.-M. Guo, and P. D. Thouin, "Survey and comparative analysis of entropy and relative entropy thresholding techniques," Proc. Inst. Elect. Eng. Vision, Image Signal Process., vol. 153, no. 6, pp. 837-850, Dec. 2006.
[68] Y. Du, C. Chang, and P. Thouin, "Unsupervised approach to color video thresholding," Opt. Eng., vol. 43, no. 2, pp. 282-289, Feb. 2004.
[69] Y. Du, C. Chang, and P. Thouin, "Automated system for text detection in individual video images," J. Electron. Imaging, vol. 12, no. 3, pp. 410-422, Jul. 2003.
[70] G. O. Williams, "Iris recognition technology," IEEE Aerosp. Electron. Syst. Mag., vol. 12, no. 4, pp. 23-29, Apr. 1997.
[71] N. Van Huan and H. Kim, "A novel circle detection method for iris segmentation," in Proc. CISP, 2008, pp. 620-624.
[72] M. Abdullah-Al-Wadud and O. Chae, "Skin segmentation using color distance map and water-flow property," in Proc. 4th ISIAS, 2008, pp. 83-88.
[73] M. Abdullah-Al-Wadud and O. Chae, "Region-of-interest selection for skin detection based applications," in Proc. Int. Conf. Convergence Inf. Technol., 2007, pp. 1999-2004.
[74] M. S. Bartlett, J. R. Movellan, and T. J. Sejnowski, "Face recognition by independent component analysis," IEEE Trans. Neural Netw., vol. 13, no. 6, pp. 1450-1464, Nov. 2002.
[75] J. Daugman, "Two-dimensional spectral analysis of cortical receptive field profiles," Vis. Res., vol. 20, no. 10, pp. 847-856, 1980.
[76] M. A. Fischler and R. C. Bolles, "Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography," Commun. ACM, vol. 24, no. 6, pp. 381-395, Jun. 1981.
[77] J. Swets and R. Pickett, Evaluation of Diagnostic Systems: Methods From Signal Detection Theory. New York: Academic, 1982.
[78] H. Proenca and L. A. Alexandre, "UBIRIS: A noisy iris image database," in Proc. 13th Int. Conf. Image Anal. Process., vol. 3617, LNCS, 2005, pp. 970-977.
[79] L. Masek and P. Kovesi, Matlab Source Code for a Biometric Identification System Based on Iris Patterns. Perth, Australia: School Comput. Sci. Softw. Eng., Univ. Western Australia, 2003.
[80] N. L. Thomas, Y. Du, and Z. Zhou, "A new approach for sclera vein recognition," Proc. SPIE, vol. 7708, p. 770805, 2010.
Zhi Zhou (S'08) received the B.S. degree in electrical engineering from Beijing University of Technology, Beijing, China, in 2005 and the M.S. degree in electrical and computer engineering from Indiana University-Purdue University Indianapolis, Indianapolis, in 2008, where he is currently working toward the Ph.D. degree in the Department of Electrical and Computer Engineering.
His research interests include image processing, biometrics, and pattern recognition.
Mr. Zhou was a Starr Fellow in 2007. He was the recipient of the Best Paper Award from the IEEE Workshop on Computational Intelligence in Biometrics: Theory, Algorithms, and Applications in 2009.
Eliza Yingzi Du (SM'08) received the B.S. and M.S. degrees in electrical engineering from Beijing University of Posts and Telecommunications, Beijing, China, in 1996 and 1999, respectively, and the Ph.D. degree in electrical engineering from the University of Maryland, Baltimore, in 2003.
She is currently an Associate Professor with the Department of Electrical and Computer Engineering, Indiana University-Purdue University Indianapolis (IUPUI), Indianapolis. From September 2003 to July 2005, she was an Assistant Research Professor with the Electrical Engineering Department, U.S. Naval Academy. Her research interests include image processing, pattern recognition, and biometrics. Her research has been funded by the Office of Naval Research (ONR), National Institute of Justice, Department of Defense, National Science Foundation, Canada Border Services Agency, Indiana Department of Transportation, and several industry and IUPUI internal grants.
Dr. Du is a member of the honor societies Tau Beta Pi and Phi Kappa Phi. She was the recipient of an ONR Young Investigator Award in 2007, the Indiana University Trustee Teaching Award in 2009, the Supervisor of the Year Award from IUPUI in 2009, and the Best Paper Award with her students from the IEEE Workshop on Computational Intelligence in Biometrics: Theory, Algorithms, and Applications in 2009.
N. Luke Thomas received the B.S. degree in electrical engineering and the M.S. degree in electrical and computer engineering from Indiana University-Purdue University Indianapolis, Indianapolis, in 2010.
He is currently employed in industry as a Software Engineer for safety-critical engine control systems. His research interests include algorithm development, biometrics, and pattern recognition.
Edward J. Delp (S'70-M'79-SM'86-F'97) was born in Cincinnati, OH. He received the B.S.E.E. (cum laude) and M.S. degrees from the University of Cincinnati, Cincinnati, and the Ph.D. degree from Purdue University, West Lafayette, IN. In May 2002, he received an Honorary Doctor of Technology degree from Tampere University of Technology, Tampere, Finland.
From 1980 to 1984, he was with the Department of Electrical and Computer Engineering, The University of Michigan, Ann Arbor. Since August 1984, he has been with the School of Electrical and Computer Engineering and the School of Biomedical Engineering, Purdue University. From 2002 to 2008, he was a chaired Professor and held the title The Silicon Valley Professor of Electrical and Computer Engineering and Professor of Biomedical Engineering. In 2008, he was named a Distinguished Professor and is currently The Charles William Harrison Distinguished Professor of Electrical and Computer Engineering and Professor of Biomedical Engineering. In 2007, he received a Distinguished Professor appointment from the Academy of Finland as part of the Finland Distinguished Professor Program (FiDiPro). This appointment is at the Tampere International Center for Signal Processing, Tampere University of Technology. His research interests include image and video compression, multimedia security, medical imaging, multimedia systems, communication, and information theory.
Dr. Delp is a Fellow of the SPIE, the Society for Imaging Science and Technology (IS&T), and the American Institute of Medical and Biological Engineering. He was the recipient of the Honeywell Award in 1990, the D. D. Ewing Award in 1992, and the Wilfred Hesselberth Award in 2004, all for excellence in teaching. In 2001, he was the recipient of the Raymond C. Bowman Award for fostering education in imaging science from the IS&T. In 2004, he was the recipient of the Technical Achievement Award from the IEEE Signal Processing Society for his work in image and video compression and multimedia security. In 2002 and 2006, he was awarded Nokia Fellowships for his work in video processing and multimedia security. In 2008, he was the recipient of the Society Award from the IEEE Signal Processing Society (SPS). This is the highest award given by SPS, and it cited his work in multimedia security and image and video compression. In 2009, he was the recipient of the Purdue College of Engineering Faculty Excellence Award for Research.