Vous êtes sur la page 1sur 15

Optical Engineering 483, 037202 March 2009

Nonintrusive iris image acquisition system based on a pan-tilt-zoom camera and light stripe projection
Soweon Yoon Ho Gi Jung Yonsei University School of Electrical and Electronic Engineering 134 Shinchon-dong, Seodaemun-gu Seoul 120-749 Korea Abstract. Although iris recognition is one of the most accurate biometric technologies, it has not yet been widely used in practical applications. This is mainly due to user inconvenience during the image acquisition phase. Specically, users try to adjust their eye position within small capture volume at a close distance from the system. To overcome these problems, we propose a novel iris image acquisition system that provides users with unconstrained environments: a large operating range, enabling movement from standing posture, and capturing good-quality iris images in an acceptable time. The proposed system has the following three contributions compared with previous works: 1 the capture volume is signicantly increased by using a pan-tilt-zoom PTZ camera guided by a light stripe projection, 2 the iris location in the large capture volume is found fast due to 1-D vertical face searching from the users horizontal position obtained by the light stripe projection, and 3 zooming and focusing on the users irises at a distance are accurate and fast using the estimated 3-D position of a face by the light stripe projection and the PTZ camera. Experimental results show that the proposed system can capture good-quality iris images in 2.479 s on average at a distance of 1.5 to 3 m, while allowing a limited amount of movement by the user.

Kang Ryoung Park Dongguk University Biometrics Engineering Research Center Department of Electronics Engineering 26, Pil-dong 3-ga, Jung-gu Seoul 100-715 Korea

Jaihie Kim Yonsei University School of Electrical and Electronic Engineering 134 Shinchon-dong, Seodaemun-gu Seoul 120-749 Korea E-mail: jhkim@yonsei.ac.kr

2009 Society of Photo-Optical Instrumentation Engineers. DOI: 10.1117/1.3095905

Subject terms: iris image acquisition; pan tilt zoom camera; light stripe projection. Paper 080693R received Sep. 3, 2008; revised manuscript received Jan. 6, 2009; accepted for publication Jan. 15, 2009; published online Mar. 10, 2009.

Introduction

Biometrics is a method for automatic individual identication using a physiological or behavioral characteristic.1 The value of biometric systems can be measured with ve characteristics: robustness, distinctiveness, availability, accessibility, and acceptability.1 Robustness refers to the fact that individual biometric features do not change over time and they can be used repeatedly. Distinctiveness refers to the fact that each individual has different characteristics of the features with great variation. Availability means the fact that all people ideally have certain biometric features in multiples. Accessibility refers to how easy the acquisition of biometric feature is, and acceptability refers to whether people regard the capturing of their biometric features as nonintrusive. In terms of the above characteristics, iris recognition is a powerful biometric technology for user authentication because it offers high levels of robustness, availability, and distinctiveness. For robustness, it has been proven that iris structures remain unchanged with age.2 A persons irises generally mature during rst 2 years of age, and then healthy irises vary little for the rest of that persons life.2 For availability, every person has an iris with complex patterns formed by multilayered structures.2 Also, each individual has two distinguishable left and right iris patterns.
0091-3286/2009/$25.00 2009 SPIE

The distinctiveness of iris is shown by its unique and abundant phase structures. According to 2 million iris comparisons,3 binary code extracted from an iris image showed 244 independent degrees of freedom. This implies that the probability of two different irises agreeing by chance in more than 70% of their phase sequences is about 1 in 7 billion.4 The level of accessibility and acceptability of iris recognition, however, is lower than that of other biometric features such as the face, the ngerprint, or the gait recognition. This is mainly due to the fact that it is difcult to acquire iris images. In terms of accessibility, iris image acquisition is not simple; conventional iris recognition systems usually require a well-trained operator, a cooperative user, adjusted equipment, and well-controlled lighting conditions.5 Lack of any of these factors will lead to user inconvenience as well as poor quality iris capture. According to a report6 on participants experience of using various biometric authentication systems at an airport in 2005, common complaints about iris recognition systems were about positioning problems and the amount of time taken. Conventional iris recognition systems such as IrisAccess3000 Ref. 7 and BMET300 Ref. 8 generally require high user cooperation during iris image acquisition. Users try to adjust their eye position to place their eyes in an acceptable position to provide the iris recognition system with an in-focus iris image. This positioning problem comes from the fact that the capture volume of the convenMarch 2009/Vol. 483

Optical Engineering

037202-1

Downloaded From: http://spiedigitallibrary.org/ on 08/27/2012 Terms of Use: http://spiedl.org/terms

Yoon et al.: Nonintrusive iris image acquisition system

tional systems is small. The capture volume refers to the volume within which an eye must be placed for the system to acquire useful iris images.9 Once the iris of the user is placed in the capture volume, users should stay in that position without any motion until the system acquires a goodquality image. Since the capture volume is usually formed at a close distance from the camera, users face the system closely. The positioning takes a lot of time for users, and it is likely to fail on untrained users who are relatively unfamiliar with the system. Some children and disabled users often nd it difcult to follow the given instructions. Toward convenient iris recognition systems for users and civil applications such as immigration procedures at airports which target generally untrained users, two types of new iris image acquisition systems have been proposed. One is a portal system and the other is based on a pan-tiltzoom PTZ camera. The portal system, which is called Iris-on-the-Move IOM suggested by Sarnoff Corporation,9 enables the capture of iris images while users walk through an open portal. IOM has a throughput up to 20 persons / min when the users pass through the portal with a normal walking pace of 1 m / s. However, a position constraint remains because its capture volume is as small as conventional ones: 20 20 10 cm width height depth. Therefore, iris image acquisition fails if a users irises do not pass through the small capture volume. In addition, the capture volume can not fully cover the height variations of users; children or very tall users may not be permissible. They suggest a modular component to expand the height of the capture volume; two cameras stacked vertically expand it by approximately 37 cm, and four cameras expand it up to 70 cm. However, the stack of multiple highresolution cameras would increase the costs proportional to the number of cameras. A PTZ camera can increase the capture volume greatly. Panning and zooming cover various position of users, and tilting covers height variation of the user. Early attempts using a PTZ function are reported by Oki IrisPass-M Ref. 10, Sensar R1 Ref. 11, and Mitsubishi Corporation.12 They are based on a wide-angle camera or a stereo vision system for locating the eye, and a narrow-angle camera for capturing the iris image. For fast control of the PTZ camera, reconstructing the 3-D position of the iris is essential. First, 3-D coordinates can determine panning and tilting angle as well as zoom factor. Second, depth information between the iris and the camera from the 3-D coordinates plays an important role to narrow the search range for optimal focus lens position. The system from the Mitsubishi Corporation uses a single wide-angle camera to detect a face, which leads to adaptive panning and tilting and estimates depth by disparity among facial features, which obviously takes a lot of time to get clear iris images. Sensar R1 uses stereo matching for 3-D reconstruction. However, in stereo matching it is complicated and takes a long time to detect the corresponding points between a pair of images. The accuracy of the depth estimation can be degraded if users are far from the camera due to errors in the feature point extraction. To increase the accuracy of depth estimation of irises at a distance, the disparity of stereo cameras should be large, and this will increase the system size. Recently, Retica Eagle-Eyes,13 Sarnoff IOM DriveThrough system,14 and AOptix system15 have been introOptical Engineering

duced as PTZ-based systems. Eagle-Eyes proposed the iris recognition system with large capture volume 3 2 3 m and a long standoff 3 to 6 m. However, because of its system complexity, which consists of four cameras scene camera, face camera, and left and right iris camerasthe cost and size of the system would be high. In addition, the capture time of the system is 6.1 s on average for a stationary subject, which is long compared to previous systems, and users may feel an intrusiveness during image acquisition. The organization and specication of other systems are still unknown. In this paper, we propose a novel iris image acquisition system based on a PTZ camera guided by a light stripe projection. A telephoto zoom lens with a pan-tilt unit expands the capture volume greatly: 120 deg width 1 m height 1.5 m depth. Thus, users do not need to make an effort to adjust their position. Due to the PTZ ability, just one high-resolution camera is required to cover the whole capture volume. For a fast PTZ control, which is necessary to realize in a practical application scenario, we propose a 3-D estimation method for the face based on a light stripe projection. This contributes to fast face search and determination of proper zoom and focus lens position. Since the light stripe projection gives the horizontal position of a user in a real time, the pan angle is always determined immediately and the users face can be found by searching a 1-D vertical line rather than searching a 2-D area of the whole capture volume. Once the face is detected, the depth between the face and the PTZ camera is calculated with a high accuracy and it gives the initial zoom and focus lens position based on relationships among distance, zoom lens position, and focus lens position under xed magnication. We assumed minimally constrained user cooperation: standing naturally in the capture volume and staring at the PTZ camera for 1 to 2 s during autofocusing. Under this assumption, we examined the feasibility of the proposed system in practical situations. The proposed system has the following three contributions compared with previous works: 1 the capture volume is greatly increased by using a PTZ camera guided by a light stripe projection, 2 the PTZ camera can track a users face easily in the large capture volume based on 1-D vertical face searching from the users horizontal position obtained by the light stripe projection, and 3 zooming and focusing on the users irises at a distance are accurate and fast using the estimated 3-D position of a face by the light stripe projection and the PTZ camera. This paper realizes the PTZ-based iris image acquisition system, which is the most popular approach to the next generation of iris recognition system and gives technical descriptions, so that it can be a helpful reference for researchers in iris recognition eld. The rest of this paper is organized as follows. Section 2 describes the overall procedure of the proposed system and outlines some design issues in terms of acceptability and accessibility. Section 3 presents a method of 3-D face coordinate determination based on a light stripe projection. Section 4 describes zooming and focusing methods for the PTZ camera to get useful iris images based on the estimated depth in Sec. 3. Section 5 gives experimental results on the feasibility of the proposed system, its availability for
March 2009/Vol. 483

037202-2

Downloaded From: http://spiedigitallibrary.org/ on 08/27/2012 Terms of Use: http://spiedl.org/terms

Yoon et al.: Nonintrusive iris image acquisition system

Fig. 1 Large capture volume of the proposed system.

recognition of the iris images captured by the system, and the accuracy and time required in practical application scenario. Finally, Sec. 6 provides conclusions. 2 System Overview The proposed system aims to acquire useful iris images under an unconstrained user environment at a distance. The unconstrained user environment means the following three features. First, the large capture volume, as shown in Fig. 1, is created by a PTZ camera, which resolves positioning problem. Second, both iris images of a user are obtained even when the user makes a small movement by a highresolution image sensor incorporated in the PTZ camera. Third, processing time is made acceptable for users by using the light stripe projection, which estimates the users position in real time. Figure 2a shows the system conguration, which consists of a PTZ camera with a highresolution image sensor, a wide-angle camera for detecting light stripes, a light plane projector, and near-IR NIR illuminators for imaging rich texture of irises. To control the PTZ camera accurately and quickly to capture a users iris images in the large capture volume, a 3-D face coordinate estimation method based on the light stripe projection can determine initial values for panning, tilting, zooming, and focusing. Thus, it helps narrow the ranges for nding the optimal values of PTZ control. Figure 2b presents a ow chart for the iris image acquisition procedure of the proposed system. Light stripe projection gives the horizontal position of a user in real time using light stripes on the users leg and the horizontal position directly determines pan angle. Thus, the PTZ camera can turn toward the user and track the user when the user is in motion. The users face is found on the 1-D vertical line normal to the ground while the PTZ camera tilts upward. Once the face is detected, the distance between the PTZ camera and the face is calculated from the estimated 3-D face coordinate. Using preestimated relationships among distance, zoom lens position, and focus lens position with a
Optical Engineering

Fig. 2 System overview: a system conguration, and b owchart.

xed magnication, the initial zoom and focus lens positions are determined. Due to the high accuracy of the initial position of each lens, only a small amount of focus renement is required to get in-focus iris images. Since the height of the user is xed after the 3-D face coordinate is determined, the face can be tracked using newly updated horizontal position and the height. Each part of the proposed system is designed to maximize user convenience, be economical, and work feasibly in practical applications. 2.1 PTZ Camera One part of our proposed system is the PTZ camera set, which consists of a pan-tilt unit, a telephoto zoom lens, and a high-resolution image sensor. Ranges for panning, tilting, and zooming should cover the entire target capture volume. Pan and tilt ranges of the pan-tilt unit are 360 and 90 deg, respectively, which are sufcient for our target capture volume. Also, the speed of the pan-tilt unit is fast enough to track a walking user; the pan and tilt speeds are 64 and 43 deg / s, respectively. The telephoto zoom lens should cover the depth range of the capture volume and have the desired standoff. The lens should zoom in the irises of users who are in the target
March 2009/Vol. 483

037202-3

Downloaded From: http://spiedigitallibrary.org/ on 08/27/2012 Terms of Use: http://spiedl.org/terms

Yoon et al.: Nonintrusive iris image acquisition system

depth range of the capture volume is 1.5 to 3 m so that the images have the enough resolution for iris recognition. According to iris image quality standards,16 the diameter of iris images must be greater than 150 pixels to be considered as at least medium quality. Based on the fact that the diameter of the iris diris is 1 cm and that of the image of the iris dimage is 150 pixels, and the magnication M is 0.111 from Eq. 1 when a cell size of the image sensor is 7.4 7.4 m. Then, the required focal length can be estimated using Eq. 2. M= d dimage = , D diris 1

1 1 1 = + , f D d

2
Fig. 3 Detection of light stripes on the given users leg: a background image with light stripes, b detected background light stripes in the ROI, c a new wide-angle camera image with the user, and d the light stripes on the users leg detected by CC-based background subtraction.

where f represents the focal length, d represents the imageto-lens distance, and D represents the user-to-lens distance. In the proposed system, zoom lenses with focal lengths varying from 149.865 to 299.730 mm are generally required. The telephoto zoom lens used here has a focal length17 of 70 to 300 mm, which guarantees that the resolution of the iris images is at least 150 pixels in diameter in the target capture volume. In addition, the standoff is dened by the closest focusing distance of the zoom lens, which means that the lens can not focus on objects at the distance closer than the closest focusing distance and is a physical lens characteristic. According to the closest focusing distance of the lens, the standoff is 1.5 m. A high-resolution image sensor of the PTZ camera should capture useful iris images with enough resolution as well as at a distance easily. Most iris image acquisition systems at a distance use a strategy of capturing a full-face image by a high-resolution camera instead of capturing just an iris image. One advantage of this strategy is that both iris images can be obtained from a given high-resolution face image, which shows better performance for iris recognition than one-iris matching. Another advantage is that at least an iris remains in the captured image even when users move slightly. To get a full-face image guaranteeing that the diameter of each iris image is 150 pixels, the image resolution on each side must be at least 1950 pixels if the width of a given face is around 15 cm and the diameter of the iris is around 1 cm. The resolution of the highresolution camera in the proposed system is18 4 megapixels 2048 2048 pixels. NIR illuminators radiating light in the 700- to 900-mm band are necessary because even dark brown irises reveal rich textures.19 However, high-power illuminators are required to obtain useful iris images for recognition at a distance because the large f -number of the zoom lens reduces the light energy incident to the image sensor. The f -number refers to the ratio of focal length to the effective aperture diameter.20 In this case, the large f -number is caused by the long focal length and small effective aperture of the zoom lens. The long focal length of the zoom lens is required when we zoom in on an object from a distance. The size of the effective aperture shrinks to hold the large depth of eld, which is necessary for robust focusing. In general, the power of an NIR illuminator must be selected to maximize
Optical Engineering

the trade-off between obtaining sufciently bright images and guaranteeing eye safety. The overall intensity variation of captured images according to changing zoom factor is compensated by adjusting camera gain and shutter speed based on the distance between the camera and the user. 2.2 Light Stripe Projection Another part of the proposed system is the implementation of a light stripe projection. It consists of a light plane projector and a wide-angle camera. The projected light plane should cover the horizontal range of the capture volume, which is 120 deg in width and 1.5 m in depth. The light plane projector generates the NIR light plane with a wavelength of 808 nm, which is invisible to the human eye. The angle of the light plane is 120 deg and is set up horizontally at a height of around 20 cm to illuminate the given users leg, as shown in Fig. 3. The intersection of the light plane with an object surface is visible as a light stripe in the image.21 The wide-angle camera detects the light stripes on the users leg. The eld of view FOV of the wide-angle camera is coincident to the angle of the light plane to observe the whole light plane area. A visible cut lter is attached to the wide-angle camera to block visible light from other light sources such as indoor illuminators and sunlight. 3 Estimation of 3-D Face Coordinates Estimating the 3-D coordinates of a given users face consists of three phases: light stripe detection, horizontal position estimation, and vertical position estimation. Light stripe projection provides the horizontal position, which determines the panning angle directly. It enables the PTZ camera to track the user horizontally until the user stops for iris recognition. Then, the face is found while the PTZ camera tilts along a 1-D line normal to the ground. Based on the horizontal position of the user and the tilt angle where
March 2009/Vol. 483

037202-4

Downloaded From: http://spiedigitallibrary.org/ on 08/27/2012 Terms of Use: http://spiedl.org/terms

Yoon et al.: Nonintrusive iris image acquisition system

the face appears in the center of the image, the 3-D coordinates of the face are determined in the PTZ camera coordinates. 3.1 Light Stripe Detection Light stripe projection is a 3-D reconstruction technique that is based on structured lighting. By projecting a light plane into an object scene, the 3-D coordinates of image points on the light stripes can be recovered from a single image.21 In general, light stripe projection is implemented by the following three steps. The rst step is detecting entire light stripes in wide-angle camera images. These light stripes include both those on the background objects and those on a given users leg as shown in Figs. 3a and 3c. The second step is distinguishing the light stripes on the users leg and transforming the center point of those light stripes into an undistorted image coordinate. The third step is reconstructing the 3-D coordinates of the center point in the wide-angle camera coordinate system. Light stripes in a wide-angle camera image are detected by convolving each image column with the 1-D Laplacian of Gaussian LoG mask. This is based on the assumption that light stripes appear at one point on each image column because the light plane is scattered horizontally. A point of a column is regarded as light stripe if the point has the maximum LoG response in the column and the response is higher than the given threshold. Figure 3b presents the light stripes detected from Fig. 3a within the region of interest ROI, which corresponds to the horizontal region of the capture volume. Among the detected light stripes, those on the given users leg are extracted by connected-component CC-based background subtraction, which eliminates the light stripes on background objects. We assume that the background light stripe image is obtained in advance. If the light stripe points in the adjacent columns are neighbors, they are regarded as the CC. The CC-based background subtraction process removes the CCs in a new input image that overlap partially or totally with the background CC in the same location. Consequently, the light stripe remaining in the image is considered as the light stripe on a coming users leg. Figure 3d shows the detected users light stripes from a new input image Fig. 3c using CC-based background subtraction. The CC-based background subtraction is more robust than pixel-based background subtraction, which can result in strong errors even if the background or camera congurations change slightly. The center point of the light stripes on the users legs is used to estimate that users horizontal position. To compensate radial distortion on the wide-angle camera, the coordinates of the center point are rectied by the radial distortion renement method addressed in Ref. 22. In this case, the rectication process is done fast since the light stripes are rst detected in a raw image with radial distortion and only a single pointthe center point of the light stripes on the users legsis then transformed into an undistorted coordinate. 3.2 Horizontal Position Estimation The key idea of the light stripe projection technique for 3-D reconstruction is to intersect the projection ray of the examined image point with the light plane.21 In Fig. 4a, the
Optical Engineering

Fig. 4 Reconstruction of the 3-D coordinates of the light stripe on the users leg and its transformation to the PTZ camera coordinate system:24 a general light stripe projection geometry, in this case, = 0; and b coordinate transformation from the wide-angle camera coordinate system to the PTZ camera coordinate system.

reconstructed ray passing through both the image point px , y and the origin of the wide-angle camera coordinate system meets with the light plane at a certain point PXwide , Y wide , Zwide. The 3-D coordinates of the intersection are obtained23 by Eq. 3:

Xwide = Y wide = Zwide =

xb tan cos f tan x sin + y cos yb tan cos f tan x sin + y cos fb tan cos f tan x sin + y cos

where represents the angle between the light plane and the Y wide axis, represents the angle between the light plane and the Xwide axis, b represents the baseline, f represents the focal length of the wide-angle camera, and x , y represents the rectied center point of the light stripe. The 3-D coordinates corresponding to x , y are reconstructed directly if the focal length f of the camera and the geometric parameters between the camera and the light plane, , b, and are known. We assumed that = 0 since the light plane is set up parallel to the ground. Then, Eq. 3 is reduced to
March 2009/Vol. 483

037202-5

Downloaded From: http://spiedigitallibrary.org/ on 08/27/2012 Terms of Use: http://spiedl.org/terms

Yoon et al.: Nonintrusive iris image acquisition system

pan = tan1

XPTZ . ZPTZ

Fig. 5 Estimation of panning angle, tilting angle, and distance between the PTZ camera and the face: a 3-D coordinate estimation of the users face and b 1-D face detection during stepwise tilting.24

The remaining parameters, and b, are obtained by the least-square estimation using the last equation of Eq. 4 by collected data of the distance to an object Zwide and y coordinate of its light stripe.24 Then, the reconstructed 3-D coordinates of a scene point on the light stripe, PXwide , Y wide , Zwide, are obtained exactly from the image of P, px , y . This implies that 3-D reconstruction based on light stripe projection is a real-time operation. The reconstructed point PXwide , Y wide , Zwide in the wide-angle camera coordinate system is transformed into the PTZ camera coordinate system. The PTZ camera coordinate system is the rigidly transformed wide-angle camera coordinate system; it is rotated by around the Xwide axis and then translated to dZwide-PTZ in the direction of the ZPTZ axis, as shown in Fig. 4b. Equation 5 shows the transformation from Xwide , Y wide , Zwide to XPTZ , Y PTZ , ZPTZ, where hPTZ represents the height of the PTZ camera from the ground:


Xwide = xb tan f y tan Y wide = Zwide = yb tan f y tan fb tan f y tan

Since the pan angle based on the horizontal position is given in real time, the PTZ camera is able to track the user horizontally. When the user stops, the face is found while the PTZ camera tilts. The tilting angle that locates the face in the center of the image is found by using coarse and ne searching procedures. In the coarse searching phase, the face is detected in a few images obtained, while the PTZ camera tilts stepwise. Stepwise tilting partitions the height of the capture volume exclusively, as shown in Fig. 5b. The angle for a tilting step and the number of steps for the stepwise tilting are determined by the FOV of the PTZ camera and the height of the capture volume so that the PTZ camera captures a different view at each tilting angle as well as covers the entire range of height variations. This is more efcient than continuous tilting, which covers duplicated views. If the face is detected at a certain stepwise tilting angle using the AdaBoost algorithm,25 panning and tilting angles are rened to place the face in the image center. The ultimate tilt angle tilt determines the distance D between the PTZ camera and the users face as follows: D= Zd cos tilt 7

where Zd = XPTZ2 + ZPTZ21/2. 4 Zoom and Focus Control The estimated distance between the PTZ camera and the users face determines the initial zoom and focus lens position so that it enables us to nd an optimal focus lens position quickly. Finally, the focus renement process gives in-focus iris images. 4.1 Initial Zooming and Focusing Given a level of magnication, the desired zoom and focus lens position are determined if the distance between the camera and the object is known. The magnication M , which is xed for iris images to have enough resolution, yields the image-to-lens distance d based on the user-tolens distance D in Eq. 1. Since d is mapped 1-to-1 to the zoom lens position Zoom, D is eventually mapped 1-to-1 to Zoom. Given D and Zoom values, the optimal focus lens position Focus, which produces in-focus image is determined. To give the initial zoom and focus lens position at an arbitrary distance of D, the functional relationships, 1 between D and Zoom and 2 between Zoom and Focus, are approximated by collected observations. By changing the distance D of a given user by 5 cm from the PTZ camera within the capture volume, the optimal zoom and focus lens position that satisfy the conditions for iris images in terms of resolution and sharpness were recorded at each distance. The optimal zoom lens position at each distance was manually adjusted so that the diameter of the iris image was 150 pixels. The optimal focus lens position was searched automatically by assessing sharpness of the iris image sequence continuously captured while the focus lens position
March 2009/Vol. 483

1 0 0 XPTZ Y PTZ = 0 cos sin ZPTZ 0 sin cos

0 Xwide hPTZ Y wide + . 5 dZwide-PTZ Zwide

The height of the light plane Y PTZ is irrelevant in this case. As a result, XPTZ , 0 , ZPTZ represents the horizontal position of the user. 3.3 Vertical Position Estimation Figure 5a illustrates the overall panning and tilting control methodology of the PTZ camera to nd the 3-D coordinates of the users face. Based on the horizontal position of the user XPTZ , 0 , ZPTZ, the panning angle pan is determined directly by
Optical Engineering

037202-6

Downloaded From: http://spiedigitallibrary.org/ on 08/27/2012 Terms of Use: http://spiedl.org/terms

Yoon et al.: Nonintrusive iris image acquisition system

Fig. 6 Calibrated initial zoom and focus lens positions. Functional relationships a between D and Zoom and b between Zoom and Focus. The dotted lines indicate measured observations and the solid lines indicate estimated relationships.24

moved around the optimal focus lens position. The focus lens position in which the image had the highest focus measure in the image sequence was chosen as the optimal focus lens position. The iris image assessment was based on the focus measure kernel introduced in Ref. 26. The observation of the optimal zoom lens position at each distance is shown in Fig. 6a as a dotted line and that of the optimal focus lens position at each zoom lens position is shown in Fig. 6b as a dotted line. A unit step of the zoom and focus lens position refers to a step size of the stepping motors, which rotate the zoom ring and the focus ring of the zoom lens. The amount of a step can be calculated by the fact that rotating the zoom and focus lens fully requires 30,000 and 47,000 steps, respectively. Based on the preceding observations, the relationship between D and Zoom is modeled as Eq. 8. Zoom is inversely proportional to D. The unknown parameters p1 and p2 are estimated using singular value decomposition SVD. Similarly, the relationship between Zoom and Focus is modeled as Eq. 9. Zoom and Focus have linear relationship. The parameters q1 and q2 are found using least-squares estimation. The tting results are shown in Figs. 6a and 6b as solid lines, respectively. Zoom = p1 p2 , D 8 9

density function pdf, which passes through a function is inversely proportional to the magnitude of the derivative of the function.27 Figure 7 compares two cases of error propagations. In D-based Zoom estimation, the uncertainty of the output Zoom is reduced, as shown in Fig. 7a, since Zoom is inversely proportional to D. On the other hand, in Zoombased D estimation, the uncertainty of D is increased, as shown in Fig. 7b. As a result, an accurately estimated D is preferred for fast focus renement. 4.2 Focus Renement The initial focus lens position estimated from D is usually not sufciently accurate because D contains errors from horizontal position estimation and tilting angle determination. Focus renement is accomplished by searching for the optimal focus lens position in the direction of maximizing the focus measure of the captured iris images. Figure 8a shows the focus measure of an iris image sequence captured while the focus lens position moves around the initial focus lens position. The ridge of the focus measure curve is regarded as the optimal focus lens position. The maximum value of the focus measure is 100. As shown in Fig. 8b, the iris image obtained at the initial focus lens position shows high value in the focus measure and the initial focus lens position is near the optimal focus lens position. For the focus measure of the iris images, the eye regions are segmented from the full-face images. One simple eye detection method is to nd the specular reections on the eyes generated by the NIR illuminators. Specular reections usually appear as bright spots with high absolute gradient values and are surrounded with low gray values. The cropped iris regions around the specular reections are convolved with the 2-D focus assessment kernel.26 The focus renement algorithm in Ref. 28 consists of two phases: the coarse and ne searching phases, as shown in Fig. 9. Let be a single step size of the focus lens position for the ne searching phase. First, the coarse searching phase roughly nds the optimal lens position with a large step size using the gradient-ascent method and narrows the search range for the following ne searching phase. In this stage, we set the step size of the coarse
March 2009/Vol. 483

Focus = q1Zoom + q2

The D-based Zoom estimation proves to be more advantageous than the Zoom-based D estimation for focus renement. That is, the former produces a narrower search range for the optimal focus lens position than the latter. It is obvious that the error in the estimated distance D is propagated to the error in the Focus determined by using the functional relationships. Clearly, minimizing the error propagation is necessary to conne the optimal focus lens position in the narrow search range for fast focus renement process. Less severe error propagation during D-based Zoom estimation can be explained by fundamental theorem, which means that the output of the probability
Optical Engineering

037202-7

Downloaded From: http://spiedigitallibrary.org/ on 08/27/2012 Terms of Use: http://spiedl.org/terms

Yoon et al.: Nonintrusive iris image acquisition system

Fig. 7 Error propagation of a D-based Zoom estimation and b Zoom-based D estimation. Initial amounts of error in a and b are the same length dotted box. Error propagation in a is less severe than that in b. The propagated amount of error in each case is illustrated as a stripped box.24

searching as 4. The focus lens moves by 4 synchronized with the frame rate and the focus of the iris region in each captured image is assessed. The direction in which the focus lens moves in the next step is determined as the best way to increase the focus measure. When the focus measure reaches its ridge, the optimal focus lens position exists in the conned range of 4. Second, in the ne searching phase, the optimal focus lens position is found by moving the focus lens precisely. The focus of the iris images is assessed while the focus lens position moves by in the range of the conned range in a direction opposite to that of the coarse searching phase. Therefore an in-focus image with a maximum focus measure is selected from the sequence. 5 Experimental Results The proposed iris image acquisition system was evaluated based on two characteristics: acceptability and accessibility. In terms of acceptability, the conditions for convenient environmentslarge capture volume, tolerance to natural

movements, and time required for iris image capturing were veried by means of a feasibility test of the iris images captured by the system and a time evaluation on various users who participated in using the system. In terms of accessibility, the accuracy of panning, tilting, zooming and focusing control of the PTZ camera guided by light stripe projection were analyzed. 5.1 Feasibility of the Proposed Unconstrained User Environments The proposed system is designed to eliminate positioning problems as well as to be tolerant of users natural movement while they are standing with natural posture. These requirements are achieved by providing the large capture volume and by capturing face images at a high resolution, respectively. The capture volume was veried by a feasibility test for the iris images acquired in the capture volume whether they were available for iris recognition. The robustness to user movements was analyzed by two factors: rst, the high-resolution camera was able to capture irises

Fig. 8 Focus measure of an iris image sequence obtained by changing the focus lens position and the initial focus lens position estimated by the proposed method. a Focus measure of the entire image sequence. The asterisk indicates the focus measure of the iris image at the initial focus lens position. b The enlarged dotted box in a Ref. 24. Optical Engineering 037202-8 March 2009/Vol. 483

Downloaded From: http://spiedigitallibrary.org/ on 08/27/2012 Terms of Use: http://spiedl.org/terms

Yoon et al.: Nonintrusive iris image acquisition system

Fig. 9 Focus renement algorithm. In the coarse renement phase, the ridge of the focus measure is found by moving the focus lens by 4. In the ne renement phase, the iris images are captured while the focus lens position moves by , and they are assessed in terms of focusing. The optimal focus lens position refers to the position at which the focus measure of the captured iris image is at the maximum value.

Fig. 10 a Hamming distance distribution of an iris image sequence of a user at 2 m with respect to focus lens position. The depth of focus at 2 m was obtained from this. b Some examples29 of iris images captured at different focus lens positions at a distance of 2 m.

under left-and-right movements, and second, the depth of eld of the PTZ camera was large enough to cover backand-forth movements. The feasibility of the captured iris images was examined by calculating the Hamming distance with the enrolled iris images of the same identity. The enrolled iris image refers to images acquired by a laboratory-developed iris acquisition camera that captures focused iris images with more than 200 pixels in diameter at the distance of 15 cm under the NIR illuminators of 750 and 850 nm, which guarantees good-quality iris images for recognition. If the Hamming distance between an enrolled image and an image captured by the proposed system is lower than a given threshold, the captured iris image is identied as genuine. In other words, the image can be regarded as feasible for iris recognition. In the experiment, the iris codes were extracted by the Gabor wavelet26 and the well-known19 threshold of Hamming distance of the algorithm is 0.32. For the feasibility test, the iris images of a user were collected by moving the position of the user in the capture volume. The depth from the PTZ camera to the user changed by 5 cm within the range of 1.4 to 3 m, which included the depth of the proposed capture volume i.e., 1.5 to 3 m. At each position, the zoom lens position was determined to make the diameter of the iris images 150 pixels. Then, the iris images were captured continuously while the focus lens position moved from 1000 steps to +1000 steps around the optimal focus lens position. Figure 10b shows several iris images that were captured while the focus lens position changed when the user was at a distance of 2 m. The focus lens positions in this range produced fully defocused iris images, in-focus iris images, and fully defocused iris images in turn. A sequence of iris images captured at each distance was compared to the enrolled iris images in terms of the Hamming distance. Figure 10a shows an example of the Hamming distance distribution of the iris image sequence with respect to the focus lens position when the user was at 2 m. In this gure, we found the available range of focus lens position that proOptical Engineering

duced iris images with a lower Hamming distance than the threshold. We called this range depth of focus. 5.1.1 Large capture volume Based on the Hamming distance evaluation results at each distance, we were able to verify the depth of the capture volume and measure the depth of eld and the depth of focus of the system. The minimum and maximum focus lens positions of the available range at each distance are marked in Fig. 11. The space between the minimum and maximum focus lens positions represents the range in which the iris image has a Hamming distance lower than the threshold. In this gure, the depth of the proposed capture volume, 1.5 to 3 m, was veried as feasible; iris images acquired in the capture volume were useful for recog-

Fig. 11 Depth of the capture volume, depth of eld, and depth of focus.29 March 2009/Vol. 483

037202-9

Downloaded From: http://spiedigitallibrary.org/ on 08/27/2012 Terms of Use: http://spiedl.org/terms

Yoon et al.: Nonintrusive iris image acquisition system

Fig. 12 a Depth of eld and b the depth of focus of the proposed system with respect to distance.29

nition because the camera was able to nd the optimal focus lens position to acquire good-quality iris images in terms of recognizability. 5.1.2 Tolerance of natural movements of users The depth of eld of the proposed system was able to cope with back-and-forth user movements when the user was standing with natural posture. The depth of eld refers to the permissible depth variations of user under xed lens conditions.29 This means that the iris images of a user captured while the user moves within the depth of eld are still available for recognition without additional focusing controls. The depth of eld at each distance can be estimated in Fig. 11. Figure 12a shows the estimated depth of eld with respect to distance. Note that the graph in Fig. 12a looks continuous because the curve-tting results of two lines in Fig. 11 were used for the evaluation of the depth of eld. The depth of eld tended to increase when the distance between the camera and the user increased. In the capture volume, the depth of eld was 5 to 9.5 cm, which covered the inevitable movements of users during the iris image acquisition phase. While the depth of eld shows system tolerance to backand-forth user movements, the strategy of capturing fullface images with the high-resolution camera instead of capturing only iris images achieves tolerance to left-and-right movements. In normal situations, both iris images are cropped from full face images. Even if the users position shifts during the process, at least one iris usually still exists in the image. However, if a fully zoomed iris image is captured in 640 480 pixels with a standard camera, it requires precise panning and tilting to capture the eye regions. Unfortunately, in general, this means that the iris can be lost from the image even if the user moves slightly. We compared the motion tolerance of capturing a full face with that of capturing an eye when the system was exposed to the natural user movements. If the user appeared in the capture volume, the proposed system captured both the iris images from a high-resolution full-face image. Then, the user kept the initial position and stood with natural posture for a minute. At the same time, the PTZ camera captured the face images every second without any panning, tilting, zooming, or focusing. This experiment was performed on 11 people and 10 times each. Figure 13a
Optical Engineering

shows the initial full-face image captured by the highresolution PTZ camera and the dotted box indicates the 640- 480-pixel region around the iris. When the user moved, the high-resolution camera still contained both irises while the 640- 480-pixel region sometimes lost the iris, as shown in Fig. 13b. The movement of the users was measured by the pixel distance between the iris center of the initial frame and that of the following frame, shown as in Fig. 13b. Figure 13c shows a histogram of d . The d were 122.86 and mean and standard deviations of d

Fig. 13 User movements measured on image when users were standing naturally: a initial frame acquired by the PTZ camera at high resolution, where the box indicates a region of 640 480 pixels; b an image from a 1-min image sequence in which is the distance bethe eye escaped the initial eye region, here d tween the iris center of the initial frame and that of the current frame; . and c the histogram of d March 2009/Vol. 483

037202-10

Downloaded From: http://spiedigitallibrary.org/ on 08/27/2012 Terms of Use: http://spiedl.org/terms

Yoon et al.: Nonintrusive iris image acquisition system

93.21 pixels, respectively. Considering that the margin was 320 pixels in width from the center of the initial 640 480-pixel region, average movements of users caused partial occlusion of eye regions, which led to a failed boundary detection of the eye and iris regions. Movements over 200 pixels occurred about 17% of the time, which meant that the iris pattern was lost from the FOV. However, the full-face images always contained eye regions during the movements. 5.1.3 Accessibility for in-focus iris images The depth of focus refers to the permissible error range of the focus lens position to obtain feasible iris images for recognition. In the experiments, the depth of focus was evaluated in the sense of iris recognition rather than in the sense of optics, since even slightly defocused iris images could be identied correctly. The depth of focus is a measure of the characteristic of accessibility since systems with large depth of focus do not require elaborate focusing algorithms. Furthermore, large depth of focus brings fast iris image acquisition because the optimal focus lens position is found using large step sizes during the ne searching process. As shown in Fig. 12b, the depth of focus of the proposed system showed variations within 500 to 2000 steps in the capture volume. This means that the control error of focusing was acceptable by at least 500 steps. Since the error of the initial focus lens position was around 1000 steps see the next section, either the iris images captured at the initial focus lens position were available if the initial focus lens position was in the depth of focus, or the optimal focus lens position could be found within the conned search range even if the initial focus lens position was out of the depth of focus. 5.2 Accuracy of PTZ Control Based on Light Stripe Projection In the proposed PTZ control method, an accurate distance estimation between the PTZ camera and the given users face is necessary to determine the initial zoom and focus lens positions, which can narrow the search range for optimal focus lens position. However, direct accuracy evaluation of the estimated distance is difcult because it is not easy to obtain the precise ground truth data of the distance between two points in a 3-D space. Instead of measuring the distance estimation accuracy directly, the accuracy of the initial focus lens position was measured by observing the error between the optimal focus lens position and initial focus lens position. If the initial focus lens position is near the optimal one, PTZ control can be regarded as accurate. 5.2.1 Error of horizontal position estimation of the user The initial focus lens position error was induced by the horizontal position estimation error and the face detection error. The error of horizontal position estimation using light stripe projection was due to limited image resolution; that is, quantization error. As shown in Fig. 14a, a single pixel in the image plane is matched with not a single point in three dimensions, but a certain area. Therefore, estimated depth by light stripe projection has uncertainty. Another feature is that image of the light stripe on a close object shows a lower level of ambiguity in depth than that of the
Optical Engineering

Fig. 14 Quantization error of depth estimation in light stripe projection: a uncertainty of depth estimation due to limited pixel resolution and b measured errors at 43 different distances solid line and quantization error bound according to the distance dotted line.

light stripe on objects further away. The dotted line in Fig. 14b represents the range of quantization error of depth estimation, which has more ambiguity in depth estimation at a further distance. To evaluate depth estimation accuracy, we compared the horizontal distance estimated by light stripe projection with the distance measured by a laser distance-meter. A plane board was used for accurate experiment, and its light stripe images were collected by changing the distance from the wide-angle camera to the plane in the capture volume. In Fig. 14b, measured errors are shown as a solid line. In the capture volume, depth estimation by light stripe projection was mostly successful within the quantization error bound and the error bounded within 2 cm. However, depth estimation errors on objects at around 1.5 m occurred beyond the quantization error bounds. This arose from the detection error of the center point of a light stripe because the light stripes at a close distance were relatively thick, and then the center point changed with variations of 1 pixel at a time. Also, the error curve formed a zigzag shape since the plane was located inside the scene depth coverage of a pixel at random. 5.2.2 Face detection error Face detection errors also affected the accuracy of the initial focus lens position. We used a well-known face detector provided by OpenCV.30 Since the face detector was trained under visible illumination, we needed to show the performance consistency of the face detector under NIR illumination. We collected 540 images from 35 users that
March 2009/Vol. 483

037202-11

Downloaded From: http://spiedigitallibrary.org/ on 08/27/2012 Terms of Use: http://spiedl.org/terms

Yoon et al.: Nonintrusive iris image acquisition system

Fig. 15 Face detection under the NIR illumination: a 10 different user positions in the widest FOV of the PTZ camera, and b examples of captured face images at each position with detection results.29

included faces of various sizes and locations under NIR illumination by specifying users position, as shown in Fig. 15a. A set of images captured at each position, is shown in Fig. 15b. Note that this experiment was designed to estimate the performance of the face detector in the given condition of illumination and various positions of various users, so the users position in Fig. 15b does not mean the capture volume. The size of the high-resolution face image was reduced by 1 / 100 because the speed of the face detector dropped severely when the image size was too large. Resizing the image also has another advantage; it eliminates false positives that occurred because of unnecessary details. The face detection rate was 98.7% in the database.29 5.2.3 Initial focus lens position error Finally, the 3-D face coordinate estimation error results in an error of the initial focus lens position. The error of initial focus lens position was measured 100 times in a users position, and it was repeated while changing the users position from the PTZ camera in the capture volume. This experiment was done with a mannequin, which avoided error due to the users movement. Figure 16 shows the error bar of the initial focus lens position estimated for 100 trials at each distance. In most cases, the mean of the initial focus lens positions was lower than 1000 steps. However, the
Optical Engineering

initial focus lens position errors at around 1.5 m were large. This comes from large depth estimation errors shown in Fig. 14b. This means that the error of horizontal position estimation propagated to the determination of the initial focus lens position. Nevertheless, most initial focus lens positions can be resilient to optimal focus lens positions during the focus renement phase.

Fig. 16 Error bar of initial focus lens position with respect to distance. The mean errors at most distances were less than 1000 steps, which could mostly be compensated quickly by focus renement, except for the errors at around 1.5 m. For each distance, the variance of errors was negligible. March 2009/Vol. 483

037202-12

Downloaded From: http://spiedigitallibrary.org/ on 08/27/2012 Terms of Use: http://spiedl.org/terms

Yoon et al.: Nonintrusive iris image acquisition system

Fig. 17 Iris images captured at a distance: a enrolled image captured by a conventional iris recognition system, and iris images of the same person captured using the proposed system at b 1.5, c 2.0, d 2.5, and e 3.0 m Ref. 29.

Figures 17(b)–17(e) present iris images that were captured by the proposed system at a distance. Compared to the enrolled iris image acquired by a laboratory-developed iris acquisition camera [Fig. 17(a)], they were of high quality in terms of recognizability; the Hamming distances between the enrolled image and the captured images were 0.201 [Fig. 17(b)], 0.189 [Fig. 17(c)], 0.240 [Fig. 17(d)], and 0.240 [Fig. 17(e)], all lower than the given threshold of 0.32. Figure 18 presents several examples of iris images of various users captured by the proposed system at a distance during a real demonstration.

Fig. 18 Examples of left and right iris images captured by the proposed system. The distances shown are the values estimated by light stripe projection and tilt angle estimation; the examples show the quality of both iris images captured at various distances.

5.3 Time Required for Iris Image Acquisition

In this section, we present the time taken by the entire acquisition process. We measured the time required by nine participants in a real situation: one experienced user, two relatively less experienced users, and six inexperienced users. The participants were instructed to stand at any position within the capture volume and stare at the PTZ camera during the image acquisition process. The time for tilting, initial zooming and focusing, and focus refinement was measured separately right after the user stopped. The time for panning was not recorded, because panning was done continuously while the user moved.

Table 1 shows the average time for each phase. The frame rate of the PTZ camera was 8 frames/s. The average time required to obtain in-focus iris images using the proposed system was 2.479 s (with an Intel Core2 CPU, 2.4 GHz), which is comparable to conventional iris recognition systems.

The time for tilting depends on the user's height. The process includes the time for tilting the PTZ camera and detecting the face. However, since only a few of the images captured during stepwise tilting were used, the time variations due to height variation were not critical. The time required for initial zooming and focusing was fairly constant because the lens positions were directly determined by the 3-D face coordinates. There were slight variations according to distance; the zoom lens rotated more and the focus lens rotated less when a user was farther away from the PTZ camera, but the time variations were not significant.

For the time required for focus refinement, the proximity of the initial focus lens position to the optimal focus lens position was the critical factor. Based on the finding that the initial focus lens position is usually located within fewer than 1000 steps of the optimal focus lens position, we set the step size for the fine searching phase to 50 steps and, accordingly, the step size for the coarse searching phase (4×) to 200 steps. This means that the coarse searching phase took less than five frames in most cases. Moreover, the fine searching phase used a firmly bounded number of frames within a confined range; a sketch of this coarse-to-fine search follows.
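A minimal sketch of this two-phase refinement, assuming a focus measure such as the one selected in Ref. 28 is available as `focus_measure`, and that `move_focus` and `grab_frame` wrap the camera interface (all three are hypothetical stand-ins for the system's actual drivers):

```python
def refine_focus(initial_pos, move_focus, grab_frame, focus_measure,
                 coarse_step=200, fine_step=50):
    """Coarse-to-fine focus refinement around the initial lens position."""
    def score_at(pos):
        move_focus(pos)
        return focus_measure(grab_frame())

    # Coarse phase: climb in 200-step increments away from the initial
    # estimate until the focus measure stops improving. When the estimate
    # is within 1000 steps of the optimum, this takes only a few frames.
    pos, best = initial_pos, score_at(initial_pos)
    probe = score_at(initial_pos + coarse_step)
    step = coarse_step if probe > best else -coarse_step
    if probe > best:
        pos, best = initial_pos + coarse_step, probe
    while True:
        candidate = score_at(pos + step)
        if candidate <= best:
            break
        pos, best = pos + step, candidate

    # Fine phase: exhaustive 50-step sampling inside the final coarse
    # interval, firmly bounding the number of extra frames.
    return max(range(pos - coarse_step, pos + coarse_step + 1, fine_step),
               key=score_at)
```

The step sizes mirror the 50/200-step choices above; the exact stopping rule of the deployed system may differ.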

The experimental results show that eight to nine frames were taken to obtain in-focus iris images during the focus refinement stage.

Table 1 Time required for each stage and the average time to obtain feasible iris images (unit: seconds).

              Tilt     Initial Zooming and Focusing     Focus Refinement     Total
Average time  0.857    0.438                            1.183                2.479

6 Conclusions

A novel iris image capturing system was proposed to improve the acceptability and accessibility of iris recognition systems. Acceptability is achieved in terms of user position, movement, and time required. A large capture volume of 120 deg (width) × 1 m (height) × 1.5 m (depth) enables users to pay less attention to positioning at a distance and makes the proposed system applicable to users of various heights. A high-resolution PTZ camera and a sufficient depth of field mean that users can stand naturally while the iris images are captured. Both iris images are successfully cropped from full-face images captured by the high-resolution camera even when the user moves slightly to the left or right. The depth of field of the proposed system makes it tolerant to back-and-forth movement. It takes an average of 2.5 s to capture the in-focus iris images.

Accessibility is achieved by estimating the face coordinates based on real-time detection of the user's horizontal position using light stripe projection and by holding enough depth of focus. The horizontal position of the user determines the pan angle exactly and allows the face to be detected along a 1-D vertical line. The estimated distance between the PTZ camera and the face then determines the initial zoom and focus lens positions with high accuracy. Since the face coordinate information reduces most parts of the PTZ control from searching and optimization problems to deterministic ones, PTZ control is performed quickly. In addition, the accuracy of the initial focus lens position contributes to fast focus refinement, and a sufficient depth of focus eliminates the need for an elaborate focus refinement algorithm.

The proposed system has the following three contributions compared with previous works: (1) the capture volume is greatly increased by using a PTZ camera guided by light stripe projection, (2) the PTZ camera can track a user's face easily in the large capture volume based on 1-D vertical face searching from the user's horizontal position obtained by the light stripe projection, and (3) zooming and focusing on the user's irises at a distance are accurate and fast using the 3-D position of the face estimated by the light stripe projection and the PTZ camera.

For further research, efficient illumination control is required. Because the combination of a high-resolution camera and a lens with a large f-number reduces the total incident light energy, we used bulky illuminators that emitted NIR light continuously. Low-power synchronized flash illuminators could be one solution. In the future, the proposed system will be applied to moving users by addressing the degradation of iris image quality that can occur with motion blurring.

Acknowledgments

This work was supported by the Korea Science and Engineering Foundation (KOSEF) through the Biometrics Engineering Research Center (BERC) at Yonsei University.

References
1. J. Wayman, A. Jain, D. Maltoni, and D. Maio, Eds., Biometric Systems: Technology, Design and Performance Evaluation, Springer, London (2005).
2. R. P. Wildes, "Iris recognition: an emerging biometric technology," Proc. IEEE 85(9), 1348–1363 (1997).
3. J. Daugman, "Statistical richness of visual phase information: update on recognizing persons by iris patterns," Int. J. Comput. Vis. 45(1), 25–38 (2001).
4. J. Daugman and C. Downing, "Epigenetic randomness, complexity and singularity of human iris patterns," Proc. R. Soc. London, Ser. B 268(1477), 1737–1740 (2001).
5. J. L. Wayman, "Fundamentals of biometric authentication technologies," Int. J. Image Graph. 1(1), 93–113 (2001).
6. Atos Origin, "UK passport service biometrics enrolment trial: report," 2005; available at http://www.ips.gov.uk/passport/downloads/UKPSBiometrics-Enrolment-Trial-Report.pdf (accessed Dec. 31, 2008).
7. IrisAccess 3000, LG; available at http://www.lgiris.com/ps/products/previousmodels.htm (accessed Dec. 31, 2008).
8. BM-ET300, Panasonic; available at http://catalog2.panasonic.com/webapp/wcs/stores/servlet/ModelDetail?displayTab=O&storeId=11201&catalogId=13051&itemId=67115&catGroupId=16817&surfModel=BM-ET300 (accessed Dec. 31, 2008).
9. J. R. Matey, O. Naroditsky, K. Hanna, R. Kolczynski, D. J. Loiacono, S. Mangru, M. Tinker, T. M. Zappia, and W. Y. Zhao, "Iris on the move: acquisition of images for iris recognition in less constrained environments," Proc. IEEE 94(11), 1936–1947 (2006).
10. IrisPass-M, OKI; available at http://www.oki.com/en/iris/ (accessed Dec. 31, 2008).
11. U. M. Cahn von Seelen, T. Camus, P. L. Venetianer, G. G. Zhang, M. Salganicoff, and M. Negin, "Active vision as an enabling technology for user-friendly iris identification," in Proc. 2nd IEEE Workshop on Automatic Identification Advanced Technologies, pp. 169–172 (1999).
12. G. Guo, M. J. Jones, and P. Beardsley, "A system for automatic iris capturing," Mitsubishi Electric Research Laboratories, TR2005-044 (2005); available at http://www.merl.com/publications/TR2005-044/ (accessed Dec. 31, 2008).
13. F. Bashir, P. Casaverde, D. Usher, and M. Friedman, "Eagle-Eyes: a system for iris recognition at a distance," in Proc. IEEE Conf. on Technologies for Homeland Security, pp. 426–431 (2008).
14. IOM Drive-Through System, Sarnoff Corporation; available at http://www.sarnoff.com/products/iris-on-the-move (accessed Dec. 31, 2008).
15. AOptix; available at http://www.aoptix.com/biometrics.html (accessed Dec. 31, 2008).
16. ANSI INCITS 379-2004: Iris Image Interchange Format.
17. AF Zoom-Nikkor 70–300 mm f/4–5.6D ED (4.3×), Nikon; available at http://nikonimaging.com/global/products/lens/af/zoom/af_zoom70-300mmf_4-56d/index.htm (accessed Dec. 31, 2008).
18. SVS4020, SVS-VISTEK; available at http://www.svsvistek.com/camera/svcam/SVCAM%20GigEVision/svs_gige_line.php (accessed Dec. 31, 2008).
19. J. Daugman, "Probing the uniqueness and randomness of iriscodes: results from 200 billion iris pair comparisons," Proc. IEEE 94(11), 1927–1935 (2006).
20. E. Hecht, Optics, 4th ed., Addison Wesley, San Francisco, CA (2002).
21. R. Klette, K. Schluns, and A. Koschan, Computer Vision: Three-Dimensional Data from Images, Springer (1998).
22. H. G. Jung, Y. H. Lee, P. J. Yoon, and J. Kim, "Radial distortion refinement by inverse mapping-based extrapolation," in Proc. IAPR Int. Conf. on Pattern Recognition, pp. 675–678 (2006).
23. H. G. Jung, P. J. Yoon, and J. Kim, "Light stripe projection based parking space detection for intelligent parking assist system," in Proc. IEEE Intelligent Vehicle Symp., pp. 962–968 (2007).
24. S. Yoon, H. G. Jung, J. K. Suhr, and J. Kim, "Non-intrusive iris image capturing system using light stripe projection and pan-tilt-zoom camera," in Proc. IEEE Computer Society Workshop on Biometrics (in association with CVPR '07), pp. 1–7 (June 18, 2007).
25. P. Viola and M. J. Jones, "Robust real-time face detection," Int. J. Comput. Vis. 57(2), 137–154 (2004).
26. J. Daugman, "How iris recognition works," IEEE Trans. Circuits Syst. Video Technol. 14(1), 21–30 (2004).
27. A. Papoulis and S. U. Pillai, Probability, Random Variables, and Stochastic Processes, 4th ed., McGraw-Hill, New York (2002).
28. M. Subbarao and J. Tyan, "Selecting the optimal focus measure for autofocusing and depth-from-focus," IEEE Trans. Pattern Anal. Mach. Intell. 20(8), 864–870 (1998).
29. S. Yoon, K. Bae, K. R. Park, and J. Kim, "Pan-tilt-zoom based iris image capturing system for unconstrained user environments at a distance," in Proc. 2nd Int. Conf. on Biometrics, Lecture Notes in Computer Science, Vol. 4642, pp. 653–663, Springer, Berlin (2007).
30. OpenCV; available at http://sourceforge.net/projects/opencvlibrary/ (accessed Dec. 31, 2008).

Soweon Yoon received her BS and MS degrees from the School of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea, in 2006 and 2008, respectively. She is currently a PhD student with the Department of Computer Science and Engineering, Michigan State University. Her research interests include pattern recognition, image processing, and computer vision for biometrics.

Ho Gi Jung received his BS, MS, and PhD degrees in electronic engineering from Yonsei University, Seoul, Korea, in 1995, 1997, and 2008, respectively. He has been with the Mando Corporation Global R&D H.Q. since 1997. He developed environment recognition algorithms for a lane departure warning system (LDWS) and adaptive cruise control (ACC) from 1997 to 2000, and an electronic control unit (ECU) and embedded software for an electrohydraulic braking (EHB) system from 2000 to 2004. Since 2004, he has developed environment recognition algorithms for an intelligent parking assist system (IPAS), collision warning and avoidance, and an active pedestrian protection system (APPS). His interests are automotive vision, embedded software development, driver assistance systems (DASs), and active safety vehicles (ASVs).

Kang Ryoung Park received his BS and MS degrees in electronic engineering from Yonsei University, Seoul, Korea, in 1994 and 1996, respectively, and his PhD degree in computer vision from the Department of Electrical and Computer Engineering, Yonsei University, in 2000. He was an assistant professor with the Division of Digital Media Technology, Sangmyung University, from March 2003 to February 2008, and since March 2008 he has been an assistant professor with the Department of Electronics Engineering, Dongguk University. He has also been a research member of the Biometrics Engineering Research Center (BERC). His research interests include computer vision, image processing, and biometrics.

Jaihie Kim received his BS degree in electronic engineering from Yonsei University, Seoul, Korea, in 1979, and his MS degree in data structures and his PhD degree in artificial intelligence from Case Western Reserve University, Cleveland, Ohio, in 1982 and 1984, respectively. Since 1984, he has been a professor with the School of Electrical and Electronic Engineering, Yonsei University. He currently directs the Biometrics Engineering Research Center in Korea. His research areas include biometrics, computer vision, and pattern recognition. Prof. Kim currently chairs the Korean Biometric Association.
