
EE 678: Application Assignment

Fingerprint Recognition Using the Dual Tree Complex Wavelet Transform
Group 13 - Nachiket Deo (10d070006), Khushal Kharade (10d070023), Nitish Mital (10d070055)
Abstract:
Image-based and minutiae-based methods are the two major approaches to fingerprint recognition. In this work, we present an image-based fingerprint recognition method that uses the wavelet transform. Unlike previous wavelet methods, feature extraction is based on the Dual Tree Complex Wavelet Transform designed by Selesnick [1], operating on blocks of a normalized region of interest (ROI). Building the ROI requires alignment, namely locating a reference point and correcting the rotation. The application assignment involves building a classifier that recognizes a user from an image of his or her fingerprint. The proposed method has been tested on a small fingerprint database using an SVM classifier. The high recognition rates achieved show that the proposed method constitutes an efficient solution for a small-scale fingerprint recognition system.

INTRODUCTION
Fingerprint recognition has been explored extensively in past years, yet designing an effective and efficient fingerprint recognition method remains difficult. Minutiae-based and image-based methods are the two major approaches. Image-based methods offer higher computational efficiency and are also effective when the quality of the fingerprint image is enhanced by preprocessing. Ridges, furrows and some of their cross-points form the image structure of a fingerprint. Minutiae-based methods focus mainly on these cross-points, while image-based methods focus on the global fingerprint image structure. Because minutiae-based methods are sensitive to the quality of the fingerprint image and to the varying number of cross-points, many recent works have focused on image-based fingerprint features.
The filter bank-based representation is a feature-based technique that captures both the local and the global details in a fingerprint as a compact, fixed-length feature vector (FingerCode).

Dual Tree Complex Wavelet Transform
The complex wavelet is expressed as $\psi(t) = \psi_h(t) + j\,\psi_g(t)$, where $\psi_h$ and $\psi_g$ form a Hilbert pair. Wavelet filter-bank analysis is applied to the image, first row-wise and then column-wise, to obtain the coefficients of the real and imaginary wavelets. These coefficients are then used for feature extraction. The analysis filter bank is illustrated below.

Fig.1 Analysis Filter Bank in Dual Tree CWT

Here, h0(n) is the low-pass filter and the Hilbert pair of g0(n); correspondingly, h1(n) is the high-pass filter and the Hilbert pair of g1(n). In this application we have used the filters defined by Selesnick, since designing such a filter pair is a fairly sophisticated task. The Selesnick filters, with 10 coefficients each for the LPF and HPF, are shown below.

Fig. 2 Filters used in the 1D wavelet decomposition

In this application, we assume that the rows and columns are separable, so the 2D transform of the image is obtained by applying the 1D transform successively to the rows and then to the columns.




Fig.3 Single-stage decomposition by the dual tree CWT
As Fig.3 suggests, the single-stage decomposition looks quite similar to a 2-level real DWT decomposition. We used two stages of decomposition in this assignment.
FINGERPRINT IMAGE PREPROCESSING
Image alignment and normalization are applied before proceeding further. The fingerprint images are not all oriented and positioned in the same way, so we have to find a reference point in each image and align the images with respect to that point.
Core Point Detection and Rotation:
We define the reference point of a fingerprint as the point of maximum curvature of the concave ridges in the fingerprint image. Earlier approaches relied on the Poincare index for this detection; however, these approaches were not immune to local noise. The local orientations of the ridges are estimated using the Sobel operator to compute the gradients. Singular points are defined as points where the orientation field is discontinuous. We extract the singular points from the orientation field and classify each as a core point or a delta point.

Fig.4 The red square is the detected core point
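A minimal sketch of the orientation-field estimation step, assuming the standard block-wise least-squares averaging of Sobel gradients (the block size and function names are our own); the core point would then be located where this field changes most abruptly.

import numpy as np
from scipy import ndimage

def orientation_field(img, block=16):
    gx = ndimage.sobel(img.astype(float), axis=1)   # horizontal gradient
    gy = ndimage.sobel(img.astype(float), axis=0)   # vertical gradient
    h, w = img.shape
    theta = np.zeros((h // block, w // block))
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            bx = gx[i:i + block, j:j + block]
            by = gy[i:i + block, j:j + block]
            # Least-squares dominant ridge direction of the block's gradients.
            num = 2.0 * np.sum(bx * by)
            den = np.sum(bx ** 2 - by ** 2)
            theta[i // block, j // block] = 0.5 * np.arctan2(num, den)
    return theta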
Alignment of the fingerprint images by Rotation:



Fig.5 Original images (left) and aligned images (right)
We take sections of the images with their core points as the center. We then rotate each image so that the orientation above the core point is 0, which brings all the images to a common angle. The alignment is shown in Fig. 5.
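A hedged sketch of this rotation step: crop a window centered on the detected core point and rotate it so that the ridge orientation measured just above the core becomes zero. The names `core`, `half` and `angle_deg` are illustrative, not from the original implementation.

import numpy as np
from scipy import ndimage

def align(img, core, angle_deg, half=96):
    r, c = core
    patch = img[r - half:r + half, c - half:c + half]
    # Rotate by the negative of the measured orientation (in degrees).
    return ndimage.rotate(patch, -angle_deg, reshape=False, mode="nearest")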
Normalization: After the alignment is done, the region of interest is defined as a 128x128-pixel window. Since the contrast is not the same in every image, we normalize all the images; this matters because feature extraction uses the magnitudes of the wavelet coefficients, which are affected by pixel intensity.

Fig.6 Original Image (left) and Normalized Image (right)
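A minimal sketch of the normalization, assuming the common mean/variance normalization used for fingerprint images: each ROI is mapped to a desired mean M0 and variance V0 so that coefficient magnitudes are comparable across images. The target values below are illustrative.

import numpy as np

def normalize(roi, M0=100.0, V0=100.0):
    roi = roi.astype(float)
    m, v = roi.mean(), roi.var()
    dev = np.sqrt(V0 * (roi - m) ** 2 / max(v, 1e-12))
    return np.where(roi > m, M0 + dev, M0 - dev)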





Feature Extraction

Fig.7 Region of interest. 64 blocks of size 16x16 each.

The DTCWT is applied to every row of each block to form a new image v. The DTCWT is then applied to every column of each block of the new image v.

Fig.8 Row-decomposed image

A 16x16 block after the two-stage dual tree wavelet decomposition looks like this. The lower-right block corresponds to the low frequencies, i.e. LL. Only the magnitudes of the other 8 blocks, which correspond to the high frequencies (the details), are included when defining the features.



Fig.9 Two level decomposition of a block (16x16)

After taking the DTCWT, we get the complex wavelet coefficients $c(x, y) = c_r(x, y) + j\,c_i(x, y)$.
The magnitude information is computed as $|c(x, y)| = \sqrt{c_r(x, y)^2 + c_i(x, y)^2}$.

The mean energy $e^1_j$ and standard deviation $e^2_j$ of subband $j$ are defined as
$$e^1_j = \frac{1}{N^2}\sum_{k=1}^{N^2} C^j_k, \qquad e^2_j = \sqrt{\frac{1}{N^2}\sum_{k=1}^{N^2}\left(C^j_k - e^1_j\right)^2},$$
where $N^2$ varies with the size of the block and $C^j_k$ represents the magnitude of the $k$-th complex coefficient of subband $j$.
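A hedged sketch of the per-block feature computation: the magnitude of each complex detail subband is summarized by its mean energy and standard deviation. The container `detail_subbands`, holding the 8 high-frequency subbands of a 16x16 block after the two-stage DTCWT as (real, imaginary) pairs, is an illustrative name, not from the original code.

import numpy as np

def block_features(detail_subbands):
    feats = []
    for real_part, imag_part in detail_subbands:
        mag = np.sqrt(real_part ** 2 + imag_part ** 2)   # |c(x, y)|
        e1 = mag.mean()                                   # mean energy
        e2 = mag.std()                                    # standard deviation
        feats.extend([e1, e2])
    return np.array(feats)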

Recognition
The Dataset:
The dataset consists of 5 fingerprint images for each of 23 users. 4 images per user are used to train the classifier and the remaining image is used to test it. Thus the test data consists of 23 images, one per user.
Classification:
Initially our plan was to use a neural network to classify users based on the feature vector extracted from their fingerprints. However, due to the small size of the dataset and the large number of parameters in a neural network, there was a high risk of overfitting on the training data. We therefore opted for simpler classifiers:
(1) Nearest Neighbor (Euclidean distance) classifier, and
(2) Support Vector Machines
Nearest Neighbor Classifier: All training data points belonging to the same class are averaged to yield a centroid, so each class is represented by its respective centroid. When a test feature vector is provided, the class whose centroid is closest to the test vector in terms of Euclidean distance is declared as its class.
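A minimal sketch of this nearest-centroid rule (function names are illustrative): each class is represented by the mean of its training feature vectors, and a test vector is assigned to the class whose centroid is closest in Euclidean distance.

import numpy as np

def train_centroids(X, y):
    """X: (n_samples, n_features), y: class labels. Returns {label: centroid}."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def classify(x, centroids):
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))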
Support Vector Machines: Support Vector Machines are discriminative classifiers. Training an SVM to discriminate between two classes involves finding the hyperplane in feature space that best separates the data points of the two classes, in terms of a loss function and a regularization term. A single SVM can discriminate between only 2 classes, so for our 23 possible users (i.e. 23 classes) we trained 23 SVMs in a one-vs-rest manner, one per user. For a new test vector, the decision values of the 23 SVMs are compared and the class whose SVM gives the best value is declared as the prediction.
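A hedged sketch of one-vs-rest classification with linear SVMs. The report used LIBSVM [4]; scikit-learn is substituted here purely for illustration, and the function names are ours.

import numpy as np
from sklearn.svm import LinearSVC

def train_one_vs_rest(X, y):
    """Train one binary SVM per class label (class c vs the rest)."""
    models = {}
    for c in np.unique(y):
        clf = LinearSVC()
        clf.fit(X, (y == c).astype(int))
        models[c] = clf
    return models

def predict(x, models):
    # Pick the class whose SVM gives the largest decision value.
    scores = {c: m.decision_function(x.reshape(1, -1))[0] for c, m in models.items()}
    return max(scores, key=scores.get)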

Experimental Results
Classification accuracies obtained (number of correctly classified finger prints):
Classifier \ Feature Set | Mean Energy           | Variance              | Both (concatenated)
Nearest Neighbor         | 78.26% (18 out of 23) | 73.91% (17 out of 23) | 78.26% (18 out of 23)
SVM                      | 91.30% (21 out of 23) | 69.57% (16 out of 23) | 82.61% (19 out of 23)
Table 1. Results of matching
We can see that the SVMs performed better than the nearest neighbor classifier. This may be due to the discriminative training of SVMs: they are trained to best distinguish even the confusable fingerprint classes.
Also, the mean-energy-based features gave better accuracies than the variance-based feature vector. Concatenating the two actually reduced the accuracy compared to using mean energy alone, suggesting that the two do not carry complementary information.
Conclusion and future work
Conclusion: A new method of fingerprint recognition using complex-wavelet-based features has been proposed. The high recognition rates achieved by our method, as well as its low computational complexity, show that it can be used to efficiently solve a security problem involving a small number of fingerprint images. The main problem we encountered was the very small database: we had only 5 usable images per user, because the core point could not be located in some images, while others had issues defining a 128x128 region of interest. A database with better resolution would therefore definitely improve accuracy.
Future work: There are a few challenges that could be attempted to achieve higher accuracy. The first would be designing one's own wavelet filters that are Hilbert pairs of each other and useful for extracting features. Another is alignment, i.e. rotating the images to a common predefined orientation, which would improve accuracy.
An area worth exploring is using the phase information of the wavelet coefficients independently to extract relevant features and perform matching with them.


Acknowledgement
The application assignment was in itself an excellent learning experience in both the technical and managerial domains. We thank Prof. V. M. Gadre for giving us this opportunity to work on this topic and to experience applications of the wavelets we learn in class. We appreciate the help of Ishan Dashottar (M.Tech student, IIT Bombay) and Parameshwar Birajdar (PhD student, IIT Bombay) at various stages of our work.

References
1. I. W. Selesnick, R. G. Baraniuk, and N. Kingsbury, "The dual-tree complex wavelet transform - A coherent framework for multiscale signal and image processing," IEEE Signal Processing Magazine, 22(6):123-151, November 2005.
2. FVC2002 fingerprint database: http://bias.csr.unibo.it/fvc2002/
3. Ting Tang, "Fingerprint recognition using wavelet domain features."
4. Chih-Chung Chang and Chih-Jen Lin, LIBSVM - A Library for Support Vector Machines: http://www.csie.ntu.edu.tw/~cjlin/libsvm/
