
INTERNATIONAL JOURNAL FOR TRENDS IN ENGINEERING & TECHNOLOGY

VOLUME 4 ISSUE 2 APRIL 2015 - ISSN: 2349 - 9303

Quality Prediction in Fingerprint Compression


T. Pavithra1
Kalasalingam Institute of Technology, ECE
tpavithra333@gmail.com

P. Anu Bharathy2
Kalasalingam Institute of Technology, ECE
anu20bharathy@gmail.com

Abstract— A new algorithm for fingerprint compression based on sparse representation is introduced. First, a dictionary is constructed from a set of fingerprint patches. A dictionary can be designed either by selecting one from a prespecified set or by adapting it to a set of training signals; in this paper, the K-SVD algorithm is used to construct the dictionary. After the dictionary is computed, the image is quantized, filtered, and encoded. The resulting image may be of three qualities: Good, Bad, or Ugly (the GBU problem). In this paper, we address the GBU problem by predicting the quality of the image.
Index Terms— Compression, DCT, DWT, Fingerprint, Histogram, K-SVD, Sparse representation.

1 INTRODUCTION

Due to its uniqueness, the fingerprint is considered the most important of all biometric characteristics and has been widely used for person identification. With the advancement of technology, person identification has become digitalized. Fingerprints are used by crime-investigation agencies such as the FBI and in forensics. Fingerprint recognition is popular because of its simplicity; a fingerprint is composed mainly of ridges and valleys. The older technique used in fingerprint compression is based on wavelet scalar quantization [1]. The K-SVD algorithm (a generalization of K-means clustering) is an iterative method that alternates sparse coding with respect to the current dictionary and an update of the dictionary; it is compatible with many existing pursuit methods [2]. Here, the training set is built from both corrupted-image and high-quality-image databases. The proposed system can predict whether the resultant image is Good (easy to match), Bad (average matching difficulty), or Ugly (difficult to match) [3].

SPARSE REPRESENTATION

Representing an image sparsely means representing it with only a few points, which greatly reduces the memory required to store it. Sparse representation should be employed to overcome shortcomings such as deformation, rotation, translation, and noise [8]. The concept of sparse representation [9] is briefly explained below.
In a sparse representation, only a few coefficients are kept and the rest are set to zero. Given a dictionary A, a data vector y is represented by solving equation (1):

min ||x||_0  subject to  y = Ax                (1)

where x is the coefficient vector and ||x||_0 counts its nonzero entries. As shown in Fig. 1, only some of the coefficients are retained, so the data vector can be represented using a few points.
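The sparse coding step described above can be sketched with a small matching-pursuit routine. This is an illustrative implementation in numpy, not the authors' code; it assumes the dictionary atoms are normalized to unit length.

```python
import numpy as np

def matching_pursuit(y, A, n_atoms):
    """Greedy sparse coding: repeatedly pick the dictionary atom most
    correlated with the residual and record its coefficient.
    Assumes the columns (atoms) of A have unit l2 norm."""
    x = np.zeros(A.shape[1])
    r = np.asarray(y, dtype=float).copy()
    for _ in range(n_atoms):
        corr = A.T @ r                 # correlation of every atom with the residual
        k = int(np.argmax(np.abs(corr)))
        x[k] += corr[k]
        r -= corr[k] * A[:, k]         # remove that atom's contribution
    return x
```

With an orthonormal dictionary, a vector built from two atoms is recovered exactly after two iterations, which is the sense in which "a few points" suffice.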

2 EXISTING TECHNIQUE
For general image compression, the two most commonly used transforms are i) the Discrete Cosine Transform [5] and ii) the Discrete Wavelet Transform [6]. DCT-based algorithms are used in JPEG [7], while DWT-based algorithms are used in JPEG 2000 [8] and SPIHT (Set Partitioning in Hierarchical Trees). The algorithms targeted at fingerprint images are WSQ (Wavelet Scalar Quantization) and CT (Contourlet Transform) [1]. These algorithms have a major disadvantage: they lack the ability to learn. The proposed method, based on sparse representation, can update itself.

T. Pavithra is currently pursuing a bachelor's degree in Electronics and Communication Engineering at Kalasalingam Institute of Technology, India. Ph: +91 9994711434. E-mail: tpavithra333@gmail.com
P. Anu Bharathy is currently pursuing a bachelor's degree in Electronics and Communication Engineering at Kalasalingam Institute of Technology, India. Ph: +91 9489009651. E-mail: anu20bharathy@gmail.com

Fig. 1. Sparse representation


4 STEPS INVOLVED IN FINGERPRINT COMPRESSION

4.1 Dictionary Construction

The K-SVD algorithm is used for dictionary construction. Initially, the training set is constructed from fingerprint samples: a sample is taken, divided into fixed-size square patches by a greedy algorithm, and the patches are added to the training set [2].
At first, a new patch is added to the empty dictionary. Each subsequent patch is compared with the patches already present; if it is similar to one of them it is discarded, otherwise it is added to the dictionary. The similarity between two patches is measured by solving the optimization problem in equation (2).
Fig. 2. Algorithm for fingerprint compression

min_t ||P1 − t·P2||_F^2                (2)

Here, || · ||_F^2 is the squared Frobenius norm, P1 and P2 are the matrices of the two patches, and t is a scaling factor, the parameter of the optimization problem.
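The optimization over t in equation (2) has a closed-form solution, t* = <P1, P2> / ||P2||_F^2, which a similarity check can use directly. A minimal sketch (function name and interface are illustrative, not from the paper):

```python
import numpy as np

def patch_distance(p1, p2):
    """Patch similarity as min over t of ||P1 - t*P2||_F^2.
    The optimal scale has the closed form t* = <P1, P2> / ||P2||_F^2,
    where <.,.> is the Frobenius inner product."""
    t = np.vdot(p2, p1) / np.vdot(p2, p2)
    return float(np.linalg.norm(p1 - t * p2, 'fro') ** 2)
```

A patch that is a scaled copy of another gives distance zero, so it would be discarded rather than added to the dictionary.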

4.2 Methods to construct the dictionary

Random method: fingerprint patches from the training samples are selected at random and arranged as the columns of the dictionary matrix.
Orientation method: the interval [0, 180] degrees is divided into equal-size sub-intervals, and each sub-interval is assigned an orientation (its mid-value). The foreground patches of a fingerprint have an orientation while the background patches don't, so the patches with the same orientation are gathered and arranged into the dictionary, taking the same number of patches for each interval.
K-SVD method: the dictionary is obtained by iteratively solving the optimization problem [10] given in equation (3):

min_{A,X} ||Y − AX||_F^2  subject to  ||X_i||_0 ≤ T for every i                (3)

Here, A is the dictionary, Y consists of the training patches, X holds the coefficients, X_i is the i-th column of X, and T is the sparsity limit. The coefficient matrix X is solved by the MP (Matching Pursuit) method, and SVD (Singular Value Decomposition) is used to update the dictionary [2].
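The SVD-based dictionary update mentioned above can be sketched as follows: for each atom, collect the patches that use it, form the residual with that atom's contribution added back, and replace atom and coefficients with the best rank-1 approximation. This is a generic illustration of one K-SVD sweep, not the paper's implementation:

```python
import numpy as np

def ksvd_update(Y, A, X):
    """One K-SVD dictionary-update sweep: for each atom k, form the
    residual of the training patches that use it and replace the atom
    and its coefficients with the rank-1 SVD approximation."""
    for k in range(A.shape[1]):
        omega = np.nonzero(X[k, :])[0]          # patches that use atom k
        if omega.size == 0:
            continue
        # residual over those patches, with atom k's contribution added back
        E = Y[:, omega] - A @ X[:, omega] + np.outer(A[:, k], X[k, omega])
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        A[:, k] = U[:, 0]
        X[k, omega] = s[0] * Vt[0]
    return A, X
```

Because each rank-1 replacement minimizes the restricted residual, the overall approximation error ||Y − AX||_F never increases during a sweep.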

4.4 Fingerprint compression

A new fingerprint is taken and divided into square patches of the same size as the training patches. The patch size governs the compression efficiency: larger patches yield higher compression but also enlarge the dictionary, so the patch size must be chosen with care.
For every patch, the mean value, the coefficients, their locations, and the number of atoms used are recorded. The mean value is subtracted from the patch so that the patches fit the dictionary better. Next, the sparse representation is computed by solving the l0 problem; coefficients whose magnitudes are below a given threshold are set to zero. In this way only a few coefficients are needed to represent each image patch, which is better than using a fixed number of coefficients.
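The per-patch record described above can be sketched as follows. A plain least-squares solve stands in here for the l0 pursuit of the paper, and the function name and interface are illustrative:

```python
import numpy as np

def encode_patch(patch, A, thresh):
    """Sketch of the per-patch record: mean value, surviving
    coefficients, and their locations.  A least-squares solve stands in
    for the l0 pursuit; coefficients below `thresh` are zeroed."""
    m = float(patch.mean())
    x = np.linalg.lstsq(A, (patch - m).ravel(), rcond=None)[0]
    x[np.abs(x) < thresh] = 0.0
    locs = np.nonzero(x)[0]
    return m, x[locs], locs
```

Decoding reverses the steps: multiply the surviving coefficients by their atoms, reshape, and add the mean back.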
4.5 Encoding and quantization

The atom number and mean value of each patch are coded separately. The atom numbers, mean values, coefficients, and their locations are coded with static arithmetic coders [12].
The Lloyd algorithm [11] is used to quantize the coefficients. In each block, the first coefficient is quantized with a larger number of bits than the other coefficients.
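The Lloyd algorithm alternates two steps, assigning samples to their nearest quantization level and moving each level to the centroid of its samples. A one-dimensional sketch (parameters are illustrative):

```python
import numpy as np

def lloyd_quantizer(samples, levels, iters=50):
    """1-D Lloyd algorithm: alternate (1) assigning each sample to the
    nearest quantization level and (2) moving each level to the mean
    of its assigned samples."""
    q = np.linspace(samples.min(), samples.max(), levels)
    for _ in range(iters):
        idx = np.argmin(np.abs(samples[:, None] - q[None, :]), axis=1)
        for k in range(levels):
            if np.any(idx == k):
                q[k] = samples[idx == k].mean()
    return q
```

On data clustered around a few values, the levels converge to the cluster centers, which is why Lloyd quantization wastes few bits on unused ranges.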
The image is also passed through a histogram equalizer, a low-pass filter, and a downsampler; this removes noise from the resultant image.

4.3 Training set construction

The third method, K-SVD, is the best for constructing the dictionary. A good dictionary requires a large number of training samples; the number is chosen based on the PSNR, and the higher the target PSNR, the larger the number of training samples [10]. To construct the test samples from fingerprints, the minutiae (minute details), ridge frequency, and orientation must be considered. To achieve good performance, the dictionary size is about 27,000. However, since the algorithm is able to learn, any number of samples can be added in the future [2].
Fig. 2 is a diagrammatic representation of the steps involved in fingerprint compression.

PROPOSED METHOD

Usually, fingerprints are obtained by rolling an inked finger on paper and then digitizing the paper with a scanner; in more advanced setups the fingerprints are captured with a projector and camera. Either way, not all fingerprints are of the same quality. Applications such as fingerprint-based authentication require good-quality images, while for survey purposes a bad-quality image is enough. It is therefore necessary to identify the quality of fingerprint images so that the low-quality ones can be enhanced, which increases the output efficiency.
A sharpness metric is needed because such metrics are sensitive to blur: a blurred image shows a drop in the metric value. Singular value decomposition is used to compute the metric [4].
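One way such an SVD-based sharpness metric can be realized is via the singular values of the stacked image gradients, which shrink when the image is blurred. This is a hedged sketch of the idea, not the exact metric of [4]:

```python
import numpy as np

def svd_sharpness(img):
    """Illustrative SVD-based sharpness metric: the dominant singular
    value of the stacked gradient field drops when the image is
    blurred, since blurring attenuates the gradients."""
    gy, gx = np.gradient(np.asarray(img, dtype=float))
    G = np.column_stack([gx.ravel(), gy.ravel()])
    return float(np.linalg.svd(G, compute_uv=False)[0])
```

Comparing the metric before and after blurring the same image gives the drop in value that the quality predictor looks for.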

The difference between matching pairs is computed by calculating the hue and saturation levels of the image [3].
The good-quality image is shown in Fig. 3.

The original image taken for compression is shown in Fig. 6.

Fig. 3. Good-quality image

The bad-quality image is shown in Fig. 4.

Fig. 6. Original image

The distribution of the data can be represented graphically using a histogram. Histogram equalization is a process of contrast adjustment using the histogram of an image. The image produced after histogram equalization is shown in Fig. 7.
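Histogram equalization maps each gray level through the normalized cumulative histogram, spreading a narrow intensity range over the full scale. A minimal sketch for 8-bit images (assumes the image is not constant):

```python
import numpy as np

def hist_equalize(img):
    """Histogram equalization for an 8-bit image: map each gray level
    through the normalized cumulative histogram (CDF)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()               # CDF at the first occupied level
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0), 0, 255)
    return lut.astype(np.uint8)[img]
```

A low-contrast fingerprint whose gray levels occupy only a narrow band is stretched to cover the whole 0-255 range, which makes the ridge/valley structure easier to segment.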

Fig. 4. Bad-quality image

The ugly image is shown in Fig. 5.

Fig. 7. Histogram equalization

Fig. 5. Ugly image

EXPERIMENTAL RESULTS

Fingerprint compression is carried out in several steps, including histogram equalization, ridge segmentation, low-pass filtering, sampling, and arrangement based on the orientation field.

Fingerprints are composed of ridges and valleys. Segmenting the fingerprint into smaller regions is necessary because it helps identify local image features such as the ridges and valleys. The ridge-based segmentation is shown in Fig. 8.
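A common way to carry out such ridge segmentation is by block-wise gray-level variance: blocks containing ridges vary strongly, while background blocks are nearly constant. A sketch under that assumption (the block size and threshold are illustrative, not values from the paper):

```python
import numpy as np

def ridge_segment(img, block=8, thresh=100.0):
    """Variance-based segmentation sketch: blocks whose gray-level
    variance exceeds `thresh` contain ridge structure (foreground);
    near-constant blocks are treated as background."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(0, h, block):
        for j in range(0, w, block):
            mask[i:i + block, j:j + block] = img[i:i + block, j:j + block].var() > thresh
    return mask
```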


Fig. 8. Ridge segmentation

Fig. 10. Low-pass filtered image

For fingerprints, recognition based on the orientation field is necessary, because the orientation field captures essential features of the image such as the core and the orientation angle. Fig. 9 shows the orientation field of the original image.
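The orientation field is commonly estimated from image gradients: per block, the doubled-angle average theta = 0.5 * atan2(2*sum(gx*gy), sum(gx^2 - gy^2)) gives the dominant gradient direction. A sketch of that standard estimator (block size and interface are illustrative):

```python
import numpy as np

def block_orientation(img, block=8):
    """Gradient-based orientation estimate per block, using the
    doubled-angle average theta = 0.5 * atan2(2*sum(gx*gy),
    sum(gx^2 - gy^2)); the ridge direction is perpendicular to it."""
    gy, gx = np.gradient(np.asarray(img, dtype=float))
    h, w = img.shape
    angles = {}
    for i in range(0, h, block):
        for j in range(0, w, block):
            bx = gx[i:i + block, j:j + block]
            by = gy[i:i + block, j:j + block]
            angles[(i, j)] = 0.5 * np.arctan2(2.0 * (bx * by).sum(),
                                              (bx * bx - by * by).sum())
    return angles
```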

Sampling converts analog values to discrete values. Downsampling reduces the bit rate so that the image can be transmitted over a smaller bandwidth. The downsampled image is shown in Fig. 11.
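In its simplest form, downsampling keeps every n-th pixel in each direction; in the pipeline above it follows the low-pass filter, which limits aliasing. A minimal sketch:

```python
import numpy as np

def downsample(img, factor=2):
    """Keep every `factor`-th pixel in each direction, shrinking the
    image (and hence the bit rate) by factor^2."""
    return img[::factor, ::factor]
```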

Fig. 11. Downsampled image

Fig. 9. The orientation field

A low-pass filter is used to remove noise from the image: it retains the low-frequency information while attenuating the high-frequency information. The low-pass filtered image is shown in Fig. 10.
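A simple instance of such a low-pass filter is a separable k x k mean filter; the paper does not specify which filter it uses, so this is only an illustrative choice:

```python
import numpy as np

def box_blur(img, k=3):
    """Separable k x k mean filter, a simple low-pass: attenuates
    high-frequency noise while keeping the low-frequency ridge
    pattern."""
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode='same'),
                              1, np.asarray(img, dtype=float))
    out = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode='same'),
                              0, out)
    return out
```

Averaging over a k x k neighborhood reduces the variance of independent pixel noise by roughly a factor of k^2 in the interior of the image.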


CONCLUSION

A new fingerprint compression algorithm has been introduced. It works best at high compression rates compared with existing algorithms such as JPEG, JPEG 2000, and WSQ, and it retains more minutiae of the image after reconstruction. The algorithm is more complex because of its block-by-block processing, but this complexity is small compared with that of the JPEG method. Since the quality of the fingerprint image is predicted beforehand, the overall efficiency can be greatly increased: if an image is of poor quality, it can be improved by suitable enhancement techniques.

REFERENCES
[1] Guangqi Shao, Yanping Wu, Yong A, Xiao Liu, and Tiande Guo, "Fingerprint compression based on sparse representation," IEEE Trans. Image Process., vol. 23, no. 2, pp. 489-501, Feb. 2014.
[2] M. Aharon, M. Elad, and A. M. Bruckstein, "K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation," IEEE Trans. Signal Process., vol. 54, no. 11, pp. 4311-4322, Nov. 2006.
[3] Gaurav Aggarwal, Soma Biswas, Patrick J. Flynn, and Kevin W. Bowyer, "Predicting Good, Bad and Ugly match pairs."
[4] Jitendra Choudary, Sanjeev Sharma, and Jitendra Singh Verma, "A new framework for improving low quality fingerprint images," Int. J. Comp. Tech. Appl., vol. 2, no. 6, pp. 1859-1866.
[5] N. Ahmed, T. Natarajan, and K. R. Rao, "Discrete cosine transform," IEEE Trans. Comput., vol. C-23, no. 1, pp. 90-93, Jan. 1974.
[6] C. S. Burrus, R. A. Gopinath, and H. Guo, Introduction to Wavelets and Wavelet Transforms: A Primer. Upper Saddle River, NJ, USA: Prentice-Hall, 1998.
[7] W. Pennebaker and J. Mitchell, JPEG: Still Image Compression Standard. New York, NY, USA: Van Nostrand Reinhold, 1993.
[8] M. W. Marcellin, M. J. Gormish, A. Bilgin, and M. P. Boliek, "An overview of JPEG-2000," in Proc. IEEE Data Compress. Conf., Mar. 2000, pp. 523-541.
[9] Ke Huang and Selin Aviyente, "Sparse representation for signal classification," Michigan State University, MI 48824.
[10] Cristian Rusu and Bogdan Dumitrescu, "Stagewise K-SVD to design efficient dictionaries for sparse representations," July 18, 2012.
[11] S. Lloyd, "Least squares quantization in PCM," IEEE Trans. Inf. Theory, vol. 28, no. 2, pp. 129-137, Mar. 1982.
[12] K. Sayood, Introduction to Data Compression, 3rd ed. San Mateo, CA, USA: Morgan Kaufmann, 2005, pp. 81-115.

