Académique Documents
Professionnel Documents
Culture Documents
Ahmed Bouridane
123
Ahmed Bouridane
Queens University, Belfast
Department of Computer Science
Faculty Engineering
Belfast
United Kingdom BT7 1NN
a.bouridane@qub.ac.uk
ISSN 1860-4862
ISBN 978-0-387-09531-8
e-ISBN 978-0-387-09532-5
DOI 10.1007/978-0-387-09532-5
Springer Dordrecht Heidelberg London New York
Library of Congress Control Number: 2009927770
c Springer Science+Business Media, LLC 2009
All rights reserved. This work may not be translated or copied in whole or in part without the written
permission of the publisher (Springer Science+Business Media, LLC, 233 Spring Street, New York,
NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in
connection with any form of information storage and retrieval, electronic adaptation, computer
software, or by similar or dissimilar methodology now known or hereafter developed is forbidden.
The use in this publication of trade names, trademarks, service marks, and similar terms, even if
they are not identified as such, is not to be taken as an expression of opinion as to whether or not
they are subject to proprietary rights.
While the advice and information in this book are believed to be true and accurate at the date of
going to press, neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or omissions that may be made. The publisher makes no warranty, express or
implied, with respect to the material contained herein.
Printed on acid-free paper
Springer is part of Springer Science+Business Media (www.springer.com)
Preface
The field of security has witnessed an explosive growth during the last years, as
phenomenal advances in both research and applications have been made. Biometric
and forensic imaging applications often involve photographs, videos and other image impressions that are fragile and include subtle details that are difficult to see. As
a developer, one needs to be able to quickly develop sophisticated imaging applications that allow for an accurate extraction of precious information from image data
for identification and recognition purposes. This is true for any type of biometric
and forensic image data.
The applications covered in this book relate to Biometrics, Watermarking and
Shoeprint recognition for forensic science. Image processing transforms using Discrete Fourier Transform, Discrete Wavelet Transforms Gabor Wavelets, Complex
Wavelets, Scale Invariant Feature Transforms and Directional Filter banks are used
in data modelling process for either feature extraction or data hiding tasks. The
emphasis is on the methods and the analysis of data sets including comparative
studies against existing and similar techniques. To make the underlying methods
accessible to a wider audience, we have stated some of the key mathematical results
given in a logical structure of the development.
For example, biometric based methods are emerging as the most reliable solutions for authentication and identification applications where traditional passwords
(knowledge-based security) and ID cards (token-based security) have been used so
far to access restricted systems. Automated biometrics deal with physiological or
behavioural characteristics such as fingerprints, iris, voice and face that can be used
to authenticate a persons identity or establish an identity within a database. With
rapid progress in electronic and Internet commerce, there is also a growing need
to authenticate the identity of a person for secure transaction processing. Current
biometric systems make use of fingerprints, hand geometry, iris, retina, face, facial
thermograms, signature gait, and voiceprint to establish a persons identity. While
biometric systems have their limitations they have an edge over traditional security
methods in that they cannot be easily stolen or shared. Besides bolstering security,
biometric systems also enhance user convenience by alleviating the need to design
and remember passwords.
Driven by the urgent need to protect digital media content that is being widely
and wildly distributed and shared through the Internet by an ever-increasing number
vii
viii
Preface
Preface
ix
This brings some novelty of the topics through a thorough analysis of the results
of the implementation. My indebtedness to those students, in particular W R Boukabou, M Gueham, M Laadjel, M Nabti, O Nibouche, I Thompson, H Su, K Zebbiche
and A Baig of the Speech, Image and Vision Systems (SIVS) group at the School
of Electronics, Electrical Engineering and Computer Science, Queens University
Belfast.
The book is organised as follows. Chapter 1 starts by defining the biometric technology including the characteristics required for a viable deployment using various
operation modes such as verification, identification and watch-list. A number of
currently used biometric modalities are also described with some emphasis of few
emerging ones. Then the various steps of a typical biometric recognition system
are discussed in detail. For example, data acquisition, image localisation, feature
extraction and matching are all defined and the current methods employed for their
implementation and deployment discussed and contrasted. The chapter concludes
by briefly highlighting the need to use appropriate datasets for the evaluation of a
biometric system.
Chapter 2 introduces the notion of data representation in the context of biometrics. The various stages of a typical biometric system are also enumerated and discussed and the most commonly deployed biometric modalities are stated. The chapter also examines various aspects related to image data representation and modelling
for feature extraction and matching. Various methods are then briefly discussed and
brought within the context of a biometric system. For example, image data formats,
feature sets and system testing and performance evaluation metrics are detailed.
In Chapter 3 recent advances in enhancing the performance of face recognition
using the concept of directional filter banks is discussed. In this context, the directional filter banks are investigated as pre-processing phase in order to improve
the recognition rates of a number of different and existing algorithms. The chapter
starts by reviewing the basic face recognition principles and enumerates the various
steps of a face recognition system. Four algorithms representing both Component
and Discriminant Analysis approaches, namely: PCA, ICA (FastICA), LDA and
SDA are chosen for their proven popularity and efficiency to demonstrate the usefulness of the directional filter bank method. The mathematical models behind these
approaches are also detailed. Then the proposed directional filter bank method is
described and its implementation discussed. The results and their analysis are finally
assessed using two well known face databases.
Chapter 4 is concerned with recent advances in iris recognition using a mutiscale
approach. State of the art works in the area is first highlighted and discussed and a
detailed review of the various steps of an automatic iris recognition system enumerated. Proposed developments are then detailed for both iris localisation and classification using an integrated multiscale wavelet approach. Extensive experimentation
is carried out and a comparative analysis with some state of the art approaches
given. The chapter concludes by giving some future directions to further enhance
the results obtained.
In chapter 5, the use of complex wavelets for image and video watermarking is
described. The theory of complex wavelets and their features are first highlighted.
Preface
The concept of spread transform watermarking is then given in detail and its combination with the complex wavelet transforms detailed. Information theoretic capacity
analysis for watermarking with complex wavelets is then elucidated. The chapter
concludes with some experiments and their analysis to demonstrate the improved
levels of capacity that can be achieved through the superior feature representation
offered by complex wavelet transforms.
Chapter 6 discusses the problem of one-bit watermark detection for protecting fingerprint images. Such a problem is theoretically formulated based on the
maximum-likelihood scheme, which requires an accurate modeling of the host data.
The watermarking is applied into the Discrete Wavelet Transform (DWT) due to the
vavious advantages provided by this transform. First, a statistical study of DWT coefficients is carried out by investigating and comparing three distributions, namely,
the generalized Gaussian, Laplacian and Cauchy models. Then, the performances
of the detectors based on these models are assessed and evaluated through extensive
experiments. The results show that the generalized Gaussian is the best model and
its corresponding detector yields the best detection performance.
Chapter 7 is intended to introduce the emerging shoemark evidence for forensic
use. It starts by giving a detailed background of the contribution of shoemark data to
scene of crime officers including a discussion of the methods currently in use to collect shoeprint data. Methods for the collection of shoemarks will also be detailed and
problems associated with each method highlighted. In addition, the chapter gives a
detailed review of existing shoemark classification systems.
In Chapter 8, methods for automatically classifying shoeprints for use in forensic
science are presented. In particular, we propose two correlation based approaches
to classify low quality shoeprints: i) Phase-Only Correlation (POC) which can be
considered as a matched filter, and ii) Advanced Correlation Filters (ACFs). These
techniques offer two primary advantages: the ability to match low quality shoeprints
and translation invariance. Experiments were conducted on a database of images of
100 different shoes available on the market. For the experimental evaluation, challenging test images including partial shoeprints with different distortions (such as
noise addition, blurring and in-plane rotation) were generated. Results have shown
that the proposed correlation based methods are very practical and provide high
performance when processing low quality partial-prints.
Chapter 9 is concerned with the retrieval of scene-of-crime (or scene) shoeprint
images from a reference database of shoeprint images by using a new local feature
detector and an improved local feature descriptor. Similar to most other local feature
representations, the proposed approach can also be divided into two stages: (i) a set
of distinctive local features is selected by first detecting scale adaptive Harris corners
where each corner is associated with a scale factor. This allows for the selection of
the final features whose scale matches the scale of blob-like structures around them
and (ii) for each feature, an improved Scale Invariant Feature Transform (SIFT)
descriptor is computed to represent it. Our investigation has led to the development
of two novel methods which are referred to as the Modified Harris-Laplace (MHL)
detector and the Modified SIFT descriptor, respectively.
Preface
xi
Contributions:
Chapter 2: Data Representation and Analysis
A. Baig and A. Bouridane
Chapter 3: Improving Face Recognition Using Directional Faces
W. R. Boukabou and A. Bouridane
Chapter 4: Recent Advances in Iris Recognition: A Multiscale Approach
M. Nabti and A. Bouridane
Chapter 5: Spread Transform Watermarking Using Complex Wavelets
I. Thompson and A. Bouridane
Chapter 6: Protection of Fingerprint Data Using Watermarking
K. Zebbiche and A. Bouridane
Chapter 7: Shoemark Recognition for Forensic Science: An Emerging
Technology
H. Su and A. Bouridane
Chapter 8: Techniques for Automatic Shoeprint Classification
M. Gueham and A. Bouridane
Chapter 9: Automatic Shoeprint Image Retrieval Using Local Features
H. Su and A. Bouridane
Belfast, United Kingdom, 2008
Ahmed Bouridane
Contents
11
11
12
13
14
15
16
17
17
18
19
xiv
Contents
3.2
22
22
23
26
26
27
28
29
31
31
33
37
37
38
39
41
41
43
45
45
49
49
51
52
52
52
53
55
55
57
65
67
68
68
70
71
72
72
72
73
73
74
75
75
Contents
xv
xvi
Contents
Contents
xvii
Chapter 1
1.1 Introduction
Biometric-based security has been researched and tested for a few decades, but has
only recently entered into the public consciousness because of high-profile applications especially since the events of 9/11. Many companies and government departments are now implementing and deploying biometric technologies to secure areas,
maintain security records, protect borders and maintain law enforcement at borders
and entry points. Biometrics is the science of verifying the identity of an individual
through his/her physiological measurements, e.g. fingerprints, hand geometry, etc.
or behavioural traits, e.g. voice and signature. Since biometric identifiers are associated permanently with the user they are more reliable than token- or knowledgebased authentication methods such as identification card (that can be lost or stolen),
password (that can be forgotten), etc.
Biometric recognition is concerned with methods and tools for the verification
and recognition of a persons identity by means of unique appearance or behavioural
characteristics. This chapter starts by defining the biometric technology including
the characteristics required for a viable deployment using various operation modes
such as verification, identification and watch-list. A number of currently used biometric modalities are also described with some emphasis on a few emerging ones.
Various steps of a typical biometric recognition system are then discussed in detail.
For example, data acquisition, image localisation, feature extraction and matching
are all defined and the current methods employed for their implementation and
deployment are assessed and contrasted. The chapter concludes by briefly highlighting the need to use appropriate data sets for the evaluation of a biometric
system.
The term automated methods means that biometric technologies are implemented completely by a machine (but not always), generally a digital computer.
The second important part from the definition is physiological or behavioural
characteristic, meaning that biometrics tends to recognise people from their biological and behavioural characteristics. In other words, biometrics defines something you are, in contrast to other methods of identification such as something you
have (e.g. cards, keys) or something you know (password, PIN number).
1.2
Definition of Biometrics
Fig. 1.1 Examples of biometric traits that can be used to recognise an individual. Illustrations
in the figure include ear, iris, hand geometry, face, speech, vein, fingerprint, gait and palmprint
traits
admissible [2]. The following sections briefly describe some of the most commonly
used including some emerging biometric traits:
Fingerprint recognition has been used as a biometric trait for many decades. The identification accuracy using fingerprints has been shown to be very high [4]. A fingerprint is the
pattern of ridges and valleys on the surface of a fingertip whose formation is determined
during the first seven months of foetal development. It has been empirically determined that
the fingerprints of identical twins are different and so are the prints on each finger of the
same person [5].
Fingerprint biometrics currently has three main applications: (i) large-scale automated finger imaging systems (AFIS) generally used for law enforcement purposes; (ii) fraud preven-
Iris recognition uses the iris patterns which are the coloured part of the eye,
although the colour has nothing to do with the biometric trait. Iris patterns of a
persons left and right eyes are different, and so are the iris patterns of different
individuals including identical twins [6]. Iris recognition is usually employed as
a verification process due to its low false acceptance rate.
Hand geometry recognition is based on a number of measurements taken from
the human hand such as its shape, size of palm (but not its print), and the lengths
and widths of the fingers. This method is very easy to deploy and is not computationally expensive. However, its low distinctiveness degree and the variability
of its size with age pose major problems [7]. This technology is not very suitable
for identification applications.
Voice recognition is both a physical and behavioural biometric modality. The
physical features of an individuals voice are based on the shape and size of the
appendages (vocal tracts, mouth, nasal cavities, and lips) which are invariant for
an individual, but the behavioural aspect of the speech changes over time due
to age, medical conditions, emotional state, etc. [8]. Speaker recognition is most
appropriate in telephone-based applications but the quality of the voice signal
degrades by the communication channel. The disadvantages of this biometric
trait are (i) it is not suitable for large-scale recognition and (ii) the speech features
are sensitive to the background noise.
Signature recognition is defined as the process of verifying the writers identity
by checking his/her signature against samples kept in a database. The result of
this process is usually a number between 0 and 1 which represents a fit ratio (1
for match and 0 for mismatch). The threshold used for the confirmation/rejection
decision depends on the nature of the application. The distinctive biometric patterns of this modality are the personal rhythm, acceleration and pressure flow
when a person types a specific word or group of words (usually the hand signature of the individual).
Keystroke recognition attempts to assess the users typing style such as the dwell
time (how long each key is depressed), flight time (time between key strokes)
and typical typing errors. Usually this security technology is deployed for computer access within an organisation. The distinctive and behavioural characteristics measured by keystroke recognition also include the cumulative typing speed;
the frequency of the individual in using other keys on the keyboard, such as the
number pad or function keys; and the sequence utilised by the individual when
attempting to type a capital letter.
Gait recognition is the process of identifying an individual by the manner in
which they walk. This modality is less unobtrusive than most others and as such
offers the possibility to identify people at distances without any interaction or
co-operation from the subject thus making it an attractive solution for identi-
1.3
Recognition/Verification/Watch-List
1.3 Recognition/Verification/Watch-List
It is commonly known that a typical biometric recognition scenario, as all biometric
applications, can be classified into one of two types: verification (or authentication)
and identification (or recognition). In some applications, a third scenario may be
added. For example, Phillips et al. in the Face Recognition Vendor Test (FRVT) [9]
define another type called the watch-list.
1.4
Image
Acquisition
Biometric image
Localisation
Biometric
Sub-image
Normalisation
and
Pre-processing
Normalise
Image
Feature
Extraction
Feature
Vector
Matching
Database
Result
body of the biometric image plays a key role in the determination of biometric features, especially for face/iris/palmprint recognition systems based on the
frontal views of images, it may be very helpful if the pre-processing module
normalises the shifts and rotations in the main position.
Image size normalisation: This process aims to align images such that they are
of the same size and are located at the same position and orientation. Resizing is
then performed to set the size of an acquired image to a default image size, say
of 128128, 256256, etc. This step is mostly encountered in systems where
images are processed globally.
Enhancement: This step is not always required but it can be highly useful in two
cases: (i) median filtering for noisy images especially obtained from a camera or
from a frame grabber and (ii) high-pass filtering to highlight the contours of the
image to further improve edge detection performances.
Background removal: This process deals primarily with the most useful information where background should be removed. Masking also can be used to eliminate the sections of the image that are not part of the main image area. This is
done to ensure that the biometric recognition system does not respond to features
corresponding to background, hair, clothing, etc.
1.5
Summary
Hybrid approaches: Just as the human perception system uses both local features
and the whole image region to recognise a biometric image, a machine recognition system should use both.
1.4.4 Matching
The fourth step of a biometric recognition system is to compare the template generated in step three against a database of known features of the biometric application.
In an identification application, this process yields scores that indicate how closely
the generated template matches each of those in the database. In a verification application, the generated template is only compared to one template in the database to
claim the true or false identity of the person.
Finally, the system should determine if the produced score is sufficiently large to
declare a match. The rules governing the declaration of a match are of two types:
(i) manual, where the end-user has to determine if the result is satisfying or not, and
(ii) automatic, in which case the measured distance (the matching score) should be
compared to a predefined threshold so that a match is declared only if the measured
score is higher than the threshold.
1.4.5 Databases
To build/train a biometric recognition algorithm, it is necessary to use a standard test
data set as used by researchers and end-users in order to be able to directly assess and
compare the results. A database is a collection of one or more computer files. For
biometric systems, these files could consist of biometric sensor readings, templates,
match results, related end- user information etc. While there exist many databases
currently in use and which can be found on the Internet or available from academic
or industrial institutions, the choice of an appropriate database should be made based
on the targeted biometric application (face, iris, palmprint, speech, etc.). Another
way is to select the data set specific to the application at hand; for example, how the
algorithm responds to biometric images under varying environment conditions or
how the algorithm operates under different operating scenarios by varying the setup
variables/values.
1.5 Summary
Biometrics aims to automatically identify individuals based on their unique physiological or behavioural traits. A number of civilian and commercial applications
of biometrics-based identification have been deployed in real problems and many
are emerging. These deployments are intended to strengthen the security and convenience in their respective environments. However, a number of legitimate concerns
are also being raised against the use of biometrics for various applications such as
10
References
1. J. Wayman, A. K. Jain, D. Maltoni and D. Maio, Eds., Biometric Systems:Technology,
Design and Performance Evaluation Springer-Verlag, London, UK, 2005.
2. A. K. Jain, P. Flynn and A. A. Ross, Eds., Handbook of Biometrics Springer Science Business Media, LLC, New York, USA, 2008.
3. A. K. Jain, R. Bolle and S. Pankanti, Eds., Biometrics: Personal Identification in Networked
Society Kluwer Academic Publishers, London, UK, 1999.
4. C. Wilson, A. R. Hicklin, M. Bone, H. Korves, P. Grother, B. Ulery, R. Micheals, M. Zoep, S.
Otto and C. Watson, Fingerprint Vendor Technology Evaluation 2003: Summary of results
and analysis report Tech. Report, NIST Technical Report NISTIR 7123, National Institute of
Standards and Technology, June 2004.
5. D. Maltoni, D. Maio, A. K. Jain and S. Prabhakar, Handbook of Fingerprint Recognition
Springer-Verlag, London, UK, 2003.
6. J. D. Woodward, C. Horn, J. Gatune and A. Thomas, Biometrics: A Look at Facial recognition RAND Public Safety and Justice for the Virginia StateCrime Commission, 2003.
7. R. Zunkel, Biometrics: Personal Identification in Networked Society Chapter Hand Geometry Based Authentication, pp. 87102, Kluwer Academic Publishers, London, UK, 1999.
8. J. P. Campbell, Speaker recognition: a tutorial Proceedings of the IEEE, vol. 85, no. 9, pp.
14371462, September 1997.
9. P. J. Phillips, G. Grother, R. J. Micheals, D. M. Blackburn, E. Tabassi and J.M. Bone FRVT
2002: Overview and summary, http://www.frvt.org/frvt2002/documents.htm.
10. D. M. Blackburn, Biometrics 101, version 3.1, vol. 12 Federal Bureau of Investigation,
March 2004.
Chapter 2
2.1 Introduction
The last few years have witnessed the emergence of new tools and means for the
scientific analysis of image-based information for security and forensic science and
crime prevention applications. For instance, images can now be captured, viewed
and analysed at the scenes or in laboratories within minutes whilst simultaneously
making the images available to other experts via fast and secure communication
links on the Internet, thereby making it possible to share information for forensic
and security intelligence and crime linking purposes. In addition, these tools have
a strong link with other aspects of investigation, such as image capture, information interpretation and evidence gathering. They are helpful for minimization of
human error and analysis of data. Although there exist a number of application scenarios, the analysis of data is usually based on a conventional biometric system.
Therefore, the following discussion on a biometric system is given as it would be
a starting point for any other imaging system for use in security and/or forensic
science.
A standard Biometric Identification System consists of the following three
phases: Data Acquisition, Feature Extraction and Matching, and operates in two
distinct modes: Enrolment Mode or Identification Mode [1]. The Data Acquisition stage is used in the enrolment mode to establish the database of users and
their related biometric data whereas in Identification mode it is used to obtain a
reference biometric from the user. This reference biometric is then processed at
the Feature Extraction phase to obtain unique and comparable features. These features are then compared in the Matching phase with the related features of all
the biometric templates in the database to establish or refute the identity of the
user. Figure 2.1 depicts a block diagram view of a basic Biometric Identification
System.
The design of any biometric system is based on decisions regarding the selection
of appropriate modules for each of these processes [1, 3]. Details of these processes
and modules included within these processes along with the critical issues that need
to be addressed before a design decision is made are described below.
11
12
Fingerprint,
Iris, ..etc
Biometric
Sensor
Feature Extraction
Enrolment
Fingerprint,
Iris, ..etc
Biometric
Sensor
Feature Extraction
ID
Database
Matching
Identification/Authentication
Result
2.2
Data Acquisition
13
Table 2.1 Some commonly used biometrics
Physical
Behavioural
Fingerprint:
Most commonly used
Higher false accept rate
Gait:
Face:
Voice:
Easiest to acquire
Difficult to compar
Hand Geometry:
Robust under different conditions
Changes occur with age
Palm Print:
Bigger area of interest
Availability of data set
Iris:
Very Low false accept rate
Difficult to acquire
Ear:
Robust to change
Difficult to acquire
focuses on access control to critical area or application cost may not necessitate a significant consideration but uniqueness and circumvention may be important. Some of the most commonly used biometric traits are identified in the
following Table 2.1.
14
As most biometric systems are imaging based, the quality and maintenance of
the raw captured biometric image also plays an important role in the development
of a strong biometric identification system.
2.3
Feature Extraction
15
reducing the database size and also increasing the access speed. One of the lesser
known advantages of using feature sets is that it is not possible to recreate the actual
raw biometric data from the feature set therefore saving the feature set only provides
personal data protection.
It should be kept in mind that to maintain system openness, the feature sets should
be stored in one of the standard formats like the ones defined in ANSI/NBS ICST
1-1986 for minutiae, ANSI/NIST ITL 1a-1997 for Faisal Feature Set, ANSI/NIST
ITL 1-2006 for Iris, etc. [4, 5]. Using these standards allows for an easy expansion
and upgrade of the system at later times.
16
On the other hand, Local Feature Extractors focus on the chunks of image data.
These algorithms work on small windows within the images and extract the relevant features, e.g. minutiae extraction from skeletonised and binarised fingerprint
images.
A feature extractor algorithm selection is governed mainly by the type of application that the system is being designed for. Applications requiring more accuracy
and security should have a robust and exhaustive feature extractor. However, for
faster applications a simpler algorithm might be the best option. Ideally, the feature
extractor should be very robust, accurate and fast but practically this is not possible.
It is therefore almost always a compromise between accuracy and speed. It is advisable to evaluate multiple feature extraction algorithms to find the optimal algorithm
for the desired application.
The feature extractor algorithm selection also depends upon the type of matcher
being used in the system. The feature extractor should generate output in the format
that the matcher is able to comprehend and process.
As mentioned before, to maintain openness of the system it is prudent to ensure
that the output of the feature extractor should follow a standard format.
2.4 Matcher
A matcher algorithm takes the reference feature set and compares it with all the
template feature sets in the database to provide a matching score for each pair. It
then selects the best templatereference pair and outputs the details as its decision.
Different types of matchers are usually used depending upon the type and format
of the feature set as well as the type of application at hand. Matchers are commonly
categorised into two categories: Time Domain Matchers and Frequency Domain
Matchers [13].
Time Domain Matchers work in the spatial domain and the feature sets for these
types of matchers are generated directly from the raw images.
Frequency Domain Matchers operate in the frequency domain and the feature
sets for these types of matchers are generated by first transforming the image into
the frequency domain and then selecting the features, e.g. wavelets-based matchers,
Fourier transform-based matchers, etc.
However, it is worth noting that correlation-based matchers are the most commonly used matcher algorithms. Similarly, distance-based matchers and supervised
learning or pattern recognition-based matching is also widely used.
Pattern recognition-based matching finds the correct match by training on known
correct and incorrect matching feature sets. In this type of matching the training
process is usually computationally intensive, but if this process is efficiently done,
matching can work very fast and provide highly accurate results.
The selection of the optimal matcher depends upon the application for which the
system is being developed as well as the type of feature sets available for matching.
In addition, information regarding the desired accuracy and the speed required also
plays an important role when selecting a matcher algorithm.
2.6
Performance Evaluation
17
18
Matcher accuracy Accuracy is measured on the test data set to discover how
many of the feature sets are correctly matched by the system. If the genuine score
is above an operating threshold of the system, the feature set is considered to
be correctly matched. Matcher accuracy is usually displayed as a percentage of
matches.
False accept rate (FAR) If an imposter score is above the operating threshold it is called a false accept. FAR, therefore, means that the system accepted an
imposter as a genuine user. FAR is one of the major performance matrixes that have
to be closely evaluated. In fact, effort should be made to keep it as close to zero as
possible.
False reject rate (FRR) If a genuine score is below the threshold then it is called
a false reject. Thus, false reject rate means that the system rejected a genuine user
as an imposter. FRR should ideally be as close to zero as possible but in most access
control applications it is not as critical as FAR. If a user is rejected as an imposter
he/she can always try again but if an imposter is accepted as a genuine user the
integrity of the complete system is compromised.
False alarm rate A statistic used to measure biometric performance when operating in the watch-list (sometimes referred to as open-set identification) task. This is
the percentage of times an alarm is incorrectly sounded on an individual who is not
in the biometric systems database (the system alarms on John when John isnt in
the database) or an alarm is sounded but the wrong person is identified (the system
alarms on Peter when Peter is in the database, but the system thinks Peter is Daniel).
Equal error rate (ERR) It is the point on the ROC curve where FAR and FRR
are equal. For a high-performance system ERR should be as low as possible.
Most vendors provide the performance evaluation in terms of accuracy and ERR.
Some other evaluation criteria are
Failure to capture rate (FCR) FCR pertains to the amount of times a sensor
is unable to capture an image when a biometric trait is presented to it. The FCR
increases with wear and tear to the sensor module. If the FCR increases above a
certain threshold it is advisable to replace the sensor module.
Failure to enrol (FTE) FTE indicates the number of users that were not enrolled
in the system. FTE is usually related to the quality of the biometric image. In most
cases, a system is trained to reject poor quality images. This helps in improving the
accuracy of the system and reducing the FAR and FRR. Every time an image is
rejected the FTE is increased. A trade-off between quality and FTE is required if
the system is to be accepted by the users.
2.7 Conclusion
To develop a strong biometric system it is imperative to select a very stable data
acquisition system and a very secure, fast and robust database. Feature Extractor
and Matcher selection will directly impact the user acceptance of the system and
the selection is based on the type of application.
References
19
References
1. Arun A. Ross, Patrick Flynn and Anil K. Jain, Handbook of Biometrics ISBN: 978-0-38771040-2.
2. A. K. Jain, A. Ross and S. Prabhakar, An introduction to biometric recognition, IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 1, January 2004.
3. J. G. Daugman, Biometric Decision Landscape, Technical Report No. TR482. University of
Cambridge Computer Laboratory, 1999.
4. Data Format for Information Interchange Fingerprint Identification, ANSI/NBS ICST 11986.
5. Data Format for Information Interchange Data Format for the Interchange of Fingerprint,
Facial & SMT Information, ANSI/NIST ITL 1a-1997.
6. H. Meng and C. Xu, Iris Recognition Algorithm Based on Gabor Wavelet Transform, IEEE
International Conference on Mechatronics and Automation, 2006.
7. J. Wayman, A. Jain, D. Maltoni and D. Maio, Biometric Systems Technology, Design and
Performance Evaluation, ISBN: 1852335963.
Chapter 3
3.1 Introduction
Face recognition is one of the most popular applications in image processing and
pattern recognition. It plays a very important role in many applications such as card
identification, access control, mug shot searching, security monitoring and surveillance problems.
There are several problems that make automatic face recognition a very challenging task. The input of a persons face to a recognition system is usually acquired
under different conditions from those of the corresponding image in the database.
Therefore, it is important that an automatic face recognition system can deal with
numerous variations of images of a face. The image variations are usually due to
changes in pose, illumination, expression, age, disguise, facial hair, glasses and
background.
Much progress has been made towards recognising faces under controlled conditions as reported in [1, 2], especially for faces under normalised pose and lighting
conditions and with neutral expression.
The Eigenfaces method [3], based on Principal Component Analysis (PCA), is
one of the most popular methods in face recognition. Its principal idea is to find
a set of orthogonal basis images (called eigenfaces) so that in this new basis, the
image coordinates (the PCA coefficients) are uncorrelated. Independent Component
Analysis (ICA) [4] is one generalisation of PCA and assumes that image data is
independent, and not only uncorrelated as in PCA. Fisherface technique [5] based
on Linear Discriminant Analysis (LDA) is an other popular method. It considers that
each face image in the training set is of a known class and uses this information in
the classification step. Subclass Discriminant Analysis (SDA) is a recent algorithm
devised by Zhu and Martinez [6] where each class of the LDA method is subdivided
into a number of subclasses.
However, recognition of face images acquired in an outdoor environment with
changes in illumination and/or pose remains problematic. Researchers have proposed the utilisation of a pre-processing step in order to extract more discriminant
features for use in the recognition step. Gabor Filter Bank (GFB) is one of the most
well-known methods used for this purpose and many algorithms have been proposed [7, 2]. However, as described in [8], the use of a GFB inherently results in
A. Bouridane, Imaging for Forensics and Security, Signals and Communication
Technology, DOI 10.1007/978-0-387-09532-5 3,
C Springer Science+Business Media, LLC 2009
21
22
some overlapping and missing subband regions. The Directional Filter Bank (DFB),
on the other hand, is a contiguous subband representation that preserves all image
information. Accordingly, a DFB can represent linear patterns, such as those availble around eyes, nose and mouth area, more effectively, than a GFB [9].
This chapter discusses the use of a DFB pre-processing phase in order to improve
the recognition rates of a number of different algorithms. Four algorithms representing both Component and Discriminant Analysis approaches have been selected to
demonstrate the efficiency of the DFBs. In this work, the algorithms PCA, ICA
(FastICA [10]), LDA and SDA are chosen for their popularity and efficiency.
3.2
23
The recognition test works from the assumption that all faces being tested are
of known persons. The percentage of correct identifications is reported as the Correct(or Genuine) Identification Rate (CIR) while the percentage of false identifications is reported as the False Identification Rate (FIR).
3.2.1.3 The Watch-List: Are You Looking for Me?
One important application of the Watch-List task could be comparing a suspect
flight passenger against a database of known terrorists. In this scenario, the person
does not claim any identity, it is an open-universe test. The test person may or may
not be in the system database. The biometric sample of this individual is compared
with the stored samples in a Watch-List database to determine whether the individual concerned is present in the Watch-List. A similarity score is reported for each
comparison. These similarity scores are then numerically and orderly ranked. If a
similarity score is higher than a preset threshold, an alarm is raised and the system
assumes that this person is present in the Watch-List.
There are two factors of interest for a Watch-List application [12]:
Detection and identification rate: the percentage of times the system raises the
alarm and correctly identifies a person on the Watch-List.
False alarm rate: the percentage of times the system raises the alarm for an individual that is not on the Watch-List.
In an ideal system, one wants the false alarm and the detection and identification
rates to be 0 and 100%, respectively.
24
Therefore, face detection is a very important task of any face recognition system
and an efficient detection would enhance the recognition results. The challenges
associated with face detection can be attributed to many factors such as: pose, presence or absence of structural components (facial hair, glasses, et.), facial expression,
occlusions, image orientation, imaging conditions (lighting, camera characteristics).
Many approaches have been proposed to address the face detection problems
[14, 13], and summary is depicted in Table 3.1
3.2.2.2 Normalisation and Pre-processing
The aim of this step is to enhance the quality of the captured images due to one or
many of the factors mentioned in the previous section with a view to allow for a
better recognition power of the recognition system. Depending on the application at
hand, some or all of the following pre-processing techniques may be implemented
in a face recognition system:
Geometrical alignment (translation, rotation).
Image size normalisation.
Filtering (median filtering, high-pass filtering).
3.2
25
Representative work
Knowledge-based
Feature invariant
Facial Features
Texture
Skin Colour
Multiple Features
Template matching
Predefined face templates
Deformable Templates
Appearance based
Eigenfaces
Distribution based
Neural Network
Support Vector Machine
Naive Bayes Classifier
Hidden Markov Model
Information-Theoretical
Approach
Illumination normalisation.
Background removal.
3.2.2.3 Feature Extraction
This is the key step in face recognition in particular and in all pattern recognition
applications in general. Once the face detection task has detected and normalised a
face, the analysis can then take place by capturing the spatial geometry of distinguishing features of the face. There exist different methods to extract identifying
features of a face, but in general they can be classified into three approaches:
Feature-based approaches are based on the extraction of the properties of
individual organs located on a face such as eyes, nose and mouth including
their relationships with each other.
Appearance-based approaches are based on information theory concepts.
These approaches seek a computational model that best describes a face by
extracting the most relevant information contained in the face without dealing with the individual properties of facial organs such as eyes or mouth.
Hybrid approaches are similar to the human perception system which uses
both local features and the whole face region to recognise a face. A machine
recognition system should use both.
Table 3.2 presents some of the principal algorithms developed for feature extraction as described by Zhao and Chellappa [28].
26
Approach
Representative work
Appearance-based methods
Eigenfaces
Probabilistic Eigenfaces
Fisherfaces
SVM
Evolution pursuit
Feature lines
ICA
Kernel faces
Feature-based methods
Pure geometry methods
Dynamic link architecture
Hidden Markov model
Hybrid methods
Modular Eigenfaces
Hybrid LFA
Shape normalised
Component-based
3.2.2.4 Matching
The fourth step of a face recognition system is to compare the template generated in
step three with those in a database of known faces. In an identification application,
this process yields scores indicating how closely the generated template matches
each of those in the database. In a verification application, the generated template is
only compared with one template in the database, that of the claimed identity.
Finally, the system should determine if the produced score is high enough to
declare a match. The rules governing the declaration of a match are of two types: a
manual one where the end user has to determine if the result is satisfying or not and
an automatic type in which the measured distance (the matching score) should be
compared to a predefined threshold and a match is declared if the measured score is
higher than this threshold.
3.3
Previous Work
27
projected onto the face space to find a set of weights that describes the contribution
of each vector in the face space. To identify a test image, the projection of the test
image onto the face space is required to obtain the corresponding set of weights. By
comparing the weights of the test image with the set of weights of the faces in the
training set, the face in the test image can be identified.
The key procedure in PCA is based on KarhunenLoeve (KL) transformation. If
the image elements are considered to be random variables, then the image may be
seen as a sample of a stochastic process. The PCA basis vectors are defined as the
eigenvectors of the covariance matrix C:
C = E[X X T ]
(3.1)
Since the eigenvectors associated with the largest eigenvalues have face-like
images, they also are referred to as Eigenfaces. Specifically, suppose the eigenvectors of C are u 1 , u 2 , . . . , u n and are associated respectively with the eigenvalues
1 2 . . . n . Then
X=
n
x i u i
(3.2)
i=1
m
x i u i
(3.3)
i=1
28
X = AS
(3.4)
where A is a mn matrix of full rank, called the mixing matrix. In feature extraction,
the columns of A represent features, and si is the coefficient of the ith feature in the
observed data vector X .
There are several methods to compute the ICA. Here FastICA [10] is used
because of its fast convergence during the estimation of the parameters.
The FastICA method computes the independent components by maximising nonGaussianity of whitened data distribution using a kurtosis maximisation process.
The kurtosis measures the non-Gaussianity and the sparseness of the face representations [48]. The idea is to estimate the independent source signals U by computing
a separating matrix W where U = W X = W AS. First, the observed samples are
centred and whitened, this means that the data has a mean equal to zero and a standard deviation equal to one. Let us denote the centered and whitened samples by Z.
Then, one needs to search for the W matrix such that the linear projection of the
whitened samples by the matrix W has maximum non-Gaussianity of data distribution. The kurtosis of Ui = WiT Z is computed as:
K (Ui ) = E(Ui )4 3(E(Ui )2 )2
(3.5)
|W T S B W |
|W T SW W |
(3.6)
c
i=1
Ni (i )(i )T
(3.7)
3.3
Previous Work
29
SW =
c
(xk i )(xk i )T
(3.8)
i=1 xX i
(3.9)
30
Fig. 3.2 A two-class problem when one of the classes is a mixture of two Gaussians
order to divide the samples into a set of subclasses (clusters). Although, there exist
many clustering methods, it is accepted that the Nearest Neighbour (NN) method
yields superior or equivalent results when compared against other parametric methods such as K-means and Gaussian mixtures; or non-parametric clustering methods
such as the Valley-seeking algorithm of Koontz and Fukunaga [49]. In addition, the
NN-clustering is efficient because it can also be used when the number of samples
in each class is either large or small, and it does not require large computational
resources [6].
3.3.4.2 NN-Clustering
In a NN-Clustering approach the first step consists of sorting the feature vectors (i.e.
face images in our case) so that a set {xi1 , xi2 , . . . , xini } is constructed as follows: if
xi1 and xini are the two most distant feature vectors: arg max jk xi j xik 2 where
x2 is a norm-2 length of x with xi2 being the closest feature vector to xi1 and
xi(n c1 ) the closest feature vector to xin c . In general, xi j is the ( j 1)th closest
feature vector to xi1 .
Once this done, the sorted set {xi1 , xi2 , . . . , xini } is divided into M subclasses Hi
where i = 1, . . . M. For example, data can be divided into two equally balanced (in
the sense of having the same number of samples) clusters (H1 and H2 ) by simply
partitioning the sorted set into two parts:{xi1 , . . . , xi,ni /2 } and {xi,(ni /2)+1 , . . . , xini }.
More generally, one can divide each class into h (equally balanced) subclasses;
i.e. Hi = hi. This is suitable for such a case where the underlying distribution
3.4
31
1
G(, ) = exp
2
( W )2
2
+
2
2
(3.10)
(3.11)
(3.12)
x = a m (x cos + y sin )
(3.13)
y = a m (x sin + y cos )
(3.14)
32
Uh
Ul
1
S1
(a 1)Uh
(a + 1) 2 ln(2)
1
22
[2 ln(2)]2 2 2
2 ln(2)
Uh 2 ln
= tan
2k
Uh
Uh2
mu =
(3.15)
(3.16)
(3.17)
I (x1 , y1 )gmn
(x x1 , y y1 )d x1 dy1
(3.18)
where gmn
indicates the complex conjugate of gmn . The Gabor wavelet transformation of the facial image is calculated at S scales, m {0, 1, 2, . . . , S} and K
different orientations, n {0, 1, 2, . . . , K } and let us set Ul = 0.05 and Uh = 0.4.
Wmn denotes a Gabor wavelet transformation of a face image at the scale m and
orientation n. Figure 3.3 shows a sample face image from the database and its forty
filtered images (five scales: S = 5 and eight orientations: K = 8 have been taken).
The augmented Gabor-face vector can then be defined as follows [54]:
t
t
, . . . , W S,K
)t
= (W0,0
(3.19)
where t is the transpose operator. The augmented Gabor-face vector can encompass
all facial Gabor wavelet transformations, and has important discriminatory information that can be used in the classification step.
3.4
33
Fig. 3.3 Gabor filters (a) A face image from the database, (b) The filtered images: five scales and
eight orientations
34
(a)
(b)
5
4
5
5
6
6
8
7
Fig. 3.5 The frequency partition map for an eight-band DFB. (a) Input (b) Eight subband outputs
At this point, the output is used as the input for the next stage. Each of the subbands in the analysis part extracts frequency components based on the associated
frequency partition map as shown in Fig. 3.5.
In the synthesis bank, the dual operation is performed, i.e. the directional subband images are combined into a reconstructed image in the reverse order of the
analysis stage to enable a perfect reconstruction of the signal. However, it is important to mention that, in this work, we are only interested in the analysis section
since our goal is to extract discriminant features from each directional image. The
components of the analysis part are the downsampler D and the analysis filters
H0 and H1 .
3.4.2.1 Analysis Filters
One of the attractive features of the DFB is the fact that it can be implemented by
one filter prototype only. By using carefully designed unimodular matrices, the filter
design process can be reduced to require only one filter prototype H0 (). Therefore,
if the unimodular matrices which change the frequency components from R0i () to
H0 (), for i = 1, 2, 3, and 4, respectively, are determined (Fig. 3.6), then the systems in Fig. 3.7(a,b) are identical and only one filter prototype H0 () is required.
Consequently, H0 () can replace the four remaining filters R0i () using the unimodular matrices.
3.4
35
Fig. 3.7 Two identical structures in a DFB. (a) using R0i () alone and (b) using a unimodular
matrix with H0 ()
1 1
1 1
(3.20)
Simply speaking, a quincunx downsampling corresponds to a rotated downsampling. Figure 3.8 shows the original Lena image and its corresponding quincunx
downsampled image by Q 1 .
3.4.2.3 Overview of band DFB
The Four-band DFB: A four-band DFB is composed of two-band DFBs (Fig. 3.9)
arranged in a tree-like structure. After the modulator, the constituent frequency components are shifted, resulting in a diamond-like shape. Then, via the diamond filters, H0 () and H1 (), each of the four frequency regions is filtered then downsampled by a quincunx downsampler. By cascading another set of two-band DFBs
at the ends of the first two-band DFB, a four-band directional decomposition is
obtained.
The 2n -band DFB: Two-band and four-band DFBs lead to 2n -band extensions.
To expand to eight bands, one can apply a third stage in a cascade fashion. With
an input whose directional frequencies are labeled as shown in Fig. 3.5(a), an
36
eight-band DFB generates the eight subband outputs shown in Fig. 3.5(b). It is worth
noting that each of the subband images is smaller than the original input, which is
necessary to ensure a maximal DFB decimation.
3.5
37
images, noise in the original image is divided into four different directions, thus
reducing its energy by a factor of four [9].
38
Fig. 3.11 Directional images generated by DFB. (a) Directional Image 1, (b) Directional Image 2,
(c) Directional Image 3, (d) Directional Image 4
1
0.9
0.8
Recognition Rate
0.7
0.6
0.5
0.4
0.3
0.2
0.1
0
N=2
N=3
N=4
N=5
DFB Decomposition Level
N=6
N=7
3.5.2 PCA
In this experiment the original face database is used to extract features using the traditional Eigenfaces algorithm. The recognition rate is calculated for all the remaining faces in the database. The same system is maintained and applied to a new
database obtained after DFB pre-processing. An NN algorithm using Euclidian
distance is used to compute the distances between the different feature vectors.
3.5
39
Table 3.3 Experiment results for the DFBPCA method and comparison with the PCA algorithm.
Faces
PCA(%)
DFBPCA(%)
Improvement(%)
Normal
No glasses
Wink
Glasses
Sleepy
Surprised
Sad
Left-light
53.33
60
53.33
60
60
60
53.33
13.33
86.67
86.67
86.67
73.33
86.67
80
86.67
33.33
+62.50
+44.45
+62.50
+22.22
+33.33
+33.33
+62.50
+150.04
51.67
77.50
+49.99
Table 3.3 shows the results of this experiment over all the different expressions
and lighting conditions of the face images in the database.
Note that the improvement mentioned in Table 3.3 is a relative improvement and
can be calculated from the following equation:
I mpr ovement =
Rate(D F B S D A) Rate(S D A)
Rate(S D A)
(3.21)
It can be seen from Table 3.3 that low recognition accuracies are obtained for
both methods (i.e. PCA alone PCA with DFB pre-processing). However, it is interesting to remark that the worst results are obtained for faces with changes in lighting conditions (only 13% for PCA), but while using the Directional Filters, the
recognition rate has been improved by more than 150%. A general increase in
the recognition accuracy of around 50% over all the faces is enough to conclude
that a DFB implementation outperforms significantly its Eigenfaces counterpart
algorithm.
Figure 3.13 illustrates the results of an experiment conducted to show how the
database size affects the recognition accuracy. To do so, 15 image faces are randomly chosen from the Yale Database as test images while the number of reference
images per person is increased each time by one. A comparison with the GFB [7]
approach has been made to demonstrate that the proposed method clearly outperforms the other pre-processing algorithms even when database size is important.
3.5.3 ICA
This experiment is performed as with the PCA but using the FastICA algorithm
instead of the Eigenfaces algorithm. The results obtained are reported in Table 3.4
and the effect of the database size with a comparison with the GFB approach is
showed in Fig. 3.14.
40
Recognition Rate
0.7
0.6
0.5
0.4
PCA
GFBPCA
DFBPCA
0.3
0.2
0.1
0
30
45
60
Database Size
75
90
The ICA technique using both approaches significantly outperforms the PCA. In
addition, it can also be seen that DFB is able to improve further the ICA especially
for situations in which large facial changes occur (light source, glasses, etc.). An
overall recognition rate of 80.83% is obtained for the combined ICADFB method
with an overall improvement of 12.78%. This result clearly demonstrates the discriminating strength of a DFB pre-processing step.
Table 3.4 Experiment results for the DFBICA method and comparison with the ICA algorithm
Faces
ICA(%)
DFBICA(%)
Improvement(%)
Normal
No glasses
Wink
Glasses
Sleepy
Surprised
Sad
Left-light
80
73.33
86.67
66.67
93.33
66.67
93.33
13.33
93.33
80
86.67
73.33
93.33
86.67
100
33.33
+16.67
+09.10
0
+10
0
+30
+7.15
+150.04
71.67
80.83
+12.78
3.5
41
1
0.9
0.8
Recognition Rate
0.7
0.6
0.5
0.4
0.3
0.2
ICA
GFBICA
DFBICA
0.1
0
30
45
60
Database Size
75
90
3.5.4 LDA
It is well known that the main problem with principal component methods (PCA and
ICA) is the fact that they have no information about the class of each vector in the
training database. This means that each face image is treated separately. This disadvantage has been resolved when using the LDA method since all the face images for
one person are considered as one class. The same procedure is used as in the previous cases and the results obtained are showed in Table 3.5. A comparison with the
Gabor approach is also illustrated in Fig. 3.15. The results obtained clearly show the
LDA technique using both approaches (with and without DFB pre-processing) significantly outperforms the PCA. In addition, it can also be seen that DFB is able to
improve further the LDA especially when significant changes in the image occur. An
overall recognition rate of 91.67% is obtained for the combined LDADFB method
with an overall improvement of 4.77% which clearly demonstrates the discriminating strength of a DFB pre-processing step.
3.5.5 SDA
The principal idea of SDA is to divide each class (of the original LDA algorithm)
into multiple subclasses. This property is very interesting in our method, since, from
42
Table 3.5 Experiment results for the DFBLDA method and comparison with the ICA algorithm
Faces
LDA(%)
DFBLDA(%)
Improvement(%)
Normal
No glasses
Wink
Glasses
Sleepy
Surprised
Sad
Left-light
93.33
80
93.33
80
93.33
80
100
80
93.33
93.33
100
86.67
93.33
93.33
93.33
80
0
+16.67
+8.22
+8.34
0
+16.67
6.67
0
87.50
91.67
+4.77
each face image in the database, 2n directional images are generated (with n being
the order of the DFB). The best application of the SDA is to place all the directional faces of a person into the same subclass. Figure 3.16 shows the proposed
scheme for this method. To assess the performance of the method, the same steps
used in the previous approaches are followed: the original face database is used to
extract the features using the SDA algorithm as proposed in [6] and the recognition
rate is calculated for all remaining faces in the database. A combined DFBSDA
method is used as illustrated in Fig. 3.16 to compute the new recognition rates.
1
0.9
0.8
Recognition Rate
0.7
0.6
0.5
0.4
0.3
0.2
LDA
GFBLDA
DFBLDA
0.1
0
30
45
60
Database Size
75
90
3.5
43
SDA(%)
DFBSDA(%)
Improvement(%)
Normal
No glasses
Wink
Glasses
Sleepy
Surprised
Sad
Left-light
93.33
86.67
93.33
93.33
100
93.33
100
73.33
93.33
86.67
100
100
100
93.33
93.33
93.33
0
0
+8.22
+8.22
+0
0
6.67
+27.27
91.67
95.83
+4.54
The results obtained for both SDA and DFBSDA methods and the improvement
observed for different poses in the database are depicted in Table 3.6. The results
obtained demonstrate that a combined DFBSDA approach improves the recognition rate obtained when applying the SDA algorithm alone by 4.54%. In addition,
with an overall recognition rate of 95.83%, it can also be concluded that the idea of
dividing the classes into subclasses is compatible with DFB-based pre-processing.
Figure 3.17 shows the effect of database growing on the global recognition rate and
a comparison with the Gabor approach.
44
Recogntion rate
0.7
0.6
0.5
0.4
0.3
0.2
SDA
GFBSDA
DFBSDA
0.1
0
30
45
60
Database size
75
90
PCA, LDA, ICA and SDA (alone and pre-processed by the DFB) algorithms are
applied on the following database sizes: 50, 100, 200 and 300 using only one image
by person as reference. The average recognition rate is then calculated for all tests.
Table 3.7 depicts the experimental results obtained. From the table, it can be seen
that DFBs improve the results obtained using a larger database with varying conditions such as head rotation and face sizes. Overall, the improvements for the different algorithms are all over 13% which is very satisfying.
References
45
Table 3.7 Experiment results for the different methods with the FERET Database
Method
PCA(%)
ICA(%)
LDA(%)
SDA(%)
Without DFB
With DFB
Improvements
72.33
84.89
+17.36
61.77
74.00
+19.80
71.67
81.11
+13.17
74.22
84.90
+14.39
3.6 Conclusion
This chapter proposes a new method to enhance existing face recognition methods
such as PCA, ICA, LDA and SDA by using a DFB pre-processing. The results have
shown that this pre-processing step yields robustness against changes in expressions
and illumination conditions. This step can also be very helpful when the number of
face images in the database is insufficient since the number of images will increase
by a factor of 2n (n is the order of the DFB), thus providing more discriminant power
for the classification phase. It has been shown that this method is at least as good as
all the other approaches including those with GFB pre-processing.
The effect of DFB pre-processing is significant for the YALE and FERET
databases. This is demonstrated by overall recognition rate improvements varying
from 4.54% for the SDA algorithm to 49.99% for the PCA.
The efficiency of the proposed method has been demonstrated by improvements
of (Yale=49.99%, FERET=17.36%) for PCA, (Yale=12.78%, FERET=19.80%)
for ICA, (Yale=4.77%, FERET=13.17%) for LDA and (Yale=4.54%,
FERET=14.39%) for SDA. A recognition rate of 95.83% has been obtained
for the SDA algorithm combined with DFB pre-processing.
References
1. P. J. Phillips, P. J. Flynn, T. Scruggs, K. W. Bowyer, J. Chang, K. Hoffman, J. Marques,
J. Min and W. Worek, Overview of the face recognition grand challenge, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 947954,
June 2005.
2. A. Rosenfeld, W. Zhao, R. Chellappa and P. J. Phillips, Face recognition: A literature survey,
ACM Computing Surveys, vol. 35, no. 4, pp. 399458, 2003.
3. M. Turk and A. Pentland, Eigenfaces for recognition, Journal of Cognitive Neuroscience,
vol. 3. no. 1, pp. 7186, 1991.
4. M. S. Bartlett, J. R. Movellan and T. J. Sejnowski, Face recognition by independent component analysis, IEEE Transactions on Neural Networks, vol. 13, no. 6, pp. 14501464,
November 2002.
5. P. N. Belhumeur, J. P. Hespanha and D. J. Kriegman, Eigenfaces vs. fisher-faces: Recognition using class specific linear projection, IEEE Transactions Pattern Analysis and Machine
Intelligence, vol. 19, no. 7, pp. 711720, July 1997.
6. M. Zhu and A. M. Martinez, Subclass discriminant analysis, IEEE Transactions on Pattern
Analysis and Machine Intelligence, vol. 28, no. 8, pp. 12741286, August 2006.
46
7. W. R. Boukabou, L. Ghouti and A. Bouridane, Face recognition using a Gabor filter bank
approach, First NASA/ESA Conference on Adaptive Hardware and Systems, pp. 465468,
June 2006.
8. C. H. Park, J. J. Lee, M. Smith, S. Park and K. H. Park, Directional filter bank based fingerprint feature extraction and matching, IEEE Transactions on Circuits and Systems For Video
Technology, vol. 14, pp. 7485, January 2004.
9. M. A. U. Khan, M. K. Khan, M. A. Khan, M. T. Ibrahim, M. K. Ahmed and J. A.
Baig, Improved pca based face recognition using directional filter bank, IEEE INMIC,
pp. 118124, December 2004.
10. X. Yi qiong, L. Bi cheng and W. Bo, Face recognition by fast independent component analysis and genetic algorithm, IEEE on Computer and Information Technology, vol. 28, no. 8,
pp. 194198, August 2004.
11. P. J. Phillips, G. Grother, R. J. Micheals, D. M. Blackburn, E. Tabassi and J. M. Bone, Frvt
2002: Overview and summary. http://www.frvt.org/FRVT2002/documents.htm, March 2003.
12. D. M. Blackburn, Biometrics 101, version 3.1, volume 12. Federal Bureau of Investigation,
March 2004.
13. M. H. Yang, D. J. Kriegman and N. Ahuja, Detecting faces in images: A survey, IEEE
Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 1, p. 3458, January
2002.
14. J. Fagertun, Face Recognition. PhD thesis, Technical University of Denmark, 2006.
15. G. Yang and T. S. Huang, Human face detection in complex background, Pattern Recognition, vol. 27, no. 1, pp. 5363, 1994.
16. K. C. Yow and R. Cipolla, Feature-based human face detection, Image and Vision Computing, vol. 15, no. 9, pp. 713735, 1997.
17. Y. Dai and Y. Nakano, Face-texture model based on sgld and its application in face detection
in a color scene, Pattern Recognition, vol. 29, no. 6, pp. 10071017, 1996.
18. S. McKenna and S. Gong and Y. Raja, Modelling facial colour and identity with Gaussian
mixtures, Pattern Recognition, vol. 31, no. 12, pp. 18831892, 1998.
19. R. Kjeldsen and J. Kender, Finding skin in color images, Automatic Face and Gesture
Recognition, pp. 312317, 1996.
20. T. Craw, D. Tock and A. Bennett, Finding face features, Proceeding of Second European
Conference on Computer Vision, pp. 9296, 1992.
21. A. Lanitis, C. J. Taylor and T. F. Cootes, An automatic face identification system using flexible appearance models, Image and Vision Computing, vol. 13, no. 5, pp. 393401, 1995.
22. K.-K. Sung and T. Poggio, Example-based learning for view-based human face detection,
IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 1, pp. 3951,
January 1998.
23. H. Rowley, S. Baluja and T. Kanade, Neural network-based face detection, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 1, pp. 2338, January 1998.
24. E. Osuna, R. Freund and F. Girosi, Training support vector machines: An application to face
detection, Proceeding of IEEE Conference on Computer Vision and Pattern Recognition,
pp. 130136, 1997.
25. H. Schneiderman and T. Kanade, Probabilistic modeling of local appearance and spatial
relationships for object recognition, Proceeding of IEEE Conference on Computer Vision
and Pattern Recognition, pp. 4551, 1998.
26. A. Rajagopalan, K. Kumar, J. Karlekar, R. Manivasakan, M. Patil, U. Desai, P. Poonacha
and S. Chaudhuri, Finding faces in photographs, Proceeding of Sixth IEEE International
Conference on Computer Vision, pp. 640645, 1998.
27. A. J. Colmenarez and T. S. Huang, Face detection with information-based maximum discrimination, Proceedings of IEEE Conference on Computer Vision and Pattern Recognition,
pp. 782787, 1997.
28. W. Zhao and R. Chellappa, Face Processing: Advanced Modeling and Methods, Academic
Press, New York, 2006.
References
47
29. B. Moghaddam and A. Pentland, Probabilistic visual learning for object representation,
IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 696710,
July 1997.
30. P. J. Phillips, Support vector machines applied to face recognition, Proceedings of the 1998
conference on Advances in neural information processing systems, pp. 803809, 1998.
31. C. Liu and H. Wechsler, Evolutionary pursuit and its application to face recognition, IEEE
Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 6, pp. 570582, June
2000.
32. S. Z. Li and J. Lu, Face recognition using the nearest feature line method, IEEE Transactions
on Neural Networks, vol. 10, no. 2, pp. 439443, March 1999.
33. M. S. Bartlett, H. M. Lades and T. J. Sejnowski, Independent component representation
for face recognition, Proceedings of SPIE Symposium on Electronic Imaging: Science and
Technology, pp. 528539, 1998.
34. M.-H. Yang, Kernel eigenfaces vs. kernel fisherfaces: Face recognition using kernel methods, FGR 02: Proceedings of the Fifth IEEE International Conference on Automatic Face
and Gesture Recognition, IEEE Computer Society, Washington, DC, USA, p. 215, 2002.
35. M. D. Kelly, Visual identification of people by computer, Tech. rep. AI-130, Stanford AI
Project, Stanford, CA, 1970.
36. T. Kanade, Picture processing system by computer complex and recognition of human faces,
In doctoral dissertation, Kyoto University, November 1973.
37. T. Kanade, Computer recognition of human faces, Interdisciplinary Systems Research,
vol. 47, 1977.
38. I. J. Cox, J. Ghosn and P. N. Yianilos, Feature-based face recognition using mixturedistance, Proceedings of IEEE Conference on Computer Vision and Pattern Recognition,
pp. 209216, 1996.
39. B. S. Manjunath, R. Chellappa and C. V. D. Andmalsburg, A feature based approach to face
recognition, Proceedings of IEEE Conference on Computer Vision and Pattern Recognition,
pp. 373378, 1992.
40. K. Okada, J. Steffans, T. Maurer, H. Hong, E. Elagin, H. Neven and C. Von der Malsburg, The Bochum/USC Face Recognition System And How it Fared in the FERET
Phase III test. In H. Wechsler, P. J. Phillips, V. Bruce, F. Fogeman Soulie and T.
S. Huang, editors, Face Recognition: From Theory to Applications. Springer-Verlag,
pp. 186205, 1998.
41. L. Wiskott, J.-M. Fellous and C. Von Der Malsburg, Face recognition by elastic bunch graph
matching, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7,
pp. 775779, January 1997.
42. F. Samaria, Face Recognition Using Hidden Markov Models. PhD thesis, University of Cambridge, UK, 1994.
43. A. V. Nefian and M. H. Hayes, Hidden Markov models for face recognition, In Proceedings of International Conference on Acoustics, Speech and Signal Processing, pp. 27212724,
1998.
44. A. Pentland, B. Moghaddam and T. Starner, View-based and modular eigenspaces for face
recognition, Proceeding of IEEE Conference on Computer Vision and Pattern Recognition,
pp. 8491, June 1994.
45. P. Penev and J. Atick, Local feature analysis: A general statistical theory for object representation, Network: Computation in Neural Systems, vol. 7, pp. 477500, June 1996.
46. J. Huang and B. Heisele, Scomponent-based face recognition with 3d morphable models, In
Proceedings of International Conference on Audio-and Video-Based Person Authentication,
2003.
47. M. Turk and A. Pentland, Face recognition using eigenfaces, IEEE Conference on Computer
Vision and Pattern Recognition, pp. 586591, June 1991.
48. A. J. Bell and T. J. Sejnowski, The independent components of natural scenes are edge filters
Vision Research, vol. 37, no. 23, pp. 33273338, 1997.
48
49. K. Fukunaga, Introduction to Statistical Pattern Recognition, (2nd edition). Academic Press,
New York, 1990.
50. A. Buja, T. Hastie and R. Tibshirani, Penalized discriminant analysis, Annals of Statistics,
vol. 23, pp. 73102, 1995.
51. T. Hastie, R. Tibshirani and A. Buja, Flexible discriminant analysis by optimal scoring,
Journal of the American Statistical Association, vol. 89, pp. 12551270, 1994.
52. G. Baudat and F. Anouar, Generalized discriminant analysis using a kernel approach, Neural
Computation, vol. 12, pp. 23852404, 2000.
53. B. S. Manjunath and W. Y. Ma, Texture features for browsing and retrieval of image data,
IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, no. 8, pp. 837842,
August 1996.
54. G. Dai and C. Zhou, Face recognition using support vector machines with the robust feature,
The 2003 IEEE International Workshop on Robot and Human interactive Communication
Millbrae, California, USA, 2003.
55. S. Park, New Directional Filter Banks and Their Applications in Image Processing. PhD
thesis, Georgia Institute of Technology, 1999.
56. P. J. Phillips, H. Moon, P. J. Rauss S. and Rizvi, The feret evaluation methodology for face
recognition algorithms, IEEE Transactions on Pattern Analysis and Machine Intelligence,
vol. 22, no. 10, pp. 10901104, October 2000.
57. Department of Computer Science Yale University. The Yale face database.
http://cvc.yale.edu/projects/yalefaces/yalefaces.html.
Chapter 4
4.1 Introduction
Consistent and protected identification of a person is a key subject in security. In
government and conventional environments, security is usually provided through
badges, provision of information for visitors and issuing of keys. These are the most
common means of identification since they are the easiest to remember and the easiest to confirm. However, these solutions are the most unreliable putting all components of security at risk. IDs can be stolen, passwords can be forgotten or cracked. In
addition, security breaches resulting in access to restricted areas of airports or other
sensitive areas are a source of concern for terrorist activities. Although there are
laws against false identification, incidents of invasions and unauthorised modifications to information occur daily with catastrophic effects. For example, credit card
fraud is rapidly increasing and traditional technologies are not sufficient to reduce
the impact of counterfeiting and/or security breaches therefore a more secure technology is needed to cope with the drawbacks and pitfalls [1]. Biometrics, the use of
biology, which deals with data statistically, provides a powerful answer to this need,
since the uniqueness of an individual arises from his/her personal or behavioural
characteristics with no passwords or numbers to remember. These include fingerprint, retinal and iris scanning, hand geometry, voice patterns, facial recognition
and other techniques. Typically a biometric recognition system records data from a
user and performs a comparison each time the user attempts to claim his/her identity.
Biometric systems can be classified into two operating modes: verification and
identification modes. In the verification mode, the user claims an identity and
the system compares the extracted features with the stored template of the asserted
identity to determine if the claim is true or false. In the identification mode, no
identity is claimed and the extracted feature set is compared with the templates of all
the users in the database in order to recognise the individual. For such approaches
to be widely applicable, they must be highly reliable [2]. Reliability relates to the
ability of the approach to support a signature that is unique to an individual and
that can be captured in an invariant manner over time. The use of biometric traits
require that a particular biometric factor be unique for each individual that it can be
readily measured, and that it is invariant over time. Biometrics such as signatures,
A. Bouridane, Imaging for Forensics and Security, Signals and Communication
Technology, DOI 10.1007/978-0-387-09532-5 4,
C Springer Science+Business Media, LLC 2009
49
50
photographs, fingerprints, voiceprints and retinal blood vessel patterns, all have significant drawbacks. Although signatures and photographs are cheap and easy to
obtain and store, they are insufficient to identify automatically with assurance, and
can be easily forged. Electronically recorded voiceprints are susceptible to changes
in a persons voice, and they can be counterfeited. Fingerprints or handprints require
physical contact, and they also can be counterfeited and marred by artifacts [2].
It is currently accepted within the biometric community that biometrics has the
potential for high reliability because it is based on the measurement of an intrinsic physical property of an individual. Fingerprints, for example, provide signatures that appear to be unique to an individual and reasonably invariant with age,
whereas faces, while fairly unique in appearance can vary significantly with time
and place. Invasiveness, the ability to capture the signature while placing as few
constraints as possible on the subject of evaluation, is another constraint. In this
regard, acquisition of a fingerprint signature is invasive as it requires that the subject makes physical contact with a sensor, whereas images of a subjects face or
iris that are sufficient for recognition can require a comfortable distance. Considerations of reliability and invasiveness suggest that the human iris is a particularly
interesting structure on which to base a biometric approach for personnel verification and identification [3]. From the point of view of reliability, the special patterns
that are visually apparent in the human iris are highly distinctive to an individual;
the appearance of a subjects iris suffers little from day to day variations. In addition, the method is non-invasive since the iris is an overt body that can be imaged
at a comfortable distance from a subject with the use of extant machine vision technology. Owing to these features of reliability and non invasiveness, iris recognition is a promising approach to biometric-based verification and identification of
people [2].
An authentication system on iris recognition is reputed to be the most accurate
among all biometric methods because of its acceptance, reliability and accuracy.
Ophthalmologists originally proposed that the iris of the eye might be used as a
kind of optical fingerprint for personal identification [4]. Their proposal was based
on clinical results that every iris is unique and it remains unchanged in clinical
photographs. The human iris begins to form during the third month of gestation and
is complete by the eighth month, though pigmentation continues into the first year
after birth. It has been discovered that every iris is unique since two people (even
two identical twins) have uncorrelated iris patterns [5], and yet stable throughout
the human life. It is suggested in recent years that the human irises might be as
distinct as fingerprint for different individuals, leading to the idea that iris patterns
may contain unique identification features.
In 1936, Frank Burch, an ophthalmologist, proposed the idea of using iris patterns
for personal identification [6]. However, this was only documented by James Doggarts in 1949. The idea of iris identification for automated recognition was finally
patented by Aran Safir and Leonard Flom in 1987 [6]. Although they had patented
the idea, the two ophthalmologists were unsure as to a practical implementation of
the system. They commissioned John Daugman to develop the fundamental algorithms in 1989. These algorithms were patented by Daugman in 1994 and now
4.2
51
form the basis for all current commercial iris recognition systems. The Daugman
algorithms are owned by Iridian Technologies and they are licensed to several other
companies [6].
52
Bae et al. [10] projected the iris signals onto a bank of basis vectors derived by
independent component analysis and quantised the resulting projection coefficients
as features. In another approach by Li Ma et al., Multichannel [9] and Even Symmetry Gabor filters [4] were used to capture local texture information of the iris, which
is used to construct a fixed length feature vector. Nearest feature line method is used
for iris matching. In [21] a set of 1D intensity signals is constructed to effectively
characterise the most important information of the original 2D image using a particular class of wavelets; a position sequence of local sharp variation points in such
signals is recorded as features. A fast matching scheme based on an exclusive OR
operation is used to compute the similarity between a pair of position sequences.
4.3
Iris Localisation
53
eyelid should be included. The eyelid boundary can also be irregular due to the presence of eyelashes. From these suggestions, it can be said that in iris segmentation a
wide range of edge contrasts must be taken into consideration, and iris segmentation
must be robust and effective.
(4.1)
ds .
max(r, x0 , y0 ) G (r )
r (r,x0 ,y0 ) 2r
The operator pixel-wise searches throughout the raw input image, I(x,y), and
obtains the blurred partial derivative of the integral over normalised circular contours in different radii. The pupil and limbus boundaries are expected to maximise
the contour integral derivative, where the intensity values over the circular borders
would make a sudden change.
G (r) is a smoothing function controlled by that smoothes the image intensity
for a more precise search.
54
This method can result in false detection due to noise such as strong boundaries
of upper and lower eyelids since it works only on a local scale.
4.3.3.2 Hough Transform
Hough transform is a standard image analysis tool for finding curves that can be
defined in a parametrical form such as lines and circles. The circular Hough transform can be employed to deduce the radius and centre coordinates of the pupil and
iris regions.
Wildes [3], Kong and Zhang [23], Tisse et al. [17] and Ma et al. [24] have all used
Hough transform to localise irises. The localisation method, similar to Daugman s
method, is also based on the first derivative of the image. In the method proposed by
Wildes, an edge map of the image is first obtained by thresholding the magnitude of
the image intensity gradient:
where
G(x, y) =
2
0)
1 (xx0 )2 +(yy
22
e
2
2
(4.2)
(4.3)
Wildes et al. and Kong and Zhang also make use of the parabolic Hough transform to detect the eyelids by approximating the upper and lower eyelids with
parabolic arcs.
The Hough transform method requires the threshold values to be chosen for edge
detection, and this may result in critical edge points being removed, thus resulting
in failures to detect circles/arcs. In addition, Hough transform is computationally
intensive due to its brute-force approach, and thus may not be suitable for realtime applications.
4.3.3.3 Discrete Circular Active Contours
Ritter proposed an active contour model to localise iris in an image [25]. The model
detects pupil and limbus by activating and controlling the active contour using two
defined forces: internal and external forces.
The internal forces are designed to expand the contour and keep it circular. This
force model assumes that pupil and limbus are globally circular, rather than locally,
to minimise the undesired deformations due to peculiar reflections and dark patches
near the pupil boundary. The contour detection process of the model is based on
4.4
55
the equilibrium of the defined internal forces with the external forces. The external forces are obtained from the grey level intensity values of the image and are
designed to push the vertices inward.
The movement of the contour is based on the composition of the internal and
external forces over the contour vertices. Each vertex is moved between time t and
(t+1) by
Vi (t + 1) = Vi (t) + Fint,i + Fext,I
(4.4)
where Fint,i i is the internal force, Fext,i is the external force and Vi is the position of
vertex i.
A point interior to the pupil is located from a variance image and then a discrete
circular active contour (DCAC) is created with this point as its centre. The DCAC
is then moved under the influence of internal and external forces until it reaches
equilibrium, and the pupil is then localised.
4.4.1 Motivation
4.4.1.1 Edge Detector Using Wavelets
Edges in images can be mathematically defined as local singularities. Until recently,
the Fourier transform was the main mathematical tool for analysing singularities.
However, the Fourier transform is global and as such not well adapted to local singularities and it is hard to find the location and spatial distribution of singularities with
Fourier transforms. On the other hand, Wavelet transforms provide a local analysis;
they are especially suitable for time-frequency analysis [26] such as for singularity detection problems. With the growth of wavelet theory, the wavelet transforms
have been found to be remarkable mathematical tools to analyse the singularities
56
including the edges, and further, to detect them effectively. Mallat, Hwang, and
Zhong [27, 28] proved that the maxima of the wavelet transform modulus can detect
the location of the irregular structures. The wavelet transform characterises the local
regularity of signals by decomposing them into elementary building blocks that are
well localised both in space and frequency. This not only explains the underlying
mechanism of classical edge detectors, but also indicates a way of constructing optimal edge detectors under specific working conditions.
A remarkable property of the wavelet transform is its ability to characterise the
local regularity of functions. For an image f(x, y), its edges correspond to singularities of f(x, y), and thus are related to the local maxima of the wavelet transform
modulus. Therefore, the wavelet transform can be used as an effective method for
edge detection.
Assume f(x, y) is a given image of size M N. At each scale j with j>0 and
S0 f = f(x, y), the wavelet transform decomposes S j1 f into three wavelet bands: a
low-pass band S j f, a horizontal high-pass band W Hj f and a vertical high-pass band
WVj f. The three wavelet bands (S j f, W Hj f, WVj f) at scale j are of size MN, which is
the same as the original image, and all filters used at scale j (j>0) are upsampled by
a factor of 2 j compared with those at scale zero. In addition, the smoothing function
used in the construction of a wavelet reduces the effect of noise. Thus, the smoothing
step and edge detection step are combined together to achieve the optimal result.
4.4.1.2 Multiscale Edge Detection
The resolution of an image is directly related to the appropriate scale for edge detection. High resolution and a small scale will result in noisy and discontinuous edges;
low resolution and a large scale will result in undetected edges. The scale controls
the significance of edges to be shown. Edges of higher significance are more likely
to be preserved by the wavelet transform across the scales. Edges of lower significance are more likely to disappear when the scale increases.
Since an edge separates two different regions, an edge point is a point where the
local intensity of the image varies rapidly more rapidly than in the neighbouring
points which are close from the edge; such a point could therefore be characterised
as a local maximum of the gradient of the image intensity. The problem is that such
a characterisation is to be applied to differentiable images, and above all that, it also
detects all noise points. All techniques used so far to resolve the problem are based
on smoothing the image first [15, 3, 29, 30]. However, a problem with smoothing
arises: how much and what smoothing should one choose? A strong smoothing will
lead to the detection of fewer points while a lighter one will be more permissive.
That is why Mallat defined, in his work with Zhong [6], the concept of multiscale
contours. In this case, every edge point of an image is characterised by a whole chain
of the scale-space plane: the longer the chains, the more important the smoothing
imposed is, and the smaller the number of edge points we will get. In addition, this
allows us to extract useful information about the regularity of the image at the edge
point it characterises. This can be very attractive in terms of a finer characterisation
of edge map.
4.4
57
The multiscale edge detection method described in [31] is used to find the edges.
This wavelet is a nonsubsampled wavelet decomposition and essentially implements
the discretised gradient of the image at different scales. At each level of the wavelet
transform the modulus M j f of the gradients can be computed by
M j f = W jH
2
f + W jV
2
f
(4.5)
W jV
W jH
.
(4.6)
(4.7)
58
Multiscale
edge detection
Local
maxima
Iris outer
boundary edge
map
Pupil
boundary
edge map
Iris outer
circle detection
Pupil
circle
detection
Eyelids and
eyelashes isolating
Iris
normalization
0
0
0.12
5
0.37
5
0.37
5
0.12
5
0
0
0
0
2.0
2.0
0
0
4.4
59
Wh (x, y, s) =
(4.8)
(4.9)
We denote by
D the Dirac filter whose impulse response is equal to 1 at 0 and 0 otherwise.
A (H, L) the separable convolution of the rows and columns, respectively, of
the image A with the 1D filters H and L.
Gs , Hs are the discrete filters obtained by putting 2s 1 zeros between consecutive
coefficients of H and G.
s , as explained in [27] due to discretisation, the wavelet modulus maxima of
a step edge do not have the same amplitude at all scales as they should fit in
a continuous model. The constants s compensate for this discrete effect. The
values of s are given in Table 4.2.
Figure 4.3 clearly shows the application of the algorithm on an eye image where it
can be observed that the edges of the image in both horizontal and vertical directions
and at different scales are efficiently computed.
From Fig. 4.3 it can also be observed that there is significant information about
the edge in an eye image, with Wh (x, y, s) eyelids and that the horizontal pupils
lines are clearer than the outer boundary circle, and with Wv (x, y, s) useful information about both pupil and outer boundary circles. After computing the two
components of the wavelet transform, we compute the modulus at each scale as
follows:
(4.10)
M(x, y, s) = |wh (x, y, s)|2 + |wv (x, y, s)|2
The modulus M(x, y, s) has local maxima in the direction of the gradient given by
2
3
4
5
1.50
1.12
1.03
1.01
1.00
(4.11)
60
From the modulus M(x,y,s) one can see how edges across the scales change and
only real edges remaining at all the scales, for example if the intensities along specified column (see Fig. 4.4) are compared one can determine how well the edges are
detected.
A thresholding operation is then applied to the modulus M(x,y, s). This is carried
out on the modulus maxima MAX(M(x, y, s)) and then multiplied by a factor to
obtain a threshold value that yields an edge map. The threshold value T is computed
as follows:
T = MAX(M(x, y, s)).
(4.12)
4.4
61
80
70
60
50
40
30
20
10
50
100
150
200
250
300
50
100
150
200
250
300
50
100
150
200
250
300
50
100
150
200
250
300
50
100
150
200
250
300
80
70
60
50
40
30
20
10
80
70
60
50
40
30
20
10
0
100 0
90
80
70
60
50
40
30
20
10
120
100
80
60
40
20
0
0
Fig. 4.4 The first column on the left shows the modulus images M(x, y, s) for 1 s 5, and the
second column on the right displays intensities along specified column
62
n
h(xi , yi , xc , yc , r )
(4.13)
i=1
(4.13a)
For edge detection for iris boundaries the above equation will become
(xi xc )2 + (yi yc )2 r2 = 0
(4.13b)
4.4
63
64
Fig. 4.10 Unwrapping the
iris
0
r
The normalisation process involves unwrapping the iris and converting it into its
polar equivalent. It is done using Daugmans Rubber sheet model (Fig. 4.10). The
centre of the pupil is considered as the reference point and a remapping formula is
used to convert the points from the Cartesian scale to the polar scale.
The remapping of iris image I(x, y) from raw Cartesian coordinates to polar
coordinates (r, ) can be represented as
I (x(r, ), y(r, )) I (r, )
(4.14)
(4.14a)
(4.14b)
where x p (), y p () and xl (), yl () are the coordinates of the pupil and iris boundaries along the direction .
In this model a number of data points are selected along each radial line (defined
as the radial resolution). The number of radial lines going around the iris region is
defined as the angular resolution as shown in Fig. 4.11. The normalisation process
proved to be successful as demonstrated by Fig. 4.12 showing the normalised iris of
image 4.11.
4.4
65
66
In the proposed algorithm the threshold value is selected by computing the maximum of the modulus at a given scale s which provides a solid criterion, because the
sharp variation points of the image smoothed by h(x, y, s) are the pixels at locations
(x, y), where the modulus M(x, y, s) has a local maxima in the direction of the gradient A(x, y, s) [31]. It can be clearly seen from Fig. 4.15 that edges are well detected
and the pupil is clearer as shown in (b) and (c) than the edge and pupil as shown in
(a). It can also be seen that, as a result, the pupils circle is well localised as shown
in (e). This is the reason why the proposed algorithm outperforms other algorithms
which used a local scale and canny edge detector.
This analysis confirms and explains the effectiveness of our proposed method
based on multiscale edge detection using wavelet maxima for iris segmentation,
provides a precise detection of circles (iris outer boundary and pupil boundary)
and obtains a precise edge map from the wavelet decomposition in the horizontal
and vertical directions. This in turn greatly reduces the search space for the Hough
transform and performs well in the presence of noise, thereby improving the overall
performance with a better success rate than that of Daugman and Wildes methods
(Fig. 4.16).
Fig. 4.15 Edge influence in iris segmentation: (a) pupil edge map using Canny edge detector
and threshold value (T1 = 0.25 and T2 = 0.25), (b) and (c) pupil edge obtained with a multiscale
edge detection using wavelet maxima for = 0.4 and = 0.6, (d) result of iris segmentation
using Canny edge detector of example (a), (e) result of iris segmentation using a multiscale edge
detection of example (c)
4.5
67
100
99
98
Success rate
97
96
95
94
93
Daugman's method
Proposed method
Wildes' method
92
91
10
20
30
Noise
40
50
60
68
1 x2
y2
1
exp[ ( 2 + 2 ) + 2j W x]
2 x y
2 x
y
(4.15)
1 (u W )2
v2
G(u, v) = exp[ (
+
)]
2
u2
v2
(4.16)
where u = 1/2 x and v = 1/2 y . Gabor functions can form a complete but
non-orthogonal basis set. Expanding a signal using this basis provides a localised
0.12
0.1
0.08
0.06
0.04
0.02
0
-0.02
-0.04
-0.06
-0.08
20
40
60
80
100
Fig. 4.17 Wavelet maxima vertical component at scale 2 with intensities along specified column
0.08
0.06
0.04
0.02
0
-0.02
-0.04
-0.06
-0.08
-0.1
10
20
30
40
50
60
70
80
90
100
Fig. 4.18 Wavelet maxima horizontal component at scale 2 with intensities along specified column
4.5
69
(4.17)
x = a m (x cos + y sin ),
(4.17a)
y = a m (x sin + y cos ),
(4.17b)
u =
(4.18)
(a 1)Uh
(a + 1) 2 ln 2
(4.19)
0.6
0.5
0.4
0.3
0.2
0.1
0
0.1
0.2
0.6
0.4
0.2
0.2
0.4
0.6
Fig. 4.19 Gabor filter dictionary, the filter parameters used are Uh = 0.4, Ul = 0.05,K = 6
and S = 4
70
v = tan(
2
(2 ln 2)2 u2 1
)[Uh 2 ln( u )][2 ln 2
]2
2k
Uh
Uh2
(4.20)
Normalized iris
4.6
Matching
71
(4.21)
Moments Invariant
The theory of moments provides an interesting series expansion for representing
objects. This is also suitable to mapping the filtered images to vectors so that their
similarity distance can be measured [33].
Certain operations of moments are invariant to geometric transformations such
as translations, rotations and scaling. Such features are useful in the identification of
objects with unique signatures regardless of their location, size and orientation [33].
A set of seven 2D moments invariant that are insensitive to translations, rotations
and scaling have been computed for each image analysed by the horizontal and vertical components wavelet maxima and Gabor filters. This has produced 240 filtered
images for each image. Therefore, for seven moments a feature vector of size 1680
(2407) elements is constructed.
4.6 Matching
It is very important to present the obtained vector in a binary code because it is
easier to compute the difference between two binary code-words than between two
number vectors since Boolean vectors are always easier to compare and manipulate.
A Hamming distance matching algorithm for the recognition of two samples has
been employed. It is basically an Exclusive OR (XOR) operation between two bit
patterns. Hamming distance is a measure, which delineates the differences, of iris
codes. Every bit of a presented iris code is compared to the every bit of a referenced
iris code: if the two bits are the same (i.e. two 1s or two 0s) the system assigns
a value 0 to that comparison while a value of 1 is assigned if the two bits are
different. The formula for iris matching is therefore as follows:
72
HD =
1
Pi Ri
N
(4.22)
Tz
Tb
100
(4.23)
where Tz is the total number of zeros calculated by Hamming distance vector and
Tb is the total number of bits in iris template.
4.7.1 Database
The Chinese Academy of Sciences Institute of Automation (CASIA) eye image
database [34] containing 756 greyscale eye images with 108 unique eyes or classes
and 7 different images of each unique eye has been used in the analysis. Images from
each class are taken from two sessions with 1 month interval between the sessions.
The images were captured especially for iris recognition research using specialised
digital optics developed by the National Laboratory of Pattern Recognition, China.
The eye images are mainly from persons of Asian decent, whose eyes are characterised by irises that are densely pigmented, and with dark eyelashes.
4.7
73
some types of texture if some conditions are not met. A solution is therefore required
to combine a number of techniques to balance this problem with a view to analyse
all types of texture.
Our proposed approach has demonstrated that a combined multiscale technique
is effective and robust for the analysis of iris texture and a high system performance
can be achieved (Table 4.4).
Statistical features
Moment invariants
480
99.60
1680
99.52
74
Daugman
LiMa and Tan
Boles et boashashe
Proposed method (statistical features)
Proposed method (moments invariant)
99.90
99.23
93.2
99.52
99.60
local region must be small enough, which results in a high dimensionality of the feature vector (2048 components). This means that Daugmans method captures much
more information in the much smaller local regions. This makes his method slightly
better than ours. Boles [14] and Li Ma et al. [21] used a kind of 1D ordinal measures
thereby losing much information when compared with 2D ordinal representations.
This directly leads to a worsening of their performances when compared with our
method. Boles and Boashash [14] only employed extremely little information along
a virtual circle on the iris to represent the whole iris, which in turn, results in a
relatively low accuracy. Li Ma et al.s [21] method uses local features, so that the
performance may be affected by iris localisation, noise and inherent iris deformations caused by pupil movements.
Table 4.5 depicts the computational cost involved for the computation of the
feature extraction using the methods described in [4, 14, 21] including our proposed
algorithm. These experiments are carried out using Matlab 7.0. Since Boles method
[14] is based on 1D signal analysis, the computational cost incurred is smaller than
that of the other methods. However, our proposed approach is faster than both Daugman and Li Ma and Tans methods because it employs a compact feature vector
representation with a high recognition rate.
Table 4.5 Comparison of the computational complexity
Methods
Daugman
Li Ma
and Tan
Feature
extraction
complexity
(ms)
285
95
Boles and
Boashash
Proposed
(statistical
features)
Proposed
(moment
invariants)
55
74
81
References
75
wavelet maxima to define pupil and iris edges. This in turn greatly reduces the
search space for the Hough transform, thereby improving the overall performance.
A combination of Gabor filters with wavelet maxima components provide more
textured information, since wavelet maxima allow us to efficiently detect horizontal and vertical details through scale variations. By applying Gabor filters
to the resulting components with varying orientations and scales more precise
information can be captured for use in iris recognition accuracy.
Moment invariants are useful and are efficient to capture iris features since they
are insensitive to affine transformations (i.e. translations, rotations and scaling)
thereby providing a complete and compact feature vector which can improve the
matching process quickly.
The experimental results also show that our proposed method is reasonable and
promising for the analysis of iris texture. Future work will include:
Analysis of local variations to precisely capture local fine changes of the iris with
a view to further improve the accuracy.
Analysis of a combined local and global texture analysis for robust iris
recognition.
4.9 Conclusion
Iris recognition, as a biometric technology, has great potential for security and identification applications. This is mainly due to its variability and stability features. This
chapter has discussed an iris localisation method based on a multiscale edge detection approach using wavelet maxima as a preprocessing step which is highly suitable
for the detection of iris outer and inner circles. This approach yields attractive iris
localisation a necessary step to achieve higher recognition accuracy. The chapter
has also introduced a novel and efficient multiscale approach for iris recognition
based on a combined feature extraction methods that consider both the textural and
topological features of an iris image. These features being invariant to translations,
rotations and scaling yield a superior performance in terms of recognition accuracy
and computational cost when compared against the algorithms proposed by Boles
[14] and Li Ma et al. [21]. However, it performs with a marginally less accuracy, but
with a lower complexity when compared against Daugmans method [4].
References
1. M. K. Khan, J. Zhang and S. J. Horng, An effective iris recognition system for identification
of humans, INMIC Multitopic Conference, pp. 114117, 2426 December 2004.
2. J. Wayman, A. Jain, D. Maltoni and D. Maio, Biometric systems, Technology, Design and
Performance Evaluation, Springer-Verlag London, UK, 2005.
3. R. Wildes, Iris recognition: an emerging biometric technology, Proceedings of the IEEE,
vol. 85, pp. 13481363, 1997.
76
4. J. Daugman, High confidence visual recognition of persons by a test of statistical independence, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15,
pp. 11481161, 1993.
5. A. Muron and J. Pospisil, The human iris structure and its usages, Physica, vol. 39,
pp. 8795, 2000.
6. P. C. Kronfeld, The gross and embryology of the eye, The Eye, vol. 1, pp. 166, 1968.
7. L. Flom and A. Safir, Iris Recognition System, U.S. Patent 4 641 394, 1987.
8. R. Wildes, J. Asmuth, G. Green, S. Hsu, R. Kolczynski, J. Matey and S. McBride, A machinevision system for iris recognition, Machine Vision and Applications, vol. 9, pp. 18, 1996.
9. R. Johnson, Can Iris Patterns be Used to Identify People? Chemical and Laser Sciences
Division LA-12 331-PR, Los Alamos National Laboratory, Los Alamos, CA, 1991.
10. K. Bae, S. Noh and J. Kim, Iris feature extraction using independent component analysis,
Proceedings of the 4th International Conference Audio- and Video-Based Biometric Person
Authentication, pp. 838844, 2003.
11. J. Daugman, Biometric Personal Identification System Based on Iris Analysis, U.S. Patent
5 291 560, 1994.
12. J. Daugman, Demodulation by complex-valued wavelets for stochastic pattern recognition, International Journal of Wavelets, Multiresolution and Information Processing, vol. 1,
pp.1 17, 2003.
13. R. Sanchez-Reillo and C. Sanchez-Avila, Iris recognition with low template size, Proceedings of the International Conference Audio and Video-Based Biometric Person Authentication, pp. 324329, 2001.
14. W. Boles and B. Boashash, A human identification technique using images of the iris
and wavelet transform, IEEE Transactions on Signal Processing, vol. 46, pp. 11851188,
1998.
15. J. Daugman, How iris recognition works, IEEE Transactions on Circuits and Systems for
Video Technology, vol. 14, pp. 2130, 2004.
16. S. Lim, K. Lee, O. Byeon and T. Kim, Efficient iris recognition through improvement of
feature vector and classifier, ETRI Journal, vol. 23, no. 2, pp. 170, 2001.
17. C. Tisse, L. Martin, L. Torres and M. Robert, Person identification technique using human
iris recognition, Proceedings of the Vision Interface, pp. 294299, 2002.
18. T. Tangsukson and J. Havlicek, AM-FM image segmentation, Proceedings of the IEEE
International Conference Image Processing, pp. 104107, 2000.
19. J. Havlicek, D. Harding and A. Bovik, The mutli-component AM-FM image representation,
IEEE Transactions on Image Processing, vol. 5, pp. 10941100, June 1996.
20. B. Kumar, C. Xie and J. Thornton, Iris verification using correlation filters, Proceedings 4th International Conference Audio- and Video-Based Biometric Person
Authentication, pp.697705, 2003.
21. L. Ma, T. Tan, et al. Efficient iris recognition by characterizing key local variations, IEEE
Transactions on Image Processing, vol. 13, pp. 739750, 2004.
22. J. G Daugman, The importance of being random: statistical principles of iris recognition, In
Pattern Recognition, vol. 36, no. 2, pp. 279291, 2003.
23. W. Kong and D. Zhang, Accurate iris segmentation based on novel reflection and eyelash
detection model, Proceedings of 2001 International Symposium on Intelligent Multi-media,
Video and Speech Processing, Hong Kong, 2001.
24. L. Ma, Y. Wang and T. Tan, Iris Recognition Using Circular Symmetric Filters, National
Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences,
Beijing, 2002.
25. N. J. Ritter and J. R. Cooper, Locating the Iris: A First Step to Registration and Identification, Proceedings of the 9th IASTED International Conference on Signal and Image Processing, IASTED, pp. 507512, August 2003.
26. J. C. Goswami and A. K. Chan, Fundamentals of Wavelets: Theory, Algorithms, and Applications, John Wiley & Sons, New York 1999.
References
77
27. S. Mallat and S. Zhong, Characterization of signals from multiscale edges, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, pp. 710732, 1992.
28. S. Mallat and W. Hwang, Singularity detection and processing with wavelets, IEEE Transactions on Information Theory, vol. 38, pp. 617643, 1992.
29. L. Ma, T. Tan, Y. Wang and D. Zhang, Personal identification based on iris texture analysis,
IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, pp. 15191533,
2003.
30. L. Pan and M. Xie, Research on iris image preprocessing algorithm, IEEE International
Symposium on Machine Learning and Cybernetics, vol. 8, pp. 52205224, 2005.
31. S. Mallat, A Wavelet Tour of Signal Processing, Second Edition, Academic Press, New
York, 1998.
32. B. S. Manjunath and W. Y. Ma, Texture features for browsing and retrieval of image data,
IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, no. 8, August 1996.
33. A. K. Jain. Fundamentals of Digital Image Processing, Prentice-Hall Inc., Upper Saddle
River, 1989.
34. Chinese Academy of Sciences Institute of Automation. Database of 756 Greyscale Eye
Images. http://www.sinobiometrics.com Version 1.0, 2003.
35. C. Sanchez-Avila and R. Sanchez-Reillo, Iris-based biometric recognition using dyadic
wavelet transform, IEEE Aerospace and Electronic Systems Magazine, vol. 17, pp. 36, Oct.
2002.
36. M. Nabti and A. Bouridane, An improved iris recognition system using feature extraction based on wavelet maxima moment invariants, Advances in Biometrics, Springer
Berlin/Heidelberg, vol. 462, pp. 988996, 2007.
37. M. Nabti and A. Bouridane, An effective and fast iris recognition system based on a combined
multiscale feature extraction technique, Pattern Recognition, vol. 41, pp. 868879, 2008.
Chapter 5
5.1 Introduction
The use of wavelets in digital watermarking has increased dramatically over the last
decade replacing previously popular domains such as the Discrete Cosine Transform
(DCT) and the Discrete Fourier Transform (DFT). The main reason for this relates
to several advantages which wavelets offer over these domains, such as better energy
compaction and efficiency of computation. The Discrete Wavelet Transform (DWT)
however suffers from some disadvantages, it lacks directional selectivity so it cannot
differentiate between opposing diagonals. It also lacks shift invariance meaning that
small geometrical changes in the input signal can cause large shifts in the wavelet
coefficients. To overcome these shortcomings complex wavelets have been developed. This chapter describes two complex wavelet transform implementations and
their properties. The benefit of these properties to watermarking is also detailed.
Watermarking schemes can be roughly categorised into two main methodologies;
spread spectrum and quantisation-based schemes. Balado terms these as interference
non-rejecting and interference rejecting schemes, respectively [1]. Spread transform
has been developed as a combination between these two methodologies, spreading
the quantisation over multiple host samples through the use of a vector projection.
Spread transform embedding therefore combines the robustness gained from using
multiple host samples with the host interference rejecting nature of quantisationbased schemes allowing for higher levels of capacity to be reached.
Further as watermarking has matured as a subject area an urgent need has arisen
to objectively find the absolute performance limits of watermarking systems. To
this end the process of deriving the capacity of watermarking algorithms has been
developed by Moulin [23]. Through statistical modelling of wavelet coefficients and
the application of information and game theory it is possible to derive an estimate of
the maximum achievable performance of any given watermarking system and host
data.
This chapter first introduces the concept of spread transform watermarking and
then applies this algorithm and information theoretic capacity analysis to the case
of watermarking with complex wavelets. This will demonstrate the improved levels
of capacity that can be achieved through the superior feature representation offered
by complex wavelet transforms.
A. Bouridane, Imaging for Forensics and Security, Signals and Communication
Technology, DOI 10.1007/978-0-387-09532-5 5,
C Springer Science+Business Media, LLC 2009
79
80
h0(n)
h0(n)
h1(n)
h1(n)
2
h1(n)
h0(n)
5.2
Wavelet Transforms
81
of the wavelet transform, respectively. The two DWTs act in parallel on the same
data, one DWT acts upon the even samples of the data while the other acts upon the
odd samples. The difference and sum of these two DWT decompositions are then
taken to produce the two trees of the dual tree wavelet transform (DTWT).
If the two DWTs used are the same then no advantage is gained. However if the
DWTs are designed so as to be an approximate Hilbert transform of each other then
it is possible to obtain a directionally selective complex wavelet transform (Fig. 5.4).
This process is demonstrated in Fig. 5.3, with the scaling (h0 ) and wavelet (h1 ) filters
of the upper DWT and the scaling (g0 ) and wavelet (g1 ) filter of the lower DWT
applied recursively on their respective low-pass outputs at each level. The sum and
the difference of the high-pass subbands produced at each level are then calculated
to obtain the dual tree coefficients of the dual tree wavelet transform.
The application of the transform to 2D data follows the same methodology as
that of the DWT. Although the complex version has the advantage of excellent shift
invariance, this comes at the cost of 4:1 redundancy for 2D signals. This is due
to the use of four DWTs acting in parallel in the case of 2D data leading to 12
82
Fig. 5.4 DTWT wavelet, real (blue solid) and imaginary (red dashed)
different subbands at each level of decomposition. This places restrictions upon the
embedding algorithm as the watermark in the wavelet domain must have a valid representation in the spatial domain. As a result of the redundancy, much of the power
added to wavelet coefficients will be in the softy space of the wavelet transform and
will be lost upon re-composition. For this reason in this thesis the lower redundancy
version of the dual tree complex wavelets transform developed by Selesnick et al.
[28] is used instead. This uses only two DWTs acting in parallel for 2D data and so
has a much more manageable redundancy of 2:1 (Fig. 5.5) for 2D signals allowing
for more freedom when embedding. This decreased redundancy also makes it an
attractive option for use in compression [14].
The DTWT overcomes the problem of the DWT lacking directional selectivity. The DTWT can discriminate between opposing diagonals with six different
5.2
Wavelet Transforms
83
84
Fig. 5.7 NCWTR and NCWTC filterbanks for real and complex inputs, respectively
There are two filterbanks NCWTR and NCWTC and both consist of a real scaling
filter (h0) and two complex wavelet filters (h+ and h). The NCWTR and NCWTC
are applied to real and complex inputs, respectively (Fig. 5.7). The complex filters
h+ and h (Fig. 5.9) when applied to real input will produce wavelet coefficients
that are complex conjugates of each other, as a result one set of these complex
coefficients can be discarded. In the case of the NCWTR the output consists of one
real part and two complex outputs. As a result of their being conjugates of each
other one of these can be discarded. In the case of the NCWTC the output consists
of three complex outputs. The two complex outputs in this case are unique and both
must be kept.
The NCWTR is first applied to the real value rows of the image to be decomposed. This results in the creation of one subband of real value columns and two
subbands of complex value columns. The complex value columns are conjugates
of each other and so one can be discarded as redundant. An input of N coefficients
will thus produce 5N/3 coefficients (N/3 real and 2N/3 complex coefficients). However after the discarding of one of the complex subbands N coefficients will remain,
hence the NCWTR is non-redundant. The NCWTR is then applied to the real-valued
columns to produce one real and two complex-valued outputs. Again, one of these
complex outputs can be discarded as a conjugate leaving the real-valued LL band
and a complex value subband consisting of the horizontal features of the image.
The NCWTC is applied to the complex value rows to create three complex
value outputs consisting of the vertical and opposing diagonal features of the
image, respectively. Due to down-sampling by three the storage space required
for the three complex subbands will be the same as the original complex subband and so the NCWTC is non-redundant. The 2D NRCWT decomposition is
illustrated in Fig. 5.8.
The process is repeated on the LL band at each level to produce one real-valued
subband and four complex-valued subbands at each level of decomposition. The
subbands produced are orientated at 0 , 90 , 45 and 45 , in both real and imaginary subbands. While this offers less directional subbands than the DTWT, the
NRCWT maintains the directional selectivity of the DTWT with regard to diagonal
features (Fig. 5.11). However unlike the DTWT the transform will produce as many
coefficients as there are pixels in the original image and is therefore non-redundant
(Fig. 5.10). As a result there will be no loss of information in the wavelet coefficients
upon re-composition.
5.2
Wavelet Transforms
Fig. 5.9 NRCWT wavelet, real (solid blue) and imaginary (dashed red)
85
86
In addition the NRCWT coefficients have a high degree of phase coherency. This
means that the phase of the coefficients is coherent in places where the coefficients
have strong directional tendency.
5.3
Visual Models
87
of visual tests to derive JND values directly from the coefficients of the wavelet
decomposition. A combination of both these methods is also considered.
(5.1)
f 1 (bg (x, y), m g (x, y)) = m g (x, y)(bg (x, y)) + (bg (x, y))
bg (x, y) 1/2
f 2 (bg (x, y)) = T0 . 1
+ 3; f or bg (x, y) 127
127
f 2 (bg (x, y)) = . bg (x, y) 127 + 3; f or bg (x, y) > 127
(5.2)
(5.3)
(5.4)
(5.5)
Through visual experiments Chou derived T0 ; and were found to be 17, 3/128
and 1/2, respectively. The values bg (x, y) and mg (x, y) are the average background
luminance and luminance contrast around the pixel at (x, y), respectively. They are
obtained using the following filters:
0 0 0 0 0
0
1 3 8 3 1
0
G1 =
0 0 0 0 0 G 3 = 1
1 3 8 3 1
0
0 0 0 0 0
0
0
0
1
0
0
0
0
0
8
3
0
0
3
0 3 1
G2 =
1
G4 = 0
0
0
0 3 8
0
0
0 1
0
0
0
0
0
3
8
0
1
3
8
3
1
1
3
0
3
1
0
0
0
0
0
1
3
8
3
1
0
8
3
0
0
0
0
0
0
0
0
0
0
(5.6)
88
1
p(x 3 + i, y 3 + j).G k (i, j)
16 i=1 j=1
5
(5.7)
1
1
B(i, j) =
1
1
1
5
5
1
bg (x, y) =
p(x
32 i=1 j=1
1
2
2
2
1
1
2
0
2
1
1
2
2
2
1
1
1
1
1
3 + i, y 3 + j).B(i, j)
(5.8)
JNDq2 (x, y) =
3
3
(5.9)
i=0 j=0
for q = 0,1,. . .,15 and 0 <= x <= N/4, 0 <= y <= N/4.
JNDq (x, y) represents the JND value at position (x, y) of the qth subband. The
factor q is calculated as follows:
q =
Sq .
1
15
Sk1
(5.10)
k=0
Sk denotes the average sensitivity of the HVS to distortions in the kth subband.
It is calculated as
16
Sk =
N .N
(k +1)h1
( pk +1)w1
u=k .h
(u, v)
for k = 0, 1, . . . ., 15
(5.11)
v= pk .w
(u, v)denotes the response curve of the modulation transfer function (MTF)
for 0<=u<=N, 0<=v<=N. Chou [22] proposes the following formula for its
calculation:
(u, v) c
(u, v)
. exp
(5.12)
(u, v) = a. b +
0
0
where
(u, v) =
32v
N
+
24u
N
2 1/2
(5.13)
5.3
Visual Models
89
is derived from the MTF curve modelled by a=2.6, b=0.0192, c=1.1 and
0 =8.772.
The model as originally proposed by Chou has a linear subband structure that is
not suitable for the subband structure of the discrete and complex wavelet transforms
(Fig. 5.12).
This linear subband structure must first be altered to fit the multi-resolution
nature of wavelet decomposition subbands that vary in size according to level of
decomposition. Ghouti and Bouridane [18] propose the subband structure shown in
Fig. 5.13 for the balanced multi-wavelet transform.
Due to the similarity of the DWT and BMW wavelet decompositions this subband decomposition is also used for DWT embedding. The linear subbands are
resized through a variation of Eq. 5.9 given by Eq. 5.14. However this decomposition is not applicable to the subbands produced by the complex wavelets and so
different channel decompositions are proposed.
JNDq2 (x,
t
t
2
1 2
1
y) =
JND 2f b (i
+ x.2 , j + y.2 ) .q
t
i=0 j=0
for q = 0,1,. . .,15 and 0 <= x <= N/2t , 0 <= y <= N/2t
(5.14)
90
and
t = 5 p1
, if
0 < p 15
3
t = 5, i f
p=0
For the DTWT each section of the dual tree decomposition is treated as being
of the same channel subband. The same subband weight is applied to opposing
halves of the dual tree composition as they are of opposing orientations and the
same frequency and so can be treated identically in this respect (Fig. 5.14).
Besides, to take into account the improved directionality of the DTWT a different
set of filters are used to obtain the value of m with G1 and G2 orientated at 15 and
+ 15 , respectively and G5 and G6 orientated at 75 and +75 , respectively. G3 and
G4 are the same diagonally orientated filters used for the case of the DWT.
0
1
G1 =
0
0
0
0
0
G3 =
1
0
0
0
3
0
0
0
1
3
0
3
1
0
0
0
3
0
0
8
3
0
0
1
3
0
3
1
0
0
3
8
0
0
0
G5 =
1
0
0
0
0
3
3
1
0
0
0
0
0
0
0
0
1
0
G2 = 0
1
0
0
0
0
0
1
G 4 = 1
0
0
0
0
1 0
0
0
3 0
3 1
G 6 = 1
0
0 0
0 0
0
0
0
0
3
0
1
3
0
3
1
0
3
0
0
0
0
0
3
8
0
1
3
0
3
1
0
8
3
0
0
1
3
3
0
0
0
0
0
0
0
0
0
3
3
1
0
1
0
0
0
0
0
0
0
0
0
0
5.3
Visual Models
91
max
k=1,2,3,4,5,6
(5.15)
As the NRCWT down-samples by three at each level and has only four subbands at each level the decreased number of subbands must be taken into account.
Imaginary and real parts of each subband are considered as being in the same
channel leading to four different channels at each level and one low-pass subband
for a total of 13 channels. Therefore the subband structure shown in Fig. 5.15 is
used.
Equation 5.16 is then altered to take into account the reduction in the number of
channels in the subband decomposition and the down-sampling by three instead of
two at each level
t
t
3
1 3
1
JNDq2 (x, y) =
i=0 j=0
for q = 0,1,. . .,12 and 0 <= x <= N/3t , 0 <= y <= N/3t
and
t = 3 p1
, if
0 < p 12
4
t = 3, i f
p=0
(5.16)
92
Fig. 5.16 Chous JND profile (top) and Loos JND profile (bottom) for DTWT of Lena image
The factor q is also calculated to take into account the reduction in channels
from 15 to 12.
q =
Sq .
12
1
Sk1
(5.17)
k=0
The main disadvantage of Chous model is that it is less capable at adapting the
watermark to complex textured areas of the image due to the simplicity of the filters
employed by Chou. A visual description of both models is shown in Fig. 5.16.
5.3
Visual Models
93
(5.18)
where k and C are subband-dependent constants, dependent on the level l and orientation , respectively. The value x is the absolute mean value of a 33 Gaussian
window of standard deviation 0.5 centred round the coefficient at position (u, v).
B is a measure of the spatial brightness corresponding to the coefficient at position
(u, v).
In regions with a lot of texture the term k2 x(u, v)2 dominates the equation substantially increasing the value of the corresponding JND value. In the absence of
texture the JND will be dependent upon the term C, which decreases sharply when
level of decomposition increases. The factor B is a measure of the local brightness
and is calculated as detailed in Eqs. 5.19, 5.20 and 5.21 for the DWT, DTWT and
NRCWT, respectively where y represents the value of the level 5 low-pass coefficient corresponding to position (u, v) normalised to fall within the range [0,1].
The rest of the equations are approximated using quadratic regression based upon
measurement of the visibility of watermark noise at different levels of background
brightness.
B = 2.12(y(u, v) 0.56)2 + 1
(5.19)
B = 2.03(y(u, v) 0.53)2 + 1
(5.20)
B = 2.43(y(u, v) 0.55)2 + 1
(5.21)
The visual tests were conducted by setting all subbands of an image to 0. Then all
the values in the subband under consideration were set randomly distributed in the
range [0, n]. The image was then recomposed and added to a sine wave grating of
the appropriate frequency and orientation. The value n was then increased uniformly
until the distortion became visible. Using the value of n and the average value of the
coefficients composing the sine wave grating, an estimate of k for each level and
orientation was derived. The tests were repeated with different amplitudes of sine
wave gratings to obtain varied results for multiple values of x. The results for the
DWT, DTWT and NRCWT are shown in Table 5.1. All visual tests were conducted
with a gamma correction value of 2.1, a resolution of 32 pixels/cm and a viewing
distance of 30 cm. Three subjects took part in the tests and the results obtained from
each averaged to get the final factors.
94
DWT-k
DWT-C
DTDWT-k
DTDWT-C
NRCWT-k
NRCWT-C
Level 1 Diag.
Level 1 Hor./Ver.
Level 2 Diag.
Level 2 Hor./Ver.
Level 3 Diag.
Level 3 Hor./Ver.
Level 4 Diag.
Level 4 Hor./Ver.
Level 5 Diag.
Level 5 Hor./Ver.
0.33
0.20
0.25
0.14
0.25
0.11
0.18
0.11
0.18
0.11
5
3
4
2
1
1
1
1
1
1
1.00
0.60
0.50
0.25
0.25
0.21
0.20
0.195
0.195
0.19
6
4.5
1.5
1
1
1
1
1
1
1
0.25
0.16
0.15
0.145
0.145
0.14
3
2
2
1
1
1
However it also suffers from a spreading effect around finer details such as
edges which can increase watermark visibility around these features. An additional
drawback is that a separate set of visual tests must be conducted for each individual
wavelet transform used.
(5.22)
This Hybrid model combines both Chous models precise approximation of feature edges combined with Loos models excellent approximation of textured image
regions. However it comes at the computational cost of having to calculate both JND
models.
2
1
log2 1 + w2
2
v
(5.23)
5.4
95
where 2 w is the variance of the watermark and 2 v is the variance of the attack. The
statistics of the host therefore have no effect on the capacity in this case. However
in blind scenarios where knowledge of the host data is not available the capacity
is limited by interference from the original host data. As in most watermarking
scenarios the encoder will have knowledge of the host, the watermarking scenario
can be viewed as a communications problem with side information at the encoder.
In his landmark paper Costa [9] likened the problem of communication across
a noisy channel in the face of host interference to writing on dirty paper. It was
demonstrated that if the encoder has knowledge of the host channel then the capacity
of a blind communication scenario can be independent of the host interference, and
does not depend on whether the decoder has access to knowledge of the original
host channel. Eggers and Girod [12] extend Costas scenario to watermarking by
modelling the watermarking process as communication over a noisy channel with
side information at the encoder.
The result obtained by Costa can be adapted to the case of watermarking as illustrated in Fig. 5.17. The host data x and the distortion v are assumed to be independent and identically distributed (iid) and Gaussian distributed, i.e. xN(0, 2 x ) and
vN(0, 2 v ), respectively, and of length N. The message to be encoded m is taken
from the alphabet M. The algorithm proceeds as follows:
code book U
x
U1
U2
.....
UM
w
Search in
Um
z
+
m'
Search in
U
96
3. At the decoder stage a search is conducted for the sequence u that is orthogonal
to the sequence received after the attack has been applied, z. The index m of the
sub-codebook in which this sequence is found then determines the message that
has been transmitted.
Using this method it can be shown that the capacity of the blind watermarking
system is equal to Eq. 5.23 and so the capacity is not limited by the absence of
knowledge of x at the decoder stage. However Costas scheme is not realisable in
practise as the codebook U tends to become extremely large even for moderate data
lengths N and alphabets M.
(5.24)
The watermark w is then the quantisation error from applying the appropriate
scalar quantiser which can be calculated as in Eq. 5.25.
w = (Q b (x) x)
(5.25)
The estimate of the decoded bit b, can then be obtained through the use of a minimum Euclidean distance decoder using the same scalar quantiser and the received
data y.
b = arg min y Q b (y)2
(5.26)
0,1
5.4
97
Eggers and Girod then further extended QIM to create what they termed the
Scalar Costa Scheme (SCS) [11]. This involves the use of a distortion compensation
factor . Chen and Wornell [7] added a similar extension to their scheme terming it
distortion compensated QIM (DC-QIM). This increases the size of the quantisation
bins used in return for a decrease in the accuracy of the quantisation and is best
used for high levels of attack. Once quantised with the expanded quantisation bin
the extra distortion is compensated for by adding part of the quantisation back to the
host data x which results in the same overall distortion as the case where =1.
x = q(x; /) + (1 )[x q(x; /)]
(5.27)
This sub-optimal implementation of Costas scheme QIM has the same advantages as Costas approach in that its performance is independent of the interference
from the host data.
r
n=1
x n tn
(5.28)
The vector projection is then quantised in a similar fashion to that of the unidimensional QIM case with the host samples being quantised in the direction of the
vector v. The addition of the watermark vector uw creates the watermarked vector uy
which lies in the appropriate quantisation bin for the bit to be embedded.
u y = u x + uw
(5.29)
98
Fig. 5.19 Quantisation Index Modulation and Spread Transform encoding of symbols X and O in
2 host samples for =2 and Spread Transform vector t
u w = Q b (u x ) u x
r
n=1
yn tn =
r
n=1
x n tn +
(5.30)
r
n=1
w n tn
(5.31)
5.5
Proposed Algorithm
99
m = {m1,m2,.....,mL}
I
Forward
WT
Calculate
JND profile
Calculate
Watermark
Vectors
K
w'
w
x
Calculate
JND profile
y
Inverse
WT
Attack
channel
J'
Forward
WT
' Calculate
Watermark
Vectors
Dequantization
m'={m1',m2',.....,mL'}
(5.32)
Individual elements of the watermark vector are then scaled by the size of their
corresponding JND value () as shown in the following equation.
100
wn =
wn
n
r
n
n=1
This novel method of perceptually shaping the watermark vector ensures that
coefficients with higher JND values will contribute more towards the vector projection, thereby visually masking the introduced watermark distortion. Finally,
the watermark vector elements are added to those of the host data to create the
watermarked data y.
yn = xn + wn
8. The watermarked wavelet subbands are then inverse transformed to give the
watermarked image in the spatial domain.
5.6
101
(5.35)
where WNR1 is the WNR when using only one sample and WNRr is the WNR
when using r samples. It should be noted that increasing the size of the spread
vector r comes at the cost of decreasing the maximum number of bits that can be
embedded. For example with r = 2 the maximum capacity possible is limited to 0.5
bits/elements. The capacity when faced with additive white Gaussian noise (AWGN)
C AWGN of spread transform (ST) data hiding can be calculated from the capacity of
embedding without spread transform (QIM, r = 1) as follows:
AW G N
=
C ST,
r
AW G N
(WNR + 10 log10 r )
C ST,1
(5.36)
(5.37)
where y is the data received by the decoder, d is the alphabet of messages that may
be embedded equal to 0 or 1 for binary data embedding and I is the mutual information. The optimum value of the distortion compensation factor alpha is applied.
The solution to (5.37) is obtained through a comparison of the PDFs of the transmitted data assuming different alphabet values d were transmitted. This solution to
obtaining the mutual information is given by [10] as
I (y; d) = h(y) h(y|d)
(5.38)
The PDFs used in the calculation are illustrated in Fig. 5.21. Finally the power
of the watermark distortion introduced by quantisation is calculated by [7] as
2
E q2 =
12
(5.39)
102
d=0
d=1
Fig. 5.21 PDFs of data before and after attack is applied, = 7, v = 1 for two different possible
transmitted watermark values 0 and 1
5.6
Fig. 5.22 EQ 256 parallel Gaussian channels for DTWT decomposition of Lena image
Fig. 5.23 EQ 256 parallel Gaussian channels for NRCWT decomposition of Lena image
103
104
Fig. 5.24 EQ 256 parallel Gaussian channels for DTWT decomposition of Baboon image
Fig. 5.25 EQ 256 parallel Gaussian channels for NRCWT decomposition of Baboon image
5.6
105
Each channel is assumed to be iid and Gaussian with zero mean and variance
2 k . Each channel has an inverse sub-sampling rate R k . For all transforms channels
are critically sampled so that
K
Rk = 1
(5.40)
k=1
Smoother less detailed images like Lena will tend to have lower rates for higher
power channels with most of their coefficients being concentrated in lower energy
channels particularly at high frequency levels. However more complex textured
images like Baboon will tend to have high power channels consisting of many
coefficients. For all images the higher energy channels will tend to be concentrated
at lower frequency levels, and in textured areas of the image where the coefficients
are larger due to the concentration of image content in these regions. As will be seen
in the next section these channels tend to offer the highest capacities.
(5.42)
ak = E v 2
(5.43)
where x is the host data vector, w the watermark vector and v the attack vector. The
power of the Gaussian channels pk and the number of coefficients they contain is
shown in Figs. 5.26 and 5.27. Note that more textured images such as Baboon and
Barbara will tend to have more high power channels as the textured regions are
represented by large coefficients in the wavelet decomposition. Smoother images
such as Lena and Pepper will tend to have less detail and so smaller coefficients
when decomposed.
106
Fig. 5.26 EQ 256 parallel Gaussian channels for Lena and Peppers image for DWT (solid), DTWT
(dotted) and NRCWT (dashed)
Fig. 5.27 EQ 256 parallel Gaussian channels for Barbara and Baboon image for DWT (solid),
DTWT (dotted) and NRCWT ( dashed)
Also for all images the NRCWT decompositions tend to have more high power
channels, this is due to the ability of the NRCWT to well represent image features,
especially textured regions. The same trend can be seen for the DTWT which will
also have higher power channels as its improved directional nature allows for better
representation of diagonally orientated features within the host image. More textured images like Baboon and Barbara will also tend to have flatter power distributions as they represent a wider range of wavelet coefficients.
For the capacity estimates to be meaningful distortion constraints are imposed
upon both the embedder and the attacker. For the channel model under consideration
the global embedder and attacker distortions across all channels are given as
K
rk k ek = D1
(5.44)
rk k ak = D2
(5.45)
k=1
K
k=1
5.6
107
where is the distortion modifier for channel k dependent upon the orientation and
level of the coefficients in the channel, e and a are the weighted MSE of the attack
and embedding strategy, respectively. Wavelet coefficients are normalised before the
analysis so is 1 in all cases.
The three local distortion constraints placed upon the embedder and attacker
within each channel are
0 ek
(5.46)
ek ak
(5.47)
a k pk
(5.48)
The attack is applied as the additive white Gaussian noise with amplitude scaling
attack (SAWGN). The SAWGN attack involves the application of an optimised mix
of both amplitude scaling and then the addition of AWGN to the watermarked data.
If the attack distortion a is equal to p then the attacker can scale by 0 effectively
erasing the channel completely and so the attack distortion never needs to be greater
than pk . The attack model applied differs from the analysis in [23] in that amplitude
scaling is also applied by the embedder after the watermark has been added making
the watermark process a more general one. But in practical situations embedding
distortion is a small fraction of the original power in a channel, this has little effect
on the results.
The total capacity of all the parallel Gaussian channels and so the images as a
whole is then given by the maximisationminimisation relation shown in (5.49).
C = max min
ek
ak
K
rk CkS AW G N ( pk , ek , ak )
(5.49)
k=1
Equation 5.49 is solved through the application of game theory. The maxmin
relation is viewed as a game across the parallel Gaussian channels where each side
attempts to maximise their advantage in every channel. An optimisation algorithm
is applied to find the saddle point condition where both embedder and attacker are
applying their optimal strategy across all channels and it is no longer beneficial for
either side to alter its strategy. While the optimal attack can be calculated numerically for any given embedding strategy, the embedding strategy must be calculated
through simulated annealing. Allocation of pk , ek and ak as well as per channel
capacity are shown in Fig. 5.28 for the case of a moderate attack applied to the Lena
image and with the DWT decomposition. Graphs using the other wavelet transforms
and images are of similar shape and are neglected here.
As demonstrated in Fig. 5.28 the attacker is able to match and erase most of the
low power channels. As such it is not beneficial for the embedder to concentrate
much watermark strength in these channels. By contrast in the higher power channels the attacker can only afford to allocate a fraction of the channel power. These
are the channels in which it is also beneficial for the embedder to concentrate most
108
Fig. 5.28 Allocations of p, e and a and capacity for channels 1256 of Lena image when decomposed using the DWT and experiencing a moderate attack
of the watermark energy to take advantage of the relatively weak attack applied in
these channels. This runs counter to the strategy employed in many watermarking algorithms where low frequency features are ignored [21]. Capacity results
suggest that this results in a drastic lowering of the possible capacity of watermarking algorithms.
Results for the capacity analysis are given in Table 5.2 as the total capacity of the
image (NC). These results are compared against those obtained by Ghouti for the
DWT [16] and NRCWT [15] when applying the general capacity analysis model
in Table 5.3. It should be noted that the analysis in [15] applies the NRCWT to
four levels of decomposition rather than three, but the low number of coefficients
Table 5.2 Total spread transform data hiding capacities in bits for images of size 512512
D2 = 2D1
D2 = 5D1
Image
D1
NC
NC-Spike
NC
Lena (Daub-8)
Lena (9/7 Linear phase filters)
Lena (DTWT)
Lena (NRCWT)
10
20138
19196
25859
37086
17140
17861
26717
31216
2042
1847
2806
4812
3873
4087
4698
7596
Baboon (Daub-8)
Baboon (9/7 Linear phase filters)
Baboon (DTWT)
Baboon (NRCWT)
25
48580
48300
52229
60324
49858
50142
53038
66044
7953
7813
9315
11842
10970
11077
11991
14894
Peppers (Daub-8)
Peppers (9/7 Linear phase filters)
Peppers (DTWT)
Peppers (NRWT)
10
26963
27026
33304
49523
30064
29702
35121
50151
2581 4574
2470 4668
3741 5944
7370 10876
Barbara (Daub-8)
Barbara (9/7 Linear phase filters)
Barbara (DTCWT)
Barbara (NRCWT)
20
21181
21528
25150
37663
27301
27642
29737
38538
2767
2794
3503
5785
NC-Spike
5166
5112
5452
8703
5.6
109
Table 5.3 General data hiding capacities in bits for images of size 512512 [16]
D2 = 2D1
D2 = 5D1
NC
Image
D1
NC
NC-Spike
Lena (Daub-8)
Lena (9/7 Linear phase filters)
Lena (NRCWT)
10
27664 22080
27233 21714
37512 30979
Baboon (Daub-8)
25
Baboon (9/7 Linear phase filters)
Baboon (NRCWT)
26347 26148
24212 25218
61394 57473
Peppers (Daub-8)
Peppers (9/7 Linear phase filters)
Peppers (NRCWT)
10
19422 20708
16922 17852
44004 33917
3042
2790
7127
4344
3962
6875
Barbara (Daub-8)
Barbara (9/7 Linear phase filters)
Barbara (NRCWT)
20
22840 24495
18289 20026
39045 37118
3683
2868
7041
5475
4531
8081
3677
3651
6061
NC-Spike
4818
4589
6674
4018 5455
3781 5842
12555 11976
in the low-pass level 3 NRCWT subband means that this will have little effect on
the results. Also given are results for the NC-Spike model [23] where a 2-channel
rather than 256-channel model is considered instead.
The subjective levels of distortion allocated to the embedder D1 are the same
as those employed in [23, 17]. These are 10 for Lena and Peppers, 20 for Barbara
and 25 for Baboon. More textured images can tolerate more noise before the noise
becomes visible. The attacker is then allowed to apply two different attack strengths
D2 . These are adjusted relative to the embedding distortion and are 2D1 and 5D1 .
The NRCWT produces the highest capacity estimates. This is a direct result of it
producing more high power channels than the other wavelet transforms. The DTWT
produces the next highest capacity estimates as it still produces more high power
channels than the DWT. This is due to the improved ability of these wavelet transforms to represent the host image in the wavelet domain. Higher power channels
allow for greater robustness against the scaling introduced by the attacker and so
higher per channel capacity.
The Baboon image produces the highest capacity results, followed by the Peppers image and then the Barbara and Lena image. This can be explained by reference to the characteristics produced by the wavelet decompositions of these
images. The large textured areas of the Baboon image produce a lot of large coefficients that lead to many high power and hence high capacity channels. By contrast the smoother images will have smaller coefficients and so fewer high power
channels.
It should be noted that a deficiency of this analysis is that it employs a simplification in that the host data is assumed to be uniform within each spread transform
quantisation cell. Essentially this is the equivalent of regarding the host power pk as
being infinite in each channel; an assumption that leads to an under-estimation of
the true performance of ST watermarking. This deficiency will be addressed in the
next section.
110
1 2
(5.50)
1 + (1 )2
where
= 10(WNR/10)
(5.51)
= 10(DWR/10)
(5.52)
The optimum value of for DC-SS can then be calculated as that which minimises the probability of error:
DCSS
1 + + (1 + + )2 4 2
=
2
1
2
5.6
111
1
C DCSS (, , )
= log2 (1 + S NR DCSS )
2
(5.54)
As shown in Eq. 5.55 the spreading factor r affects the WNR. It also has a corresponding effect on the DWR, effectively decreasing the DWR:
DWRr = DWR1 10 log10 r
(5.55)
At sufficiently low DWR, DC-SS will offer improved levels of performance over
that of standard ST embedding. Table 5.4 shows capacity estimates obtained taking
into account the improved performance offered by DC-SS in the case of low DWR
channels. In all cases where the capacity is increased, the increase is relatively more
significant for the DWT, as it will have more low power channels. However, the
same trend of the NRCWT and the DTWT producing superior capacity estimates
remains.
Table 5.4 Total DC-SS data hiding capacities in bits for images of size 512512
D2 = 2D1
D2 = 5D1
NC
Image
D1
NC
% Increase
10
22923 13.83
22191 15.60
29101 12.58
40079 8.07
Baboon (Daub-8)
Baboon (9/7 Linear phase)
Baboon (DTCWT)
Baboon (NRCWT)
25
49890
49447
53061
60770
Peppers (Daub-8)
Peppers (9/7 Linear phase)
Peppers (DTCWT)
Peppers (NRCWT)
10
31319 16.16
31449 16.37
35938 7.91
50562 2.10
3078
2964
4616
8728
19.26
20.00
23.39
18.43
Barbara (Daub-8)
Barbara (9/7 Linear phase)
Barbara (DTCWT)
Barbara (NRCWT)
20
22742
23168
27270
39360
3149
3184
3915
6371
13.81
13.96
11.76
10.13
2.70
2.37
1.59
0.74
7.37
7.62
8.43
4.51
2380
2164
3321
5560
% Increase
16.55
17.16
18.35
15.54
8830 11.03
8733 11.78
10088 8.30
12408 4.78
112
components as indicated in Section 5.6.3 will usually lead to too great a perceptible
distortion. However, when using the basic MSE distortion metric these perceptible
distortions can be compensated for by neglecting the higher frequency components.
The optimised embedding strategies take no account of the requirement for imperceptibility, concentrating the watermark energy into areas where it may become
visible while neglecting lower power channels that may have higher perceptual limits. For this reason, in this section, the JND models derived earlier in the chapter
are taken into account when applying the capacity analysis, rather than applying the
optimised embedding strategies derived.
The embedder can take perceptual constraints into account by allocating the
embedding strength ek to channels based on a fixed rather than optimal embedding strategy. In addition to the two JND models described earlier, both PSC (power
spectrum condition) compliant watermarking and white embedding are also taken
into account for comparison. The optimal attack is calculated for the fixed embedding strategy used and the capacity is then calculated as detailed earlier. The fixed
embedding strategy is restricted to the same amount of distortion used in Section
5.6.3. The five embedding strategies analysed are:
1. Embedding energy allocated optimally as calculated in Section 5.6.3.
2. Embedding energy allocated proportionally to JND profile derived by Chous
method.
3. Embedding energy allocated proportionally to JND profile derived by Loos
method.
4. Embedding energy allocated proportionally to the original power of the host
channel.
5. Embedding energy allocated evenly across all channels. This creates a flat allocation of embedding distortion across all channels with each receiving the same
amount of embedding distortion regardless of host channel power.
The capacities produced by each of these embedding strategies for all four
images and the different wavelet transforms is shown in Fig. 5.29. For low details
images like Peppers and Lena Chous JND is closer to the optimum embedding
allocation. This is due to the ability of Chous JND to more effectively isolate
edges in the images. By contrast, for higher detail images like Baboon and Barbara Loos JND is closer to the optimal allocation. This is due to the weakness of Chous JND when it comes to modelling the large areas of texture in
these images. However due to being based on the wavelet coefficients, Loos
JND is able to take advantage of the coefficients accurate modelling of textured
regions.
It is also interesting to note that in the cases where Loos JND performs better than Chous JND the white embedding performs better than the PSC compliant embedding. This is due to the flatter host power distributions found in textured
images rather than smoother images which tend to have a peak in the power distribution instead. White embedding better approximates this flat power distribution.
References
113
6000
5000
4000
3000
2000
1000
0
9000
8000
7000
Optimized
Chou
Loo
PSC
White
Lena
Baboon
Peppers
Image
Bits
Bits
9000
8000
7000
6000
5000
4000
3000
2000
1000
0
Barbara
Baboon
Peppers
Image
Barbara
14000
12000
10000
6000
5000
4000
3000
Optimized
Chou
Loo
PSC
White
Bits
Bits
Lena
9000
8000
7000
2000
1000
0
Optimized
Chou
Loo
PSC
White
Optimized
Chou
Loo
PSC
White
8000
6000
4000
2000
Lena
Baboon
Peppers
Image
(c) DTWT
Barbara
Lena
Baboon Peppers
Image
Barbara
(d) NRCWT
Fig. 5.29 Capacity estimates for fixed embedding and optimised embedding strategies
5.7 Conclusion
By applying the principles of spread transform embedding the benefits of both quantisation and spread spectrum are combined in the proposed system. This chapter
has demonstrated theoretically the improved levels of performance offered by the
DTWT and NRCWT combined with the high capacities offered by spread transform embedding. This is due to the higher power channels offered by the NRCWT
and DTWT due to their superior ability to represent the features of the host image.
Further, the analysis clearly shows the areas of the image such as textured and low
frequency components, into which watermarks should be embedded to maximise
the capacity.
Finally, the case of non-iid data was considered as well as the application of fixed
embedding strategies to the theoretical analysis.
References
1. F. Balado, Digital Image Data Hiding Using Side Information, PhD thesis, University of
Vigo, Spain, 2003.
2. J. J. Chae, B. S. Manjunath, A robust data hiding technique using multidimensional lattices,
Proceedings of the IEEE Conference on Advances in Digital Libraries, April 1998.
3. B. Chen, G. Wornell, Digital Watermarking and Information Embedding Using Dither Modulation, Proceedings of the IEEE Workshop on Multimedia Signal Processing (MMSP-98),
pp. 273278, Redondo Beach, CA, USA, December 1998.
114
References
115
Chapter 6
6.1 Introduction
Although biometric-based systems require more challenges (time, hardware,
software, etc.) to crack when compared to traditional systems, several breaches
of security exist and hackers can apply different attacks in order to get illegal
access, especially in remote unattended applications such as e-commerce applications, where they have enough time to make numerous attempts before being
noticed. In addition, biometric-based systems are much more complicated than traditional systems and as a result, there exist several critical points that can be compromised and used to violate the security of such systems. Ratha et al. [1] describe
eight basic types of possible attacks on such systems and the position of each type
of attack in the system is illustrated in Fig. 6.1. These eight types of attack are:
Type 1: a fake biometric is presented to the sensor (e.g. dummy fingerprint, lens
with fake iris, mask face). The attacker creates a copy the biometric of a genuine
user with or without his/her cooperation.
Type 2: also called replay attack because an old digitally recorded biometric data
is replayed into the system bypassing the sensor and this is done with or without
the cooperation of the owner of the biometric data.
Type 3: the feature extractor is attacked with a Trojan horse programme which
produces features chosen according to the hacker s specification.
Type 4: the genuine features extracted by the feature extractor are replaced by
other features (synthesised or real) given by the attacker.
Type 5: the matcher can be attacked with a Trojan horse programme to produce
the desired score of the attacker.
Type 6: an attacker tries to get access to the database in order to insert, modify
or delete the stored templates.
Type 7: the templates are intercepted and tampered with while transmitted from
the database to the matcher.
Type 8: the final score is overridden with the score chosen by the attacker, who
can either allow access to himself or/and other intruders (forcing the score to
accept) or denying the access of legitimate users (forcing the score to reject).
A. Bouridane, Imaging for Forensics and Security, Signals and Communication
Technology, DOI 10.1007/978-0-387-09532-5 6,
C Springer Science+Business Media, LLC 2009
117
118
Type 6
Database
Type 7
Feature
Extractor
Sensor
Type 1
Type 2
Type 3
Matcher
Type 4
Type 5
Yes/No
Type 8
Schneier [2] finds that the biometric data does not provide secrecy because it is
not a secret (e.g. leaving fingerprint impressions on almost everything we touch,
face images can be easily obtained by hidden cameras, iris features can be observed
everywhere we look) and not replaceable (e.g. once someone gets or steals your
biometric data, it remains stolen and cannot be replaced like passwords or cards).
Also, Schneier points out that using the same biometric trait (due to the limited
number of useful biometric traits and available biometric systems) across different
applications makes them unsecure once this biometric trait is stolen (e.g. if someone
uses his fingerprint to start his car, open his office door and read his emails, all these
functions are accessible by an attacker once he/she steals or forges that fingerprint).
In their work, Maltoni et al. [3] describes six threats for a typical biometric-based
system. In circumvention, an attacker gets access to a part of the system protected
by the biometric application. This threat can be cast as a privacy attack, where
an attacker accesses data that he/she has no right to access (e.g. medical records,
personal details), or as a subversive attack, where an attacker can manipulate the
accessed data (e.g. delete some records, changing personal details of other users).
In repudiation, a user denies accessing the system and then claims that an attacker
has circumvented the system (e.g. a corrupt bank clerk can illegally modify some
costumers records and then claim that his biometric was stolen and used by someone else, or he/she can argue that the False Accept Rate (FAR) associated with the
system, allowed an intruder to access his/her account). In contamination (covert
acquisition), an attacker can surreptitiously obtain the biometric data of genuine
users (e.g. lifting latent fingerprint from object, taking face pictures by hidden camera) and use it to construct a digital or physical artifact of that data (e.g. construct
dummy finger using the lifted latent fingerprint, make face mask using the face
pictures). In collusion, a legitimate user with super access privileges, such as system administrator, can be the attacker and illegally modify the system parameters
allowing the access of intruders. In coercion, attackers force legitimate users (e.g. at
gunpoint) to grant them access to the system. In denial of service (DoS), an attacker
corrupts the authentication system to a point where the legitimate users cannot use
it (e.g. an attacker bombards an online server that processes access requests with
large number of requests to a point where it cannot process the requests of genuine
users).
Deployment of watermarking techniques can be useful to increase the security to
the biometric data at different levels. This is achieved by embedding a watermark
6.2
119
signal to the host data to be watermarked such that the watermark signal is unobtrusive and secure in the signal mixture but can partly or fully be recovered from the
signal mixture later on.
For example, watermarking of fingerprint images can be deployed to: (i) protecting the originality of fingerprint images stored in databases against intentional
and unintentional attacks; (ii) fraud detection in fingerprint images by means of
fragile watermarks (which do not resist to any operations on the data and get lost,
thus indicating possible tampering of the data); (iii) guaranteeing secure transmission of acquired fingerprint images from intelligence agencies to a central image
database, by watermarking data prior to transmission and checking the watermark
at the receiver site.
This chapter discusses a comparative study of the generalised Gaussian,
Laplacian and Cauchy models for use in the 1-bit multiplicative watermarking of
fingerprint images. The optimum watermark detection is based on information theory, in which the decision rule is derived by using the maximum-likelihood (ML)
scheme, while the decision threshold is derived by using NeymanPearson criterion.
Such an optimum detection is based on the parameters of a probability distribution
function (pdf) which must model the statistical behaviour of the DWT coefficients
accurately.
In the next section, a brief introduction to watermarking is given. In Section
6.3, state-of-the-art fingerprint image watermarking available in the literature is
reviewed. Then, the problem of optimum detection is formulated based on information theory in Section 6.4. Section 6.5 describes the different distributions used to
model the DWT coefficients namely the generalised Gaussian distribution (GGD),
the Laplacian distribution and the Cauchy distribution. The DWT modelling and the
detection performance are evaluated through extensive experiments of Section 6.6.
Finally, conclusions are drawn in Section 6.7.
(6.1)
Possibly, it may also depend on the host image X into which it is embedded:
W = f 0 (I, K , X )
(6.2)
120
Watermark
W
Image
X
Watermarking
encoder f 1
Watermarked image
Y
Secret key
K
Watermark W and /
or original image X
Watermarked image
Y
Watermarking
decoder f 2
Watermark I
Secret key
K
(6.3)
Design of the corresponding extraction method to recover the watermark information from the signal mixture using the key K and the original image X (see
Fig. 6.3),
I = f 2 (X, Y, K )
(6.4)
I = f 2 (Y, K )
(6.5)
Although, every watermarking system has its own requirements, there is no set
of requirements that meets all watermarking techniques. Nevertheless, some general
requirements can be given for wide range of systems. These requirements are:
Perceptual transparency: is one of the main requirements in most applications.
The data embedding process should not introduce any perceptible artifacts into
the host data. A watermark embedding is truly imperceptible if humans cannot
make the difference between the original and the watermarked data. However, the
modifications introduced by watermarking are only noticeable when the original
data is directly compared against the watermarked data, which is not the case in
most applications since the users of the watermarked data are normally unable
to access the original data. For this purpose, some form of masking is usually
used. For example, in image watermarking system, the Human Visual System
(HVS) characteristics can be used. Similarly, the frequency masking properties
6.2
121
of the Human Auditory System (HAS) can be considered when designing audio
watermarking systems.
Robustness: is another main requirement in the design of many watermarking
applications, especially those requiring a permanent presence of the watermark
in the host data, even if the quality of the host data is degraded through manipulations and modifications which can be applied intentionally or unintentionally. In
the first case, the watermarked data is subject to some data processing to extract
or remove the watermark. In the second case, alterations are introduced without the intention of removing the watermark. For example, applications which
involve data storage or transmission usually use lossy compression to reduce the
size of data. However, lossy compression affects the quality of the watermark
and can remove the embedded watermark. Note that some applications use fragile watermarks for use to prove the authenticity of the host data and as such are
not robust against manipulations, since failure to detect the watermark proves
that the host data has been tampered with and is thereby no longer authentic.
Capacity: refers to the number of bits that can be embedded in an image and
it depends on the application at hand. In the literature, two different types of
watermarking systems can be found. In the first type referred to as 1-bit watermarking, systems embed a specific information or pattern and check the existence
of that information later on in the watermark recovery. This is usually achieved
by employing some hypothesis testing method. The second type referred to as
multi-bit watermarking, embeds an arbitrary information such as serial number,
ID, tracking number, etc., into the host data and a full extraction of the hidden
information is necessary at the watermark recovery stage. A 1-bit watermarking
is sufficient for most copyright-protection applications while a multi-bit watermarking is usually used in applications such as the protection of intellectual property rights, fingerprinting, copy tracking, etc. Although most existing methods or
systems are developed for either watermark extraction or watermark detection, it
should be noted that in fact both approaches are inherently equivalent. A scheme
that considers a 1-bit watermarking can be extended to any number of bits and
the inverse is true [4].
Security: similar to the case of encryption techniques, a watermarking technique
is truly secure if knowing the exact algorithms for embedding and extracting the
watermark does not help an unauthorised party to detect or remove the watermark
[5]. In most cases, the security of a watermarking technique is guaranteed by
using one or several secret and cryptographically secure keys in the embedding
and extraction process. These secret keys can be used to generate the watermark
sequence and/or determine the locations where the watermark is embedded.
Blind vs. non-blind watermarking: in some applications, such as copyright protection and data monitoring, the use of original, unwatermarked data to recover
the embedded watermark is required. This is called non-blind or non-oblivious
watermarking and in this case, the watermark recovery is easier and more robust.
Furthermore, the availability of the original data in the recovery process allows
for the detection and the inversion of the distortions which change the data
geometry. However, access of the original data is not possible in most cases.
122
For example, copy tracking and indexing applications make the recovery process
more difficult. In fact, most recent applications do not require the original image
at the watermark recovery process. This kind of application is referred to as blind
or oblivious watermarking.
These requirements are conflicting and are also related to each other. For instance,
embedding high watermark sequence values leads to robust watermark, but this
introduces large modifications which in turn affect the perceptual quality of the host
data. In addition, the more information bits one wants to embed, the lower is the
watermark robustness.
In order to design a watermarking system that meets the desired requirements,
some criteria are usually used. For instance, to ensure imperceptibility caused by
the watermark embedding, the individual samples used for watermark embedding
can only be modified by an amount relatively small to their average amplitude. Also,
to ensure robustness while still allowing small changes, the watermark information
is usually redundantly distributed over many samples of the host data, thereby providing a holographic robustness. This means that the watermark can be recovered
from a small fraction of the watermarked data, but the recovery is more robust if
more watermarked data is available and used at the recovery stage.
Existing image watermarking algorithms operate either in the spatial domain
[6,7] or in a transform domain such as Discrete Cosine Transform (DCT) [8,9], Discrete Wavelet Transform (DWT) [10,11], Discrete Fourier Transform (DFT) [12,13]
or the Fourier-Mellin Transform [14,15]. While, spatial domain methods are simple
and easy to deploy, embedding in a transform domain is more advantageous especially in terms of visibility and robustness. This is due to its energy compaction
property which suggests that the distortion introduced by the watermark into a number of transform coefficients will spread over all the pixels in the spatial domain so
as the changes introduced in these pixel values are visually less significant.
In the literature, the watermark embedding makes use of either an additive rule
or a multiplicative one. In the former, the watermark is simply added to the host
data, whereas, in the latter, the watermark is inserted with respect to the host data.
The commonly used additive rule is
yi = xi + wi ; i = 1, . . . , N ,
(6.6)
(6.7)
Due to its simplicity, the additive rule is widely adopted in the literature. However, multiplicative watermarking offers a data-dependent watermark casting and
6.3
State-of-the-Art
123
exploits the HVS characteristics in a better way. Nevertheless, most recent additivebased watermarking methods use some perceptual masks obtained by using psychovisual models to take into account the HVS properties.
6.3 State-of-the-Art
There have been few published works on watermarking fingerprint images in the
literature. Pankanti and Yeung [16] proposed a fragile watermarking for fingerprint
image verification. The watermark, in the form of a spatial image, is embedded in
the spatial domain of a fingerprint image by employing a verification key. Before
embedding the watermark, the authors proposed to mix the watermark image to
increase its security level. The proposed method can localise any region of image
that has been tampered with; therefore, it can be used to check the integrity of fingerprint images stored in a database. Experiments were conducted on database of
1,000 fingerprints (4 images each for 250 fingers) and the reported results indicate
that this technique does not lead to a significant performance loss in fingerprint
verification.
Sonia [17] proposed a method to detect any alterations or changes introduced to
an image while being transmitted. The method is based on a local average scheme
where corresponding block by block local average of the transmitted and received
images are compared. The author applied this method to fingerprint and face images.
Ratha et al. [1] proposed a data hiding method for fingerprint images compressed
with the WSQ (Wavelet Scalar Quantisation) compression scheme. The discrete
wavelet transform coefficients are changed during WSQ encoding, by taking into
consideration possible image degradations. This method operates on the quantised
indices to embed the watermark before the final step (Huffman encoder) is applied.
The proposed method here has the advantage of working in compressed domain
which suggests that the distortions introduced by the watermark into a number of
transform coefficients will spread over all the pixels in the spatial domain so as the
changes introduced in these pixel values are visually less significant.
Gunsel et al. [18] described two spatial domain watermarking methods for fingerprint images. The first method utilises gradient orientation analysis for watermark
embedding where the pixel values are changed in a way to keep the quantised gradient orientations around these pixels unchanged. This method is applied before the
feature extraction process. The second method preserves the singular points in the
fingerprint image; hence, it preserves the classification of the watermarked fingerprint image.
Jain and Uludag [19] proposed two application scenarios for hiding biometric
data. The basic data hiding is based on amplitude modulation watermarking method
and it is the same for both applications. The first scenario involves steganographybased application: the biometric data (fingerprint minutiae) that need to be transmitted are hidden in a host image. However, in this scenario, the security of the
system is based on the secrecy of the communication. The second scenario is based
on hiding facial features, i.e. Eigen-face coefficients, into fingerprint images.
124
6.4
125
Threshold
Watermark
exist
No watermark
False alarm
Missed detection
commonly used correlation-based detectors are only optimal in the additive case by
assuming that the host data follows a Gaussian model [24, 25].
Recently, the watermark detection is considered as a binary hypothesis test where
the problem is one of taking measurements and then estimating the results in which
of the finite number of states an underlying system resides. More precisely, the
system is the possibly watermarked image and the system observation variable is
the set y = {y1 , . . . , y N } of the possibly watermarked coefficients. Two hypotheses
can be defined: the given image is watermarked by the candidate watermark w =
{w1 , . . . , w N } (hypothesis H1 ) or the given image does not contain the candidate
watermark!(hypothesis H0 ). Consequently, the watermark space can be defined as
W = W0 W1 , where W1 = {w } and W 0 = {w j = w }, including w = 0 that
corresponds to the case where no watermark exists. The issue now is to define a test
of the simple hypothesis H1 versus the composite alternative H0 that is optimum
with respect to certain criterion.
The likelihood-ratio test, which is a statistical test for making a decision between
two hypotheses based on the value of the likelihood ratio, is adopted. The likelihood
ratio, usually denoted by (y), is defined as
(y) =
f y (y|W1 )
f y (y|W0 )
(6.8)
where f y(y|W 1), f y (y|W 0) are the pdfs of the set y conditioned to W1 and
W0 , respectively. By relying on the fact that the coefficients in y are statistically
independent,"
the pdf of y conditioned to an event
" N W1 and W0 can be written as
N
f yi (yi |W1 ) and f y (y|W0 ) = i=1
f yi (yi |W0 ).
f y (y|W1 ) = i=1
Assuming that the watermark components are uniformly distributed in [1, +1],
W0 is composed by an infinite number of watermarks. Therefore, by using the total
probability theorem [26], the pdf f y (y|W0 ) can be written as:
f yi (yi |W0 ) =
+1
1
(6.9)
126
where f wi =
6
1
2
(y) =
1
2N
(6.10)
Using the multiplicative rule given by Eq. 6.7, the pdf f yi (yi |wi ) of a watermarked coefficient yi conditioned to a watermark value wi is given by
f yi (yi |wi ) =
1
y1
f xi (
)
1 + wi
1 + wi
(6.11)
(y) =
N
$
i=1
yi
f xi ( 1+w
)
1
i
1 + wi f yi (yi )
(6.12)
The decision rule reveals that the hypothesis H1 is accepted if and only if (y)
exceeds certain threshold . Further simplification can be made by taking the loglikelihood ratio, which is defined as the natural logarithm of the likelihood ratio,
l(y) = ln((y)), so the decision rule becomes
l(y) =
N
i=1
[ln( f xi (
yi
1
)) ln( f xi (yi ))] H
H0
1 + wi
(6.13)
N
(6.14)
where fl(y) (l(y)) is the pdf of l(y) conditioned to W0 . The variable l(y) is a sum of
statistically independent terms. Therefore, by using the central limit theorem [26],
it can be modelled by a Gaussian distribution with mean 0 = E[l(y)|W0 ] and
variance i2 = V ar [l(y)|W0 ]. Finally, the PF A can be written as
6.5
PF A =
+
l(y) 0 2
exp
dl(y)
202
2 02
1
127
(6.15)
0
1
er f c
2
2 2
0
where
erfc(.) is the complementary error function, given by er f c(x) =
#
2 + t 2
e
dt. By fixing the value of PF A , the threshold can be obtained using
x
the equation
= er f c1 (2PF A ) 202 + 0
(6.16)
128
exp
f X (xi ; , ) =
2(1/)
(6.17)
#
where (.) is the Gamma function, (z) = 0 et t z1 dt, z > 0. The parameter is
referred to as the scale parameter and it models the width of the PDF peak (standard
deviation) and is called the shape parameter and it is inversely proportional to the
decreasing rate of the peak (see Fig. 6.5). Note that = 1 and = 2 yield Laplacian
and Gaussian distributions, respectively. The value = 0.5 is widely used in the
literature; however, an accurate estimate of the parameters and can be found as
described in [31]. By replacing the pdf of the GGD in Eq. 6.13, the detector can be
defined by
l(y) =
N
|yi | i
1 |1 + wi |i
i
i=1
(6.18)
N
1
[1 |1 + wi |i ]
i
i=1
(6.19)
6.5
129
and
02 =
N
1
[1 |1 + wi |i ]2 .
i
i=1
(6.20)
Laplacian model is simpler than the generalised Gaussian one since the latter
requires the use of interpolation methods to estimate the shape parameter. It has
been used to model the DWT coefficients in [32,33]. In this chapter, the Laplacian
pdf is obtained by letting = 1 in Eq. 6.17. Also, the Laplacian detector can be
obtained by substituting by 1 in equations Eq. 6.18, 6.19, and 6.20.
(6.21)
1
2 + (x )2
(6.22)
where ( < < ) corresponds to the location parameter, and ( > 0) represents the scale parameter, also known as the dispersion parameter. The peak shape
of a Cauchy distribution is controlled by . The smaller the value of, the narrower
is the peak shape and vice versa (see Fig. 6.6.)
130
The two parameters and can be estimated from the data set using the consistent ML method described by Nolan [35], which gives reliable estimates and
provides the most tight confidence intervals. The use of the Cauchy distribution in
Eq. 6.13 leads to the following watermark detector:
l(y) =
N
ln( + (yi ) ) ln +
2
i=1
yi
1 + wi
2
(6.23)
For the sake of simplicity, the mean 0 and the variance 02 are estimated numerically by evaluating l(y) for n fake sequences {w j : w j [1, +1]; 1 j n}, so
that the estimated mean and variance of l(y) are given by
n
1
lj
n j=1
(6.24)
1
(l j 0 )2
n 1 j=1
(6.25)
0 =
02 =
6.6
Experimental Results
131
(a)
(b)
(c)
(d)
Fig. 6.7 Test images with different visual quality: (a) Image 22 1: good quality with normal
ridges area, (b) Image 83 1: good quality with large ridges area, (c) Image 43 8: small ridges
area (latent fingerprint) and (d) Image 68 7: poor quality
132
pi
qi
(6.26)
The results obtained are reported in Table 6.1 and clearly show that the GGD provides the smallest KL divergence for all images. On the other hand, this divergence
is larger when using a Cauchy model.
A QQ plot is a graphical technique for determining if two data sets are generated from populations having a common distribution. It is a plot of the quantiles
of the first data set against the quantiles of the second data set. If the two data sets
are taken from two populations with the same distribution, the points should fall
approximately along a reference line. The greater the departure from this reference
line, the greater the evidence that the two data sets have been generated from populations with different distributions. In our experiments, for a given fingerprint image
we first estimate the parameters for each model from the DWT coefficients and then
generate a large number of random samples drawn from the corresponding model
having the estimated parameters. The quantiles of the real DWT coefficients against
the quantiles of the random generated samples are plotted.
For the QQ plot corresponding the GGD, most of the + marks have a straight
line shape for Image 22 1 (Fig. 6.8a) and Image 86 7 (Fig. 6.9b), deviating slightly
from the reference line for Image 83 1 (Fig. 6.8b) but with more significant deviation for Image 43 8 (Fig. 6.9a). For the Laplacian model, most of the + marks of
Table 6.1 KL divergence of the high-resolution DWT subbands obtained using Daubechies 9/7
wavelet at the 3rd level. LH : horizontal subband; LH: vertical subband; HH: diagonal subband.
GGD
Laplacian
Cauchy
Image 22 1
Image 83 1
Image 43 8
Image 86 7
0.0582
0.1267
0.1741
0.0661
0.1808
0.2332
0.1530
0.7098
0.0893
0.0376
0.1224
0.1542
6.6
Experimental Results
133
400
200
0
200
DWT Coefficients Quantiles
400
600
(a)
200
200
400
600
200
400
600
(c)
(d)
400
200
200
(e)
800
(b)
400
600
400
600
200
400
600
800
800
(f)
Fig. 6.8 QQ plots of DWT coefficients of sample images (Left: Image 22 1, Right: Image
83 1) for different models (Top: GGD; Middle: Laplacian; Bottom: Cauchy.)
134
6
GGD QQ Plot for Image 43__8
600
800
300
(a)
(b)
200
400
600
800
100
200
300
(c)
(d)
200
400
(e)
(f)
600
400
400
800
Fig. 6.9 QQ plots of DWT coefficients of sample images (Left: Image43 1, Right: Image
86 7) for different models (Top: GGD; Middle: Laplacian; Bottom: Cauchy.)
6.6
Experimental Results
135
the QQ plot follow a straight line but with a significant deviation from the reference
line for Image 22 1 (Fig. 6.8c), Image 83 1 (Fig. 6.8d) and Image 86 7 (Fig. 6.9d).
However, the marks follow a curve shape for Image 43 8 (Fig. 6.9c). For the Cauchy
model, the + marks in the plot have also curve like shape which does not follow
a straight line for all test images (Figs. 8e,f and 9e,f). In conclusion, the QQ plots
for all fingerprint images show that the GGD provide the best fit for the DWT coefficients.
The results obtained for modelling the DWT coefficients reveal that the detector
based on the GGD is expected to yield better watermark detection performances
than those based on Laplacian and Cauchy models. Moreover, the Laplacian is
expected to provide good/acceptable detection results. It is worth noting that none of
the three used distributions model accurately the coefficients distribution of Image
43 8. The reason is that in this image, the region of interest (or the ridges area) is
somewhat small when compared to the overall size of the image (i.e. most of it is
composed by smooth area or background).
136
(a)
(b)
(c)
(d)
Fig. 6.10 Difference image between the original image and its corresponding watermarked one:
(a) Image 22 1, (b) Image 83 1, (c) Image 43 8 and (d) Image 68 7
the true fidelity because it is well known that the human visual system is less sensitive to changes in such regions compared to smooth and non textured areas. For
instance, the PSNR, as a perceptual model, suggests that the watermarked image
of Image 43 8 should be perceptually better than the watermarked image of Image
83 1; however, the watermarked image 43 1 shows more visible distortions when
compared to the watermarked Image 83 1.
6.6.2.2 Detection Performance
In order to evaluate the performance of the detectors, the test images were watermarked using = 0.10. The Receiver Operating Characteristics (ROC) curves,
which are widely adopted in the literature, were used to assess the performance
6.6
Experimental Results
137
Image 22__1
Image 83__1
Image 43__8
Image 86__7
0.10
0.15
0.20
0.25
0.30
Watermark Strength
0.35
0.40
0.45
of the detectors. The ROC curves represent the variation of the probability of true
detection (PDet ) against the probability of false alarm (PF A ). A perfect detection
yields a point at coordinate (0,1) of the ROC space meaning that all given watermarks were detected without any false alarm. The theoretical false alarm is set to
the range 104 to 101 . The experimental ROC curves are computed by measuring
the performance of the actual watermark detection system by calculating the probability of detection from real watermarked images. Experiments are then conducted
by comparing the likelihood ratio with the corresponding threshold for each value
of the false alarm probability and for 1000 randomly generated watermarks. If the
likelihood ratio is above the threshold under H1 , the watermark is detected and if it
is above the threshold under H0 , a false alarm occurs.
A blind detection is used so that the parameters of each detector are directly
estimated from the DWT coefficients of the watermarked image. It is worth noting
that the optimal parameter values for both the GGD and the Cauchy distribution
may be different for each DWT coefficient, but for practical purposes a constant
value over all coefficients suffices. The results on the sample images are plotted
in Fig. 6.12.
As can be seen and as it was expected for all images, the performance of the
GGD detector is significantly better than that obtained for the Laplacian and Cauchy
detectors. In addition, the Laplacian detector provides close results to the GGD one
for images (Image 22 1, 83 1 and 86 7). It is worth noting that all detectors generate
very high false alarm for Image 43 8. In general, images with large and well-defined
ridge area provide good detection performance because such images have higher
138
GGD
Laplacian model
Cauchy model
GGD
Laplacian model
Cauchy model
102
101
Probability of False Alarm
100
102
101
Probability of False Alarm
(a)
100
(b)
GGD
GGD
Laplacian model
Cauchy model
101
Laplacian model
Cauchy model
100
102
101
(c)
(d)
100
Fig. 6.12 Difference image between the original image and its corresponding watermarked one:
(a) Image 22 1, (b) Image 83 1, (c) Image 43 8 and (d) Image 68 7
6.7 Conclusions
This chapter addresses the 1-bit multiplicative watermark detection for fingerprint
images. Watermarking technique can be a solution for securing fingerprint data
and thwart against some attacks that may affect the reliability and the secrecy of
fingerprint-based systems. Watermarking system can be divided in two main processes: embedding and detection. In this chapter, we have focussed on the watermark detection which aims to detect whether a given watermark was embedded into
the host data. The problem of detection is formulated theoretically based on a ML
References
139
estimation scheme requiring an accurate statistical modelling of the host data. This
theoretical formulation allows for the derivation of optimal detector structures; this
optimality of the detector structure depends on the accuracy of the statistical distribution used to model the statistics of the host data.
The watermark is embedded into the DWT domain because the ridges and textures are usually well confined to the DWT coefficients of the high-frequency subbands. In addition, watermarking in the DWT is very robust to compression methods
such as WSQ compression which is the standard adopted by the FBI and many other
investigation agencies.
First, the modelling of the DWT coefficients is carried out to determine the best
model. The generalised Gaussian, Laplacian and Cauchy models were investigated
and compared and the experimental results reveal that the GGD distribution provides
the best model that can represent the distribution of the DWT coefficients.
Then, the structures of the optimum detector of the three models were derived
and the performance of the detectors were assessed through extensive experiments.
It has been found that the detector based on the GGD outperforms the Laplacianbased detector, which in turn, significantly outperforms the Cauchy detector. The
overall performance of the detectors is dependent on the fingerprint characteristics.
This dependence is related to the size of the ridge area relative to the size of the
fingerprint image. The bigger the ridge area, the higher the detection performance.
References
1. N. K. Ratha, H. H. Connell and R. M. Bolle, An analysis of minutiae matching strength, In
The 3rd International Conference on Audio-and Video-Based Biometric Person Authentication (AVBPA2001), vol. 2091, pp. 223228, 2001.
2. B. Schneier, The uses and abuses of biometrics, Communications of the ACM The
3rd International Conference on Audio-and Video-Based Biometric Person Authentication
(AVBPA2001), vol. 42, no. 8, pp. 136, August 1999.
3. D. Maltoni, D. Maio, A. K. Jain and S. Prabhakar, Handbook of Fingerprint Recognition,
Springer, New York, 2003.
4. N. Hartung and M. Kutter, Multimedia watermarking techniques, Proceeding of IEEE, vol.
42, no. 8, pp.10791107, 1999.
5. M. D. Swanson, M. Kobayashi and A. H. Tewk, Multimedia data-embedding and watermarking technologies, Proceeding of IEEE, vol. 86, pp. 10641087, 1998.
6. M. Yoshida, T. Fujita and T. Fujiwara, A new optimum detection scheme for additive watermarks embedded in spatial domain, International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP 2006), pp. 101104, December 2006.
7. I. G. Karybali and K. Berberidis, Efficient spatial image watermarking via new perceptual
masking and blind detection schemes, IEEE Transaction Information Forensics and Security,
vol. 1, no. 2, pp. 256274, June 2006.
8. J. R. Hernandez, M. Amado and F. Perez-Gonzales, Dct-domain watermarking techniques
for still images: Detector performance analysis and a new structure, IEEE Transactions on
Image Processing, vol. 9, no. 1, pp. 5568, January 2000.
9. A. Briassouli, P. Tsakalids and A. Stouraitis, Hidden messages in heavy-tails: Dct-domain
watermark detection using alpha-stable models, IEEE Transactions on Image Processing,
vol. 7, no. 4, pp. 700715, August 2005.
140
10. T. M. Ng and H. K. Garg, Wavelet domain watermarking using maximum-likelihood detection, Proceeding of SPIE Security, Steganography, and Watermarking of Multimedia Contents, vol. 5306, pp. 816826, June 2004.
11. F. Kheli, A. Bouridane, F. Kurugollu and I. Thompson, An improved wavelet-based image
watermarking technique, Proceeding of IEEE International Conference on Advanced Video
and Signal Based Surveillance (AVSS2005), pp. 588592, August 2005.
12. M. Barni, F. Bartolini, A. De Rosa and A. Piva, A new decoder for the optimum recovery of
nonadditive watermarks, IEEE Transaction on Image Processing, vol. 10, no. 5, pp. 755765,
May 2001.
13. Q. Cheng and T. S. Huang, Optimum detection and decoding of multiplicative watermarks in
dft domain, Proceeding of IEEE International Conference on Acoustic, Speech, and Signal
Processing (ICASSP2002), pp. IV3477IV3480, May 2002.
14. J. J. K. Ruanaidh and T. Q. Pun, Rotation, scale and translation invariant spread spectrum
digital image watermarking, Signal Processing, vol. 66, no. 3, pp. 303318, 1998.
15. C. Y. Lin, M. Wu, J. A. Bloom, I. J. Cox, M. Miller and Y. M. Lui, Rotation, scale, and translation resilient public watermarking for images, IEEE Transactions on Image Processing,
vol. 10, no. 5, pp. 767782, May 2001.
16. S. Pankanti and M. M. Yeung, Verication watermarks on fingerprint recognition and
retrieval, Proceeding SPIE, Security and Watermarking of Multimedia Contents, vol. 3657,
pp. 6678, 1999.
17. S. Jain, Digital watermarking techniques: A case study in fingerprints and faces, Proceeding
of Indian Conference on Computer Vision, Graphics and Image Processing, pp. 139144,
2000.
18. B. Gunsel, U. Umut and A. M. Tekalp, Robust watermarking of fingerprint images, Pattern
Recognition, vol. 35, no. 12, pp. 27392747, 2002.
19. A. K. Jain and U. Uludag, Hiding biometric data, IEEE Transactions on Pattern Analysis
and Machine Intelligence, vol. 25, no. 11, pp. 14941498, November 2003.
20. F. Ahmed and I. S. Moscowit, Composite signature based watermarking for fingerprint
authentication, Proceeding of 7th Workshop Multimedia and Security, pp. 137142, 2005.
21. F. Ahmed and I. S. Moscowit, A correlation-based watermarking method for image authentication application, Optical Engineering Journal, vol. 43, no. 8, pp. 18331838, 2004.
22. K. Zebbiche, L. Ghouti, F. Kheli and A. Bouridane, Protecting fingerprint data using watermarking, Proceeding of the 1st AHS conference, pp. 451456, June 2006.
23. M. K. Khan, L. Xie and J. Zhang, Robust hiding of fingerprint-biometric data into audio
signals, Proceeding of the 2nd International Conference on Biometrics (ICB2007), vol.
4642/2007, pp. 702712, August 2007.
24. G. F. Elmasri and Y. Q. Shi, Maximum likelihood sequence decoding of digital image
watermarks, Proceeding of SPIE Security and Watermarking of Multimedia Contents
pp. 425436, 1999.
25. Q. Cheng and T. S. Huang, An additive approach to transform-domain information hiding and
optimum detection structure, IEEE Transaction on Multimedia, vol. 3, no. 3, pp. 273284,
September 2001.
26. A. Papoulis, Probability, Random Variables, and Stochastic Processes, McGraw-Hill, New
York, 1991.
27. J. V. Di Franco and W. L. Rubin, Radar Detection, SciTech Publishing, Raleigh, January
2004.
28. T. Ferguson, Mathematical Statistics: A Decision Theoretical Approach, Academic Press,
New York, 1967.
29. X. G. Xia, C. G. Bonklet and G. R. Acre, Wavelet transform based watermark for digital
images, Optics Express, vol. 3, no. 12, pp. 497511, December 1998.
30. G. C. Langelaar, I. Styawan and R. L. Lagendijk, Watermark digital image video data:
A state-of-art overview, IEEE Signal Processing Magazine, vol. 17, no. 5, pp. 2046,
September 2000.
References
141
31. M. N. Do and M. Vetterli, Wavelet-based texture retrieval using generalized Gaussian density
and Kullback-Leibler distance, IEEE Transaction on image processing, vol. 11, no. 2, pp.
146158, February 2002.
32. Y. Hu, S. Kwong and Y. K. Cha, The design and application of dwtdomain
optimum decoders, In First International Workshop, IWDW2003, vol. 2613/2003,
pp. 2528, 2003.
33. T. M. Ng and H. K. Garg, Maximum likelihood detection in dwt domain image watermarking
using Laplacian modeling, IEEE Signal Processing Letters, vol. 12, no. 4, pp. 285288, April
2005.
34. G. Tzagkarakis and P. Tsakalides, A statistical approach to texture image retrieval via alphastable modeling of wavelet decomposition, In 5th International Workshop on Image Analysis
for Multimedia Interactive Services (WIAMIS), pp. 2123, 2004.
35. J. P. Nolan, Maximum Likelihood Estimation and Diagnostics for Stable Distributions,
Technical report, American University, Washington June 1999.
36. R. Cappelli, D. Maio, D. Maltoni, J. L. Wayman and A. K. Jain, Performance Evaluation
of Fingerprint Verification Systems, IEEE Transactions on Pattern Analysis Machine Intelligence, vol. 28, no. 1, pp. 318, January 2006.
Chapter 7
143
144
Each time a person takes a step, there is no doubt that some sort of interaction
between his (her) shoes and the surface occurs. This could be a deformation of
that surface or the exchange of trace materials and the residue from the shoe to the
surface. In the case where the surface is deformable e.g. snow and sand, a threedimensional impression is created as a result of the pressure exerted on that surface.
When the surface is solid a visible pattern may still be transferred to the surface
from a sole. This is as a result of an exchange of materials between the shoe and the
surface. It should be noted that not all of the shoemarks are visible or detectable with
the limit of the current technologies, but the chances are excellent that a great number of them will be. The author [6] claimed that there should be equal and perhaps
even greater chance that footwear impressions could be present at a crime scene,
compared with the presence of latent fingerprints. So far, the later has been widely
accepted as a powerful tool in forensic applications, while the footwear impressions
also are being realised as a potential assist in forensic investigations. A study in [2]
suggests that footwear impressions could be located and retrieved at approximately
35% of all crime scenes. Statistics from Home Office of United Kingdom show that
14.8% of crime scenes attended by crime scene investigators in 20042005 yielded
shoe print evidence. The crimes investigated consisted primarily of burglaries. It
is also reported that by emphasising the potential evidence of shoemarks to crime
scene personnel and by teaching the basics of locating and recovering footwear
impressions, the percentage of cases in which footwear impression evidence was
now being submitted to the laboratory had increased from less than 5% to approximately 60% [6]. Figure 7.1 shows some examples of shoemarks recovered from
different crime scenes.
7.1
145
Fig. 7.1 Examples of shoeprint images retrieved from scene of crimes (Foster & Freeman Ltd.)
146
7.1
147
Currently only about 14% of shoemarks collected from a crime scene are actually examined and a much smaller number of these actually identified. The best
existing systems only recognise about 25% of recovered shoemarks which is only
about 3.5% of the total available shoemarks at crime scenes [10]. The statistic provided relates to shoemarks collected and examined in Holland. Informal conversation with British forensic scientists suggests that figures in the UK are roughly
the same.
There are several reasons why more impressions are not identified. They fall
mainly into one of three categories: inconsistent classification, importable classification schema and insufficient recognition time.
148
7.2
149
forces in Europe use their own proprietary classification scheme and in some countries they use more than one. In an age where there is very little restriction on where
people can travel, it has become ever more desirable that laboratories in different
locations can quickly and efficiently share data. For example if a crime is committed in Holland and the suspect crosses the border into Belgian, the Dutch police
force has no way of sharing their shoemark intelligence with the Belgian police
force other than to share the original shoemark and allow the Belgium police to
reclassify it. The problem of proprietary classification schemes has not been solved
but merely exacerbated. In addition where different laboratories use the same classification schema they are still unlikely to produce the same classification when given
identical shoemarks to classify [12].
150
tools being developed in image processing and computer vision could be applied to
the problem of shoemark classification and identification. The use of image processing in real world situations outside of computer science is also increasing. This
includes its use in fingerprint and DNA identification. Unfortunately due to the
nature of shoemark evidence, in particular the fact that it is often only suitable to
be used as corroborative evidence in court [13], there has been less interest in the
media and scientific community about it than other areas of forensic imaging.
7.2
151
Surface
Carpet
Dirty Floor with
accumulation of
dust, dirt or
residue.
Relatively clean but
unwaxed floor.
Clean Waxed Tile or
Wood Floor
Waxed Bank Counter,
Desk Top, etc.
Glass
Kicked in Door
Paper, Cardboard.
Shoe with
blood, grease,
oil
Unlikely
Likely
Very likely
Very likely
Likely
Unlikely
Unlikely
Unlikely
Likely
Very likely
Very likely
Unlikely
Likely
Very likely
Very likely
Likely
Likely
Very likely
Very likely
Likely
Very likely
Very likely
Very likely
Very likely
Very likely
Very likely
Very likely
Very likely
Very likely
Likely
Likely
Likely
Table 7.2 Detection of footwear marks after walking on various floor surfaces for five minutes [3]
Premises
Floor type
Area
Footware mark
Household Kitchen
Fish and Chip shop
Vinyl
Tiles
Detected
Detected
Sandwich Bar
Tiles
Butchers Shop
General surfaces
Customer area and
behind counter
Customer area and
behind counter
Customer area
Dining room
Living room
General surfaces
Oily area
General surfaces
Nothing detected
Nothing detected
Detected
Detected
Nothing detected
House
House
Kitchen
Garage
Office
Detected
Detected
152
faster films that may appear grainy. The use of high-resolution film is necessary
as the characteristics that the forensic examiner may be interested in are often so
small that they are not visible to the naked eye [7]. The use of black and white
film is often preferred because some experts believe that the additional layers of
emulsion in coloured films result in a lower contrast, less easily examined image.
This consideration is still very important even when using computerised systems.
Although the digitisation of shoemark photographs before they are entered into a
computerised system results in a loss of resolution, wherever possible the original
images will be presented as evidence in court.
When photographing a shoemark a scale is positioned adjacent to, and on the
same plane as, the shoemark. This allows accurate measurements of the shoemark
to be made in the laboratory, or allow easier verification that the shoemark was
printed at 1:1 scale.
A label identifying the impression, orientation and location is also placed within
the image frame. This process minimises the possibility that different shoemarks
photographed in the same location become confused with each other. It also helps
to provide substantiation of continuity of evidence.
The physical process of taking the photograph requires that the camera be positioned, mounted on a tripod directly over the impression. The films plane should be
adjusted so that it is parallel to the plane of the shoemark. This helps to minimise the
amount of perspective distortion. The frame in the cameras viewfinder is adjusted
so that it includes the shoemark, the scale and the label and the camera focus is set
to the shoemark and not the scale as often the scale is at a slightly different focal
depth.
Wherever possible strong sources of ambient lighting are disconnected to remove
shadow. When flash is needed it is diffused or positioned at a distance from the
shoemark so that it does not result in unwanted glare.
Plaster of Paris is a mixture of powdered and heat-treated gypsum that when mixed with water
flows freely but hardens to a smooth solid as it dries.
2 Dental Stone is a plyable material used in the dental industry for making impressions of teeth
and gums.
3 SnowPrint Wax is a commercial product used to add strength to an impression left in snow.
7.2
153
If any lightweight debris has fallen into the impression since it was created it
should be removed with tweezers. Care should be taken not to remove any extraneous matter that is part of the impression itself.
In the case where the impression has been made in loose sandy soil a fixing agent
such as hairspray may be used as a means of binding loose particles together prior
to casting.
The liquid casting material should be poured into the impression from a height
of only a few centimetres; this helps prevent damage to the surface of the impression. The material should be poured from a position to the side of the part of the
impression caused by the shoe arch. This area contains the least useful information
when classifying the shoemark. The liquid should be poured until the mixture overflows the impression and onto the ground surface. As the material starts to harden
information pertinent to the case should be scratched into the cast surface. The cast
should be left for at least 20 min to dry in warm weather, longer when it is cold. The
cast may then be lifted using a thin bladed spatula inserted into the soil well beneath
the cast.
The cast should then be left to air dry for at least 48 h before it may be cleaned
of extraneous soil by soaking the cast in a solution of potassium sulphate.
In the case of a serious crime the cast may be stored and kept as evidence. However, it is more common that the cast is photographed and the photograph be used as
evidence. A technique often used when photographing a cast is to use oblique rather
than direct lighting. This throws a slight shadow that brings certain detail in the cast
into sharp relief helping the examiners to see small details.
154
The power supply generates an electric field, the stronger the field the stronger
the devices ability to attract a dust mark. The devices are often used to recover
shoemarks left on paper, linoleum, wood, carpet and concrete but cannot be used to
collect shoemarks from wet surfaces.
The image resulting from electrostatic lifting is the negative of the impression,
i.e. the contact between the shoe sole and the substrate removed something (usually
dust) and the collection process has collected what was not removed.
7.2
155
equipment consists of a chemical drenched sponge and paper that is reactive to that
chemical.
The shoe is pressed into the sponge ensuring that the toe and heel are coated, this
may require that the shoe is rocked end to end, and then pressed on to the reactive
paper. After a few minutes an image of the shoemark will develop on the paper.
This technique is comparatively cheap and simple but is only useful if the actual
physical shoe is available.
156
Unless the investigators have a suspect in custody the shoemark will be checked
against the database and a small number of the most likely matches selected. These
preliminary matches will then be passed to the forensic scientist for analysis and
hopefully a close match at this point will provide a suspect.
The process involved for 2, 3 and 4 each require that the shoemark is classified
and the classification is used to search the database. As such, these processes are all
subject to the problems associated with inconsistent classification.
The procedure used for classification and identification differs slightly in
detail between laboratories but generally follows the following sequence of
events:
The shoemark is inspected visually and analytically measured.
The descriptors that best match the pattern are selected and recorded.
Sometimes the shoemark is split into three sections, the heel, arch and toe. The
heel and toe may be classified separately while the arch may be ignored as it rarely
contains any useful information.
When comparing an unknown shoemark with a suspects shoe the procedure used
will be similar to the following. For manual shoemark recognition the examiner
needs
The shoemark (the original mark, a 1:1 scale photograph or a cast) from the crime
scene.
The suspects shoe or a second shoemark from either a photograph or cast.
When this information is available the examiner proceeds with the following
sequence:
7.3
157
Component Descriptors
A
B
C
D
E
F
G
H
I
J
L
M
N
Q
R
PLAIN/RE-HEEL
RANDOM/IRREGULAR
LATTICE/NETWORK
STRAIGHT
CURVED/WAVY
ANGLED/ZIG-ZAGGED
CIRCULAR - forming basic pattern or a section thereof
CIRCULAR interspersed with other components
ANY OTHER SHAPE
GEOMETRIC with three to six straight sides
ANY OTHER SHAPE
LETTERS/NUMERALS as part of a name or number
TARGETS concentric circles, ovals etc or part thereof
COMPLEX/DIFFICULT
Same descriptor applies to different principal components.
Fig. 7.4 Shows the key features of two shoemarks and in Fig. 7.5 the relative importance of each
section the classification of the mark are indicated. Under each shoemark the Birkett classification
is given
158
Fig. 7.5 Indicates the key features of two shoemarks. The classification of these features is shown
in Fig. 7.6. This diagram is reproduced from the Scottish Police Detective Training manual
system. A second way of classifying shoemarks is to look for and record accidental
characteristics. An example of each is shown in Figs. 7.4 and 7.5.
7.3
159
Fig. 7.6 Shows how the shoemark may be divided into four sections for classification. Each
section has a classification priority indicated. The hand written letters below each shoemark show
the Birkett classification. This diagram is reproduced from the Scottish Police Detective Training
manual
160
characteristics, i.e. characteristics caused by damage and wear to the shoe sole.
This type of characteristic is very important when trying to identify a particular
instance of a shoe sole. This is because it is only the accidental characteristics that
differ between shoe sole instances, i.e. all shoe sole leave the factory more or less
identical and it is the random damage that occurs during normal wear that results
in differing patterns. The shoemarks are still classified manually and stored by classification in a computer database. When searching the database accidental characteristics identified on the shoemark are used as well as the standard classification
patterns.
7.4.1 SHOE-FIT
SHOE-FIT is one of the earliest computerised shoemark databases, developed by
Sawyer and Monckton [15]. It is based on another existing system developed by
Birkett, which codes the shoeprint patterns with a number of letters, followed by a
numerical sequence. SHOE-FIT prefixes the coded letter with 2 numerical digits for
the year and suffixes it with a 3-digit number to uniquely identify a shoemark. A
typical code is as follows: 94FNM011, which means that the footwear is from 1994,
and has a zigzag (F) target (N) and letters or numbers (M). SHOE-FIT also concerns
itself with transferring a footwear impression to a PC in terms of various of forms,
such as faxing, scanning and photographing, and with combining a number of tools
for image handling, like format conversion, rotation, resizing, masking and so on.
Apart from these, the authors also identify the consistency and the compatibility
of any coding system of shoemarks as important.
7.4.2 SHOE
SHOE is a shoeprint capturing, recording, retrieving system developed by Victoria Forensic Science Centre [4]. The system comprises two parts: SHOEView, and
7.4
161
SHOEAdmin, and has 4,000 shoemarks in its database. SHOE codes a shoemark
based on a manual identification and recording of patterns found in that shoemark.
Some of the popular patterns are categorised into different groups, and can be displayed on the screen for reference when one records and searches a shoemark. One
of the attractive points for this system is that it combines the position information
into the processes of recording and retrieving by dividing a shoemark into four parts:
Toe, Ball, Instep, and Heel. This can increase the accuracy of the searching process.
Another advantage is that each of these partitions can be separately classified and
searched against independently, so it is possible to search for images that share
only characteristics seen in any combination of the four partitions. In this way their
system is able to search for partial shoemarks.
7.4.4 REBEZO
REBEZO was designed by Geradts et al. in the National Forensic Science Laboratory of the Ministry of Justice in the Netherlands with the cooperation of the Dutch
Police. Similar to the systems described above, shoemarks in this system are also
classified using a set of pattern descriptors that the investigator selects from. One of
the problems with this system, like that of other manual systems, is the inconsistent
classification, which motivated Geradts et al. to develop an automatic classification
approach using Fourier analysis and a neural network system [10]. It thresholds
a shoemark first, and then applies some morphological techniques to segment the
patterns of the image, before the Fourier descriptors and the moments of each pattern are imported into a neural network system for classification. However, their
experimental results [9] suggested that this attempt was not able to give a sound
classification because of the unreliable segmentation, caused by noise and artefacts
in a shoemark.
162
7.4.5 TREADMARK TM
TREADMARKTM is a shoemark analysis and identification system developed by
CSI Equipment Ltd. The manufacturer of this system claimed it as the only system
so far available today which utilises all four parameters of Patterns, Size, Damage
and Wear to identify individual footwear impressions, and compare them automatically with impressions from both a suspects database and a SoC database. Here,
the automation refers only to the progress of matching and searching of a database.
Actually, it still needs users to manually code shoemarks by patterns or other characteristics. One point different from other systems is that TREADMARKTM requires
that the user indicates the position of accidental or random characteristics on the
shoe sole. It records the positions of these characteristics and uses them to search
for other shoemarks with accidental characteristics in similar positions. More details
about TREADMARKTM are available at the website of CSI Equipment Ltd. (2006,
http://www.k9sceneofcrime.co.uk/systems.aspx).
7.4.6 SICAR
SICAR is one of the most successful commercial systems for shoemark archiving and classification/retrieval, developed by Foster and Freeman Ltd, London, UK.
It has been widely used in British police forces and forensic laboratories in the
UK. The most recent version is SICAR 6, which is claimed to be able to archive
shoemarks from both suspects and SoC. Combined with SoleMate a reference
database of shoemarks from shoemakers developed by the same company, SICAR
can be used to identify the information of a scene image, such as the manufacturers, the release date, and so on. Like other semi-automatic systems, this system requires an operator to classify the shoemark by assigning codes to individual
features in the shoemark. The classification is then stored in the database and can
be searched again later. SICAR adopts a simple coding technique to characterise
shoeprints which forms the basis of many of the database search and match operations. The process enables the operator to create a coded description of the pattern of a shoe sole by identifying elemental pattern features such as lines, waves,
zigzags, circles, diamonds and blocks, etc., each of which bears a unique code. Like
SHOE, this is a straightforward selection process, as each type of elemental pattern is displayed, with variants, for the operator to choose from. SICAR has also
been extended to other databases like tyre tread, (Foster and Freeman Ltd., 2008,
http://www.fosterfreeman.co.uk/sicar.html).
7.4.7 SmART
Although the Geradts attempt to develop an automated shoemark classification system was not successful, the work in this area continued. Alexander et al. in their
References
163
paper [1] propose a fully automated shoemark classification system. As the first
automatic shoemark classification system, SmART can automatically search against
a database of shoeprints. The authors apply fractal codes to represent a shoeprint,
and a mean square noise error method is used for determining the match results. The
algorithm has been tested on a database of shoemarks, containing 32 marks.
References
1. A. G. Alexander, A. Bouridane and D. Crookes, Automatic classification and recognition of
shoeprints, Special Issue of the Information Bulletin for Shoeprint/Toolmark Examiners, vol.
6. no. 1, pp. 91104, 2000.
2. G. Alexandre, Computerized classification of the shoeprints of burglars soles. Forensic
Science International, vol. 82, pp. 5965, 1996.
164
3. R. Ashe, R. M. E. Griffin and B. Bradford, The enhancement of latent footwear marks present
as grease or oil residues on plastic bags, Science and Justice, vol. 40 no. 3, pp. 183187, 2000.
4. W. Ashley, What shoe was that the use of computerised image database to assist in identification, Forensic Science International, vol. 82, pp. 6779, 1996.
5. W. J. Bodziak, Footwear Impression Evidence, New York: Elsevier, 1990.
6. W. J. Bodziak, Footwear Impression Evidence, Detection, Recovery, and Examination, 2nd
Edition. CRC Press, 2000, ISBN: 0-8493-1045-8.
7. Davis, M. (1998) Details on shoeprint photography provided in communication by email with
Davis, M., MSSC Regional Crime Lab, Joplin MO, US., Newton County Sheriffs Department, Neosho MO.
8. P. D. De Chazal, J. Flynn and R. B. Reilly, Automated processing of shoeprint images based
on the Fourier transform for use in forensic science, IEEE Transactions on Pattern Analysis
and Machine Intelligence, vol. 27, no. 3, pp. 341350, 2005.
9. Z. Geradts, Content-Based Information Retrieval from Forensic Image Databases, PhD Thesis, The Netherlands Forensic Institute of the Ministry of Justice in Rijswijk, The Netherlands,
2002.
10. Z. Geradts and J. Keijzer, The image-database REBEZO for shoeprints with developments
on automatic classification of shoe outsole designs, Forensic Science International, vol. 82,
pp. 2131, 1996.
11. A. Girod, Computerised classification of the shoeprints of burglars shoes, Forensic Science
International, vol. 82 pp. 5965, 1996.
12. H. Majamaa, Survey of the conclusions drawn of similar foorwear cases in various crime
laboratories, Forensic Science International, vol. 82, pp. 109120, 1996.
13. R. Milne, Operation Bigfoot a volume crime database project, Science and Justice, vol.
41, no. 3, pp. 215217, 2001.
14. T. J. Napier, Scene linking using footwear mark databases, Science and Justice, vol. 42, no.
1, pp. 3943, 2002.
15. N. E. Sawyer and C. W. Monckton, SHOE-FIT a Computerised Shoe Print Image Database.
IEE European Convention and Security and Detection, Brighton, UK, pp. 8689, 1995.
16. L. Zhang and N. M. Allinson, Automatic Shoeprint Retrieval System for use in Forensic
Investigations, 5th Annual UK Workshop on Computational Intelligence, 2005.
Chapter 8
165
166
(a)
(b)
(c)
(d)
8.2
167
qg1 g2 (x, y) = F
)
(8.1)
= F 1 e j((u,v)(u,v))
(8.2)
where F-1 denotes the inverse Fourier transform and G2 is the complex conjugate of
G2 . The term Q g1 g2 (u, v) = e j((u,v)(u,v)) is termed cross-phase spectrum between
g1 and g2 [7].
If the two images g1 and g2 are identical, their POC function will be a Dirac function centred at the origin and having the peak value 1. When matching similar
images, the POC approach produces a sharper correlation peak compared to the
conventional correlation as shown in Fig. 8.2.
(b)
(a)
0.2
0.15
2.38
0.1
2.375
0.05
0
2.37
-0.05
N/2
N/2 N/2
N/2
0
-N/2
-N/2
(c)
-N/2
-N/2
(d)
Fig. 8.2 (a) Original shoeprint image A. (b) Noisy partial shoeprint B generated from A. (c)
Phase-only correlation (POC) between A and B. (d) Conventional correlation between A and B
168
(8.3)
In the frequency domain, this will appear as a phase shift and a magnitude
scaling:
G 3 (u, v) = ae j2(x0 u+y0 v) G 2 (u, v)
(8.4)
According to (8.1), (8.2) and (8.4), the POC function between g1 and g3 is given
by
qg1g3 (x, y) = F 1 e j2(x0 u+y0 v) e j((u,v)(u,v))
(8.5)
= qg1g2 (x x0 , y y0 )
(8.6)
Equation (8.6) shows that the POC function between g1 and g3 is only a translated
version of the POC function between g1 and g2 . The two POC functions have the
same peak value which is invariant to translation and brightness change.
u 2 + v2
2 +v2
2 2
(8.7)
where is a parameter which controls the function width and is used for normalisation purposes. Thus, the modified phase-only correlation (MPOC) function
q g1 g2 (x, y) of images g1 and g2 is given by
8.2
169
1
1
0.9
0.9
0.8
0.8
0.7
0.7
0.6
0.6
0.5
0.4
0.5
0.3
0.4
0.2
0.1
0.3
0
N/2
0.2
N/2
0.1
0
0
-N/2
0
-N/2
N/2
-N/2
(a)
(b)
Fig. 8.3 The proposed band-pass-type spectral weighting function with =50. (a) 3D representation. (b) 2D representation
q g1 g2 (x, y) = F 1
)
(8.8)
The peak value of the MPOC function q g1 g2 (x, y) is also invariant to translation
and brightness change.
g1
FFT
G1
G1/ |G1|
~
Qg1g2 Qg1g2
Database image
g2
Input image
e j
FFT
G2
G2/ |G2|
e j
W
Weighting
IFFT
q~g1g2
Peak value
Matching
score
170
1. Calculate the Fourier transform of gi and gn using the FFT to obtain Gi and Gn .
2. Extract the phases of Gi and Gn and calculate the cross-phase spectrum Q gn gi .
gn gi by modifying Q gn gi using the
3. Calculate the modified cross-phase spectrum Q
spectral weighting function W.
gn gi using the Inverse FFT (IFFT) to
4. Calculate the inverse Fourier transform of Q
obtain the MPOC function q gn gi .
5. Determine the maximum value of q gn gi . This value will be considered as the
matching score between images gi and gn .
The use of the band-pass-type weighting function W (defined in Eq. (8.7)) will
eliminate meaningless high frequency components without significantly affecting
the sharpness of the correlation peak (since very low frequency components will be
also attenuated).
In this work, the peak value of the MPOC function has been considered as the
similarity measure for image matching: if two images are similar, their MPOC function will give a distinct sharp peak, if they are dissimilar, then the peak drops significantly.
After matching the input image gi against all database images, using the algorithm described above, the resulting matching scores are used to produce a list of l
shoeprints (l<<M) from the database, ranked from the best match (with the highest
matching score) to the worst (the lowest matching score). This list can be reviewed
later by a forensic scientist to determine the correct match visually.
8.2
(a)
171
(b)
(c)
Fig. 8.5 Examples of generated test images and their original shoeprint. (a) Original shoeprint
image. (b) Noisy partial-print ( = 80). (c) Blurred partial-print (L = 40)
caused by foot slippage in the real world. The MATLAB functions fspecial and
imfilter were used to generate the blurred images.
Set4- contains 1600 rotated partial shoeprint images obtained by digitally
rotating each partial shoeprint image from set1 by an angle ( = 2.5 , 5 ,
7.5 , 10 ).
During the evaluation process, each test image was used as input to the algorithm and matched against all 100 original images and the rank of the correct match
determined. This process was performed 5200 times. Then, for each type of perturbations, the proportion of times during tests a correct match appeared first (first rank
recognition) is determined.
In order to compare the method to the PSD-based algorithm [4], since the
database used in [4] was not available, the PSD-based algorithm was implemented
and tested using the same procedure as the proposed method. The results obtained
are depicted in Table 8.1. MPOC and POC denote the POC algorithms with and
without the spectral weighting function, respectively. The parameters of the weighting function used during the tests are = 10, 20, 30, 40, 50 and 60, with =
4 4 (to normalise the maximum of the MPOC function to 1, when matching two
identical images). Only results corresponding to = 50 (the best value) are shown
in Table 8.1.
From these results, it can be seen that the phase-based algorithms (POC and
MPOC) outperform the PSD method even without the use of the spectral weighting
function. It can be also observed that the PSD-based algorithm is very sensitive to
blurring and rotation. For the phase-based approaches, the use of a weighting function (MPOC algorithm) introduced clear improvements in the recognition rate for
blurred and rotated partial-prints without affecting the performance of the method
when processing clean or noisy images. The best results were obtained for a weighting function with = 50, where 100% of the time a correct match was ranked first
for clean, noisy and blurred images.
However, the main disadvantage encountered so far with the POC-based method
is that it is not rotation invariant. Methods of addressing this issue may include
172
Table 8.1 First rank recognition rates (%) using PSD- [4], POC- and MPOC-based algorithms
Algorithms
PSD [4]
POC
MPOC
=50
Test images
Clean partial prints
96.25
100
100
=20
Noisy par- =40
tial prints =60
=80
L=10
Blurred
L=20
partial
L=30
prints
L=40
=2.5
Rotated
=5
partial
=7.5
prints
=10
95.75
93.5
88.5
76.5
28.5
11
13
13
60.25
100
100
100
100
100
100
100
97.75
96.75
100
100
100
100
100
100
100
100
98.75
35.5
25.5
19
43.25
27.5
15.75
52.75
29.75
21.25
the brute force approach. In this case, one MPOC function is required for each
possible orientation of a shoeprint, making the use of the method computationally
demanding. Another solution consists in using advanced correlation filters as presented in the following section.
8.3
Deployment of ACFs
173
(8.9)
where F1 denotes the inverse Fourier transform and Hm is the complex conjugate
of Hm .
Filter design
Correlation Filter hm
0.45
0.4
0.35
0.3
0.25
0.2
FFT
IFFT
0.15
0.1
0.05
0
N/2
N/2
0
Input Image
0
N/2
N/2
Correlation output
174
(a)
(b)
(c)
0
N/2
N/2
N/2
N/2
0
0
N/2 N/2
0
N/2 N/2
(d)
(e)
Fig. 8.7 (a) Shoeprint image A. (b) Shoeprint image B. (c) Noisy rotated partial-print C generated
from A. (d) Correlation between C and a correlation filter designed using image A and its rotated
versions. (e) Correlation between C and a correlation filter designed using image B and its rotated
versions
The correlation output cm (x,y) is searched for the largest value (correlation peak)
and the height of the peak, as well as other metrics such as Peak-to-Correlation
Energy (PCE) or Peak-to-Sidelobe Ratio (PSR), are computed and used as the
matching score related to the class m.
For a well-designed correlation filter, it is expected that its cross correlation with
an input test image will produce a distinct sharp peak if the input image is similar to
the training images used to synthesise the filter (as shown in Fig. 8.7). Furthermore,
if the test image is translated with respect to the training images, the correlation
peak will be also translated by the same amount. Of course, there will be no large
distinct peaks in the correlation output, if the input image and the training ones are
dissimilar.
After cross correlating the input image with all stored filters, as described above,
the resulting matching scores are used to produce a list of l shoeprints from the reference database (where l<<M and M is the size of the reference database), ranked
from the best match (with the highest matching score) to the worst (the lowest
matching score). This output list of candidates can be reviewed later by a forensic expert to determine the final match visually.
8.3
Deployment of ACFs
175
PC E =
|c(x0 , y0 )|2
P H2
=
Energy
|c(x, y)|2
x
(8.10)
where (x0 ,y0 ) and PH indicate the peak location and the largest value in the correlation output, respectively.
P H = max {|c(x, y)|} = |c(x0 , y0 )|
x,y
(8.11)
P H A,B
A,B
(8.12)
where A,B and A,B are the mean and standard deviation, respectively, which are
computed in the sidelobe area: an annular region around the peak (as shown in
Fig. 8.8).
PH is the most widely used metric mainly due to its computation simplicity.
However, it will change if the illumination in the input image changes. The PCE
and the PSR parameters measure the sharpness of the correlation peak. They are
expected to give better classification performance than the PH, since they are computed using multiple points from the correlation output. Further, unlike the PH metric, they are insensitive to uniform brightness changes (uniform amplification or
attenuation) of the input image.
Sidelobe Area
Peak
Peak Area
176
d1 1 d
2 1
1
|Hm (u, v)|2 |Si (u, v)|2 = h m + Di h m
d u = 0v = 0
(8.13)
(8.14)
(8.15)
N
1
Di is a d d diagonal matrix containing the average power
N i=1
spectrum of the training images placed along its diagonal. Minimising the ACE
measure will provide sharp correlation peaks, thereby making peak detection and
location relatively easily [13].
where D =
8.3
Deployment of ACFs
177
If the noise in the input images is of zero mean, additive and stationary, then the
ONV [15] is as follows:
O N V = h m + Ch m
(8.16)
where C is a d d diagonal matrix containing the elements of the input noise power
spectral density along its diagonal. In many applications where noise power spectral
density is unknown a good model uses white noise which assumes C = I (i.e. the
identity matrix).
The OTSDF filter [15] finds a compromise between reducing the ACE and reducing the ONV by minimising the following energy function:
E(h m ) = (AC E) + (O N V ) = h m + ( D + C)h m
(8.17)
where 2 + 2 = 1 and 0 , 1.
Finally, and by using the method of Lagrange multipliers as described in [13],
the OTSDF filter hm that minimises the energy function E(hm ) while ensuring that
the correlation outputs at the origin take pre-specified values is given by
h m = T 1 S(S + T 1 S)1 w
(8.18)
(8.19)
178
|AC H |2
hm +ms ms +hm
=
AC E + O N V
h m + ( D + C)h m
(8.20)
1
*
D + 1 2C
ms
(8.21)
Similar to an OTSDF filter, varying the value of the parameter in Eq. (8.21)
allows us to trade-off between the correlation peak sharpness and noise tolerance of
the UOTSDF filter. Additionally, by comparing Eqs. (8.18) and (8.21), one can note
that Eq. (8.21) is simpler to implement since it only involves inverting a diagonal
matrix, while Eq. (8.18) requires the inversion of a N N matrix, which makes the
UOTSDF filter more attractive from a computational standpoint.
8.4
Conclusion
179
Table 8.2 Rank-1 recognition rates (%) using ACFs-, MPOC- and PSD [4]-based methods for
different alterations
ACFs
Methods
PSD
MPOC
OTSDF
[4]
UOTSDF
=50
Test Images
PH
PCE
PSR
PH
PCE
PSR
96.25
100
91.75
99.75
100
99.5
100
100
Set2:
= 20
95.75
100
90.5
99.5
100
99.5
100
100
Noisy
= 40
93.5
100
89.5
99
100
99.25
99.75 100
= 60
88.5
100
85.75
97.25
100
98.25
98.5
100
= 80
76.5
100
75.75
95
99.25
95.75
97.5
100
Set3:
L = 10
28.5
100
12.5
41.25
84.75
22
62.5
92.5
Blurred
L = 20
11
100
15
49.25
25
63.5
L = 30
13
100
3.75
10
30.75
12.7 5
42.25
L = 40
13
100
partial
partial
Set4:
= 2.5 60.25
Rotated
= 5
partial
3.5
7.5
98.75
72.5
97
35.5
52.75
94.25
99.75
= 7.5 25.5
29.75
76
97.75
= 10
19
21.25
96
50.65
84.80
61.28
Overall average
100
73.75
19.25
99.5
100
99.5
100
83.25
4.25
28.5
85.5
96.25
99.5
97.25
99.75 100
87.5
97
93.25
99.75 100
68.84
76.75
99.75
86.61
shoeprints. As expected, both filters provide better results than the MPOC method
when matching rotated shoeprints: the filters had recognition rates of over 99% for
all rotated shoeprints when using the best metric. It can also be observed that the
performance of the UOTSDF filter is generally better than that of an OTSDF filter.
The best overall performance was obtained when using the unconstrained filter with
the PSR metric, where 86.61% of the time a correct match was ranked first for all
the 5200 test images.
8.4 Conclusion
In this chapter, we have described how correlation-based techniques can be used
for automatic shoeprint classification problems. In particular, we have demonstrated
that the MPOC method and ACFs (i.e. OTSDF and UOTSDF filters) can be successfully used for classifying low- quality partial shoeprints. The two approaches
are shift invariant and have high noise immunity. In the noisy case, 100% of the time
a correct match was ranked first for all noisy test images when using the MPOC or
the UOTSDF-PSR methods.
Furthermore, the MPOC method is very robust to blurring distortions but sensitive to small rotations. The ACFs on the other hand, are less robust to blurring than
the MPOC but are tolerant to small rotations.
180
Both MPOC and ACFs-PSR outperform the PSD-based method regardless of the
degradations type. Finally, as future work, we propose to extend these methods on
larger databases and to investigate the use of other more advanced correlation filters.
Shoeprints alignment and/or enhancement before the matching process can also be
considered.
References
1. G. Alexandre, Computerized classification of the shoeprints of burglars soles, Forensic
Science International, vol. 82, pp. 5965, 1996.
2. Z. Geradts and J. Keijzer, The image data REBEZO for shoeprint with developments for
automatic classification of shoe outsole designs, Forensic Science International, vol. 82, pp.
2131, 1996.
3. A. G. Alexander, A. Bouradine and D. Crookes, Automatic classification and recognition of
shoeprints, Special Issue of the Information Bulletin for Shoeprint/Toolmark Examiners, vol.
6. no. 1, pp. 91104, March 2000.
4. P. D. Chazal, J. Flynn and R. B. Reilly, Automated processing of shoeprint images based on
the Fourier transform for use in forensic science, IEEE Transactions on Pattern Analysis and
Machine Intelligence, vol. 27, no. 3, pp. 341350, March 2005.
5. L. Zhang and N. M. Allinson, Automatic Shoeprint Retrieval System for use in Forensic
Investigations, 5th Annual UK Workshop on Computational Intelligence, 2005.
6. A. V. Oppenheim and J. S. Lim, The importance of phase in signals, IEEE Proceedings, vol.
69, no. 5, pp. 529541, 1981.
7. K. Takita, T. Aoki, Y. Sasaki, T. Higuchi and K. Kobayashi, High-accuracy subpixel
image registration based on phase-only correlation, IEICE Transactions on Fundamentals,
vol. E86-A, no. 8, pp. 19251934, August 2003.
8. B. V. K. V. Kumar, Tutorial survey of composite filter designs for optical correlators,
Applied Optics, vol. 31, pp. 47734801, 1992.
9. K. Venkataramani, S. Qidwai and B. V. kumar, Face authentication from cell phone camera images with illumination and temporal variations, IEEE Transactions on Systems, Man,
Cybernetics, vol. 35, no. 3, pp. 411418, August 2005.
10. P. Hennings, B. V. Kumar and M. Savvides, Palmprint classification using multiple advanced
correlation filters and palm-specific segmentation, IEEE Transaction on Information Forensics and Security, vol. 2, no. 3, pp. 613622, September 2007.
11. Y. Li, Z. Wang and H. Zeng, Correlation filter: an accurate approach to detect and locate
low contrast character strings in complex table environment, IEEE Transactions on Pattern
Analysis and Machine Intelligence, vol. 26, no. 12, pp. 16391644, December 2004.
12. E. Perez and B. Javidi, Nonlinear distortion-tolerant filters for detection of road signs in
background noise, IEEE Transactions on Vehicular Technology, vol. 51, no. 3, pp. 567576,
May 2002.
13. A. Mahalanobis, B. V. Kumar and D. Casasent, Minimum average correlation energy filters,
Applied Optics, vol. 26, pp. 36333640, 1987.
14. A. Mahalanobis, B. V. Kumar, S. R. F. Sims and J. F. Epperson, Unconstrained correlation
filters, Applied Optics, vol. 33, pp. 37513759, 1994.
15. B. V. Kumar, D. Carlson and A. Mahalanobis, Optimal trade-off synthetic discriminant function filters for arbitrary devices, Optics Letters, vol. 19, no. 19, pp. 15561558, 1994.
16. A. Mahalanobis, B. V. K. V. Kumar and S. R. F. Sims, Distance classifier correlation filters
for multi-class target recognition, Applied Optics, vol. 35, pp. 31273133, 1996.
17. M. Alkanhal and B. V. K. V. Kumar, Polynomial distance classifier correlation filter for
pattern recognition, Applied Optics, vol. 42, pp. 46884708, 2003.
Chapter 9
9.1 Motivations
Currently, only a very few advanced techniques on shoeprint image noise reduction,
robust thresholding (segmentation) and pattern descriptors have been proposed for
use in shoeprint image matching and retrieval. However, a number of existing and
elegant techniques of pattern (shape) descriptors have been ruled out due to the
difficulty of the segmentation, i.e. to separate shoe mark profiles from backgrounds,
and to further separate patterns in a profile from each other.
Local image features are computed from distinctive local regions and do not
require a priori segmentation. They have proved to be very successful in applications such as image retrieval and matching [6, 9, 15, 22], object recognition and
classification [5, 12, 14, 18, 21] and wide baseline matching [24]. Consequently,
many different scale and affine invariant local feature detectors, robust local feature descriptors and their evaluations have been widely investigated in the literature
[1, 2, 4, 8, 12, 13, 16, 17, 19, 20, 22].
This chapter is concerned with the retrieval of scene-of-crime (or scene)
shoeprint images from a reference database of shoeprint images by using a new
local feature detector and an improved local feature descriptor. Similar to most other
local feature representations, the proposed approach can also be divided into two
stages: (i) a set of distinctive local features is selected by first detecting scale adaptive Harris corners where each corner is associated with a scale factor. This allows
for the selection of the final features whose scale matches the scale of blob-like
structures around them and (ii) for each feature, an improved Scale Invariant Feature Transform (SIFT) descriptor is computed to represent it. Our investigation has
led to the development of two novel methods which are referred to as the Modified
HarrisLaplace (MHL) detector and the Modified SIFT descriptor, respectively.
181
182
stated that the HarrisAffine [17] and HessianAffine [20] detectors provide more
features than other detectors. This can be particularly useful when matching scenes
with occlusion and clutter, though the Maximally Stable Extremal Region (MSER)
detector [13] achieves the highest score in many cases in terms of repeatability. In
our case, an affine invariant local feature refers only to the translation-, rotation- and
scale-invariant (covariant) local regions. We shall not consider other general affine
or even perspective invariant cases, which rarely happen in the case of shoeprint
images. Furthermore, K. Mikolajczyk and Schmid in [16, 19] have evaluated the
performance of ten state-of-the-art local feature descriptors in the presence of real
geometric and photometric transformations. They have claimed that an extension of
the SIFT descriptor [11, 12], called Gradient Location and Orientation Histogram
(GLOH) [20] performs slightly better than the SIFT itself while both outperform the
other descriptors. The authors have also suggested that local feature detectors, such
as HessainAffine and HessianLaplace, which mainly detect the blob-like structures, can only perform well with larger neighbourhoods. However, this conflicts
with one of the basic properties of local image features the locality.
Typically, a local image feature should have four properties: locality, repeatability, distinctiveness and robustness against different degradations. The above studies
suggest that none of the current local feature representation can outperform others
in terms of all the above four properties. Therefore an efficient local feature representation should be a trade-off of these properties. The work described in this
chapter aims to firstly detect a set of distinctive local features from an image by
combining a scale adaptive Harris corner detector with an automatic Laplace-based
scale selection. Here, the location of the features is determined by the scale adaptive
Harris corner detector where the characteristic size of a local feature depends on
the scale of the blob-like structure around this corner, which is determined by the
automatic Laplace-based scale selection. Then, for each local feature, an improved
SIFT descriptor is computed to represent the feature. This descriptor actually further
enhances the GLOH method by using a circle binary template for rotation invariance, and by binning the SIFT histogram into a range of 180 rather than the original
360 for complement image robustness. Finally, the matching of the descriptors is
carried out by combining nearest neighbour measure with threshold-based screening; two descriptors match only if one is the nearest neighbour of the other (i.e. they
are distant by a value smaller than a threshold). Then, the distance between two
shoeprint images is computed from the matched pairs.
9.2
183
these local structures have a characteristic size. Mikolajczyk and Schmid [15] have
extended the Harris corner detector to a multiscale form to detect the corners at
different scales [7]. In an earlier work [10], Lindeberg has presented in detail a feature detector with an automatic scale selection where a Laplace of Gaussians (LoG)
transform has been demonstrated to be successful in scale selection of blob-like
structures. Likewise in [15], the authors have proposed a new HarrisLaplace detector by exploiting (i) the high accuracy of location of a Harris corner detector and
(ii) the robust scale selection of the LoG detector. However, the way in which they
are combined does not necessarily result in an accurately located and stable scaled
local feature detector, since the detector is actually required to determine when the
response of the Harris measure reaches a maximum in the spatial domain and so
does the response of the LoG at the same location but in the scale direction. In most
cases, the unstable component of such a detector is related to the scale selection
since the stability of a scale selection based on LoG is conditional upon this measure being computed at the centre of a blob structure, rather than at locations with
the Harris maxima. In this section, we will propose a solution to this problem. Following the name of the HarrisLaplace detector, we call this detector a Modified
HarrisLaplace detector.
9.2.1.1 Modified HarrisLaplace (MHL) Detector
A scale adaptive Harris detector is based on an extension of the second moment
matrix of Eq. (9.1), where, i , d and f are the integration scale, the differentiation scale and the derivative computed in the direction of , respectively [15].
The strength of a scale adaptive corner can be measured by det( A(x, i , d ))
t r ace2 ( A(x, i , d )).
f 2x (x, d ) f x f y (x, d )
f y f x (x, d ) f 2y (x, d )
LoG(x, ) = 2 L x x (x, ) + L yy (x, )
A(x, i , d ) = d2 g(i )
(9.1)
(9.2)
184
a blob structure (red cross) and a corner (blue cross) over the scales on two synthetic
128 128 images, where the sizes of white squares for (a) and (b) are 11 11 and
21 21, respectively. The maxima of the red curves clearly reflect the scales of the
white square in (a) and (b). The figure also illustrates why the scale selection of
the HarrisLaplace detector is unstable, i.e. the middle curves (blue) have too many
extrema, thus leading to redundant and unstable scales.
It is also noted from Fig. 9.1 that the scale of a blob structure (red circle) selected
by LoG can be related to the scale of the corners around the structure of this blob.
Actually, in most cases, it is reasonable to assume that a corner can be associated
with a blob structure around this corner. Based on this assumption, only those candidate corners whose scale has a predefined relationship with the scale of a blob
structure around them can be selected as a characteristic scale of that corner. There
are two factors which should be considered for this strategy: (i) the search region
and (ii) the relationship between the scale of the blob structure and that of the corner. Figure 9.3 illustrates this strategy, where the red solid circle (r ad i us = r)
denotes a blob structure while the red dashed circles denote the search region with
the radius of r 1 and the reference circle with the radius of r 0 . The green circles are
the candidates of the same corner located at the top-left of the square, and the blue
circle represents the selected characteristic scale of this corner. (here only the candidate scale whose value is nearest to the reference radius (r 0 ) will be selected as
the characteristic scale of the corner). In all of our experiments, Eq. (9.3) is applied
to relate the reference scale r 0 , the search region r 1 with the blob scale r.
2 r0 = r =
2
r1
2
(9.3)
9.2
185
1
0.9
0.9
0.8
0.8
0.7
0.7
0.6
0.6
0.5
0.5
0.4
0.4
0.3
0.3
0.2
0.2
0.1
0.1
10
15
10
15
(a)
1
0.9
0.9
0.8
0.8
0.7
0.7
0.6
0.6
0.5
0.5
0.4
0.4
0.3
0.3
0.2
0.2
0.1
0.1
10
15
10
15
(b)
9.2.1.2 Repeatability Evaluation
The repeatability score of a local feature detector is given by computing the ratio
between the number of correct matches and the smaller number of features detected
in one of the images. A typical definition of repeatability has considered the overlap error, which is defined as the error of the corresponding features in terms of
area [20]. Two features are claimed to correspond if the overlap error is less than
a predefined threshold. Here, to put our detector into the context of the studies
in this problem area, we apply the codes and the benchmark images from [5] to
186
r0
r1
measure the repeatability of detectors, and compare the performance of our detector
with three other similar detectors (Fig. 9.4), namely Harris Laplace (HarLap), Hessian Laplace (HesLap) and Harris Hessian Laplace (HarHesPal) detectors. Two sets
of images (Boats and Bikes) have been tested in this experiment, each containing
six images with either scale decreasing (zoom out) or blur increasing. For each set
of image sequence, five repeatability scores have been computed between the first
image and the reminders, and the results are shown in Fig. 9.4. It is worth noting
that Harris and Hessian measures can detect two different structures (corner-like
and blob-like). Therefore, by simply combining them one can build another detector, referred to as the HarrisHessianLaplace detector, which considers the most
significant responses of the Harris and Hessian measures, since the spatial location
of a feature, while the scale is still determined by LoG measure.
Figure 9.4 suggests that, in most cases, our proposed detector outperforms the
other three in terms of repeatability. Here, it should be noted that we have limited
the number of the raw features to under 400 by a universal significance measure of
a feature (this measure is defined as the multiplication of the response of LoG and
the area of the local region).
Figure 9.5 shows an example of image matching based on the proposed MHL
detector. The local feature descriptor and the matching strategy used in this matching
process will be detailed in the following sections. The main transformations between
two images are a scale change (scale ratio = 2.8) and an in-plane rotation [20]. In
this example, 23 out of 32 matches are correctly computed thus outperforming the
HarrisLaplace detector (where only 6 out of 26 matches are correctly computed),
provided that all other conditions are same.
9.2
187
70
HarLap
HesLap
HarHesLap
MHL
60
repeatability (%)
50
40
30
20
10
1.5
2.5
3
3.5
scale change
4.5
(a)
85
HarLap
HesLap
HarHesLap
MHL
80
repeatability (%)
75
70
65
60
55
50
45
1.5
2.5
3.5
3
increasing blur
4.5
(b)
Fig. 9.4 The repeatability comparison of four detectors. (a) is on the images with scale and
rotation changing. (b) is on the images with increasing blur. (Referring to the Boat and the Bike
images from [5])
188
Fig. 9.5 Matching result of two images, 23 out of 32 matches are correct. (Refer to the Boat
images from [5])
(i) First, we apply a circular binary template on each normalised local region to
increase the rotation invariance of the descriptor. Both SIFT and GLOH obtain
the rotation robustness by weighting the local region with a Gaussian window.
However, very often, when one chooses a larger sigma for the Gaussian kernel,
the descriptors computed for the region are distinctive but rotation sensitive.
On the other hand, when one chooses a smaller sigma for the Gaussian kernel,
the descriptors are rotation robust but not distinctive. In most cases, it is hard
to choose a proper sigma. In this work, we apply a binary template to limit the
region to a circular one, and meanwhile use a larger sigma for the Gaussian
window to keep the distinctiveness of the descriptors.
(ii) Second, for complement image robustness, we bin the histogram with the orientation range of 180 rather than the original 360 . In our application of shoeprint
image matching, the complement image robustness is very important, since
often the query shoeprint image from scenes of crime is the complement of
the shoeprint image in the reference database. Complement robustness can be
easily obtained by binning the histogram with the orientation range of 180 , i.e.
without considering the polarity of the gradients.
The construction of our local descriptors is similar to GLOH, i.e. we bin the gradients in a log-polar location grid with three bins in the radial direction and four bins
in the angular direction (the central grid does not apply angular binning), resulting
in a nine location grid. Noting the orientation range of 180 , four bins are applied
in the gradient orientations. Finally, the descriptors of an image comprise a N 36
(4 9) matrix, where N is the number of the local features detected in the working
image.
9.3
Experimental Results
189
thresholding jointly to compute the distance between two images, i.e. for each
descriptor in one image, the nearest neighbour in another image is found as a potential match, then only those matches whose distance is below a threshold are selected
as the final matches. The similarity of two images is computed from the summation
of exp(d), where d denotes the distance of a match. Of course, there are many
other strategies for computing the similarity or matching score between two images.
The example of image matching in Fig. 9.5 applies the nearest neighbour to obtain
the initial matches, and then the RANSAC (Random Sample Consensus) algorithm
is used to reject mismatches. RANSAC is a general algorithm for robustly fitting
models in the presence of many data outliers. Here, the model is a 3 3 fundamental matrix. The final matches/correspondences are shown in Fig. 9.5.
190
BC
S
L
BP
Fig. 9.6 S sample, L line, BC complete boundary, BP partial boundary. Left image
on the second row is a complete one, while right is a partial one
length of the complete boundary. Then several random points around the line are
selected as the samples of the partial boundary. (Figure 9.6 illustrates complete
and partial boundaries.) With these samples, a spline interpolation is applied to
produce the partial boundary. The pixels on one side of the curve are set to 1 or 0
thus producing a partial mask, which is then used to generate a partial shoeprint.
Five partial shoeprint images are generated for each of shoeprint in the dClean
data set. The percentage of the partial shoeprint which remains is varied from
40% to 95%. An illustration of the partial shoeprint creation and one example
are shown in Fig. 9.6.
Rescaled shoeprint data set: dRescale
This data set, termed dRescale, consists of 2500 rescaled prints where each
shoeprint in dClean has been rescaled with five random scale ratios in the range
of 0.350.65. Here, we did not use a scale ratio larger than 1.0, because (i) upsampling always has a similar influence to down-sampling on the scale robustness of an approach; and (ii) the original size of the shoeprint images in dClean is
large enough, so any expansion will bring much trouble in the feature extraction
computation.
Rotated shoeprint data set: dRotate
This data set, called dRotate, is used to test algorithms for rotation invariance
and consists of 2500 rotated prints. Each shoeprint in the base data set has been
rotated with five random orientations in the range of 0 90 . The selection of this
range (rather than 0 360 ) is based on the fact that the algorithms developed in
this thesis are flip invariant both in horizontal and vertical directions.
9.3
Experimental Results
191
192
PSD, one needs to first down-sample an input image, and second take 2D DFT on
the down-sampled image; then the power spectral distribution is computed. Finally,
a masking step is taken to obtain the signature. In the similarity computation, the
9.3
Experimental Results
(a)
193
(b)
(c)
Fig. 9.8 Examples of synthetic scene shoeprint images; the images of (a), (b) and (c) correspond
to (scale + scene), (scene+complex), and (pattern + scene), respectively
194
dNoise
1
0.9
1
0.8
0.7
0.95
0.6
0.9
0
0.5
10
15
0.4
EDH
PTS
0.3
PSD
LIF
RAND
0.2
0.1
0
20
40
60
80
100
percentage
dPartial
1
0.9
0.8
0.7
0.6
0.5
0.4
0.3
EDH
PTS
PSD
LIF
RAND
0.2
0.1
0
20
40
60
percentage
80
100
9.3
Experimental Results
195
dRecale
1
0.9
0.8
0.7
0.6
0.5
0.4
0.3
EDH
PTS
PSD
LIF
data5
0.2
0.1
0
20
40
60
80
100
percentage
dRotate
1
0.9
0.8
0.7
0.6
0.5
0.4
0.3
EDH
PTS
PSD
LIF
RAND
0.2
0.1
0
20
40
60
percentage
80
100
196
dScene
1
0.9
0.8
0.7
0.6
0.5
0.4
0.3
EDH
PTS
PSD
LIF
RAND
0.2
0.1
0
20
40
60
80
100
percentage
dComplexDegrade
1
0.9
0.8
0.7
0.6
0.5
0.4
0.3
EDH
PTS
PSD
LIF
RAND
0.2
0.1
0
20
40
60
80
100
percentage
Fig. 9.9 Performance evaluation of four signatures (EDH, PTS, PSD and LIF) in terms of cumulative matching score on six degraded image data sets. RAND here is the worst case, i.e. the rank
of the images in the reference data set is randomly assigned
9.3
Experimental Results
197
d = 0.0304
d=0.0149
d = 0.0061
d = 0.1116
d = 0.1255
d = 0.1434
d = 0.3347
d = 0.3347
d = 0.0066 d = 0.0100
d = 0.0107
d = 0.2511
d = 0.1434
d = 0.3348
d = 0.0120
Fig. 9.10 Examples of shoeprint image retrieval. In each row, the leftmost image is a noisy
query shoeprint from dComplexDegrade, and the rest of the row shows the top ranked shoeprint
images in dClean. The distance is shown under each retrieved image, and the red squares denote
the corresponding patterns contained in the query images
198
EDH
PTS
PSD
LIF
Signature size
72
120
24,099
19,131
PSD of a query image has to be rotated 30 times, with a 1 step, and the largest
similarity value is considered over the 30 rotated versions. In our experiments, the
step of rotation is removed because it is computationally intensive and also a rotation
range of 30 is not suitable in most practical situations.
Pattern and Topological Spectra (PTS; Su et al. [23]) this method considers the problem of automatic classification of noisy and incomplete shoeprint
images and employs the principle of topological and pattern spectra. A Topological Spectrum for a shoeprint image, based on repeated open operations with
increasing size of structuring elements, giving a distribution of Euler numbers
is computed. The normalised differential of this produces the topological spectrum and a hybrid algorithm, which uses a distance measure based on a combination of both spectra as the feature of a shoeprint image, is proposed and applied
successfully.
The above results suggest that
(i) For the data sets degraded with Gaussian noise, cutting-out, and rescaling, the
signatures of PSD and LIF can achieve almost perfect results. Further, LIF can
achieve similar performance for the data sets degraded by rotation and scene
background addition.
(ii) The performance of EDH and PTS is marginally worse than that of PSD and
LIF for the degradations of Gaussian noise, cutting-out, rescaling and rotation.
However, both methods are efficient, noting that the cost (signature size) of the
two signatures is significantly smaller than for the other two. Further it can be
observed that PTS outperforms EDH in most cases (with the exception on the
rescaled database).
(iii) The signature of LIF works very well for all kinds of degradations. It clearly
outperforms other signatures on the data set with the most complex degradations. However, LIF is more computationally intensive than both EDH, and
PTS. (For instance, it takes about 40 sec, on average, to compute the LIF of
a shoeprint image with the size of 768 280 on our machine Pentium 4
CPU 2.40 GHz, 760 MB of RAM, while it takes less than 1 sec and around
2 seconds for computing the EDH and PTS of an image with the same size,
respectively).
Two further shoeprint matching examples based on local features are given in Fig.
9.11. The synthetic scene images contain degradations of rotation, rescaling, pattern
segmentation and scene addition. It can be seen from Fig. 9.11(a,b) that more than
80 percent of feature matches are correct.
9.4
Summary
199
(a)
(b)
9.4 Summary
This chapter has discussed a local feature detector (Modified HarrisLaplace detector) which employs a scale-adaptive Harris corner detector to determine the local
feature candidates and a Laplace-based automatic scale selection strategy in order
200
to select the final local features. We have further improved the widely used local feature descriptors to be more robust to rotation and complement operations. To assess
the performance of the system, a set of synthetic scene shoeprint images (modelling
real world degradations) were used. A number of experiments on shoeprint image
matching and retrieval were also conducted.
The experimental results have indicated that (i) compared with the Harris
Laplace detector, the Modified HarrisLaplace detector provides more stable local
regions, (ii) the local image descriptors perform significantly better than the global
descriptors on shoeprint image matching and retrieval.
Further issues to be investigated include
to further reduce the dimensions of a local feature descriptor. Even though we
have taken some measures to reduce this, such as applying 36 dimensions instead
of the original 128 dimensions, and limiting the number of detected local features
to under 400;
to develop a fast and accurate matching strategy to deal with a large shoeprint
image database. The current matching based on nearest neighbour and thresholding is fast but not accurate enough, while a more accurate matching strategy
based on RANSAC is computationally intensive;
to develop more advanced local feature detectors and descriptors;
to extend the evaluation of shoeprint image retrieval and matching using local
image features with real scene images.
References
1. H. Bay, T. Tuytelaars, and L. V. Gool, SURF: Speed-Up Robust Features, ECCV06,
pp. 404417, 2006.
2. G. Carneiro, and A. D. Jepson, The distinctiveness, detectability, and robustness of local
image features, CVPR05, vol. 2, pp. 296301, 2005.
3. P. D. deChazal, J. Flynn and R. B. Reilly, Automated processing of shoeprint images based
on the Fourier transform for use in forensic science, IEEE Transactions on Pattern Analysis
and Machine Intelligence, vol. 27, no. 3, pp. 341350, 2005.
4. G. Dorko and C. Schmid, Maximally stable local descriptor for scale selection, ECCV06,
pp. 504516, 2006.
5. R. Gal and D. Cohen-Or, Salient Geometric Features for Partial Shape Matching and Similarity, ACM Transactions on Graphics, vol. 25, no.1, pp. 130150, 2006.
6. V. Gouet and N. Boujemaa, Object-based queries using colour points of interest, IEEE
Workshop on Content-Based Access of Image and Video Libraries (CVPR/CBAIVL) Hawai,
USA, 2001.
7. C. Harris and M. Stephens, A combined corner and edge detector, Alvey Vision Conference,
Manchester, UK, pp. 147151, 1988.
8. T. Kadir, A. Zisserman and M. Brady, An affine invariant salient region detector, ECCV04,
pp. 404416, 2004.
9. L. Ledwich and S. Williams, Reduced SIFT Features for Image Retrieval and Indoor Localisation, Australasian Conference on Robotics and Automation, Canberra, 2004.
10. T. Lindeberg, Feature Detection with Automatic Scale Selection, IJCV, vol. 30, no. 2,
pp. 77116, 1998.
References
201
Index
A
ACE, see Average correlation energy
ACH, see Average correlation height
Acquisition module, 1213
Advanced Correlation Filters (ACFs), in
shoeprints classification, 165,
172174
matching metrics in, 175
OTSDF filter, 176177
outcomes of, 178179
unconstrained OTSDF filter, 177178
See also Phase-Only Correlation (POC)
technique, in shoeprint classification
Alexandres shoemark system, 161
See also Shoemark classification systems
Alpha stable model, in watermark detection,
129130
See also Watermarking technique
Amplitude modulation watermarking method,
for fingerprint images, 123124
See also State-of-the-art
Appearance-based approaches
for feature extraction, 25
of recognition system, 7, 8
Aqueous humor, 52
ATR applications, see Automatic target
recognition applications
Augmented Gabor-face vector, 3233
Automated finger imaging systems (AFIS), 3
Automatic shoeprint classification techniques,
approaches, 165
ACF method in
matching metrics in, 175
OTSDF filter, 176177
outcomes of, 178179
unconstrained OTSDF filter, 177178
POC technique in
experimental outcomes of, 170172
POC function, 166167
203
204
Biometrics (cont.)
pre-processing techniques, 78
template matching, 9
verification (or authentication), 5
watch-list applications, 6
Birkett system, in shoemark feature
classification, 158159
See also Forensic science
1-Bit watermarking, role, 121
Blind watermarking, usage, 122
See also Generic watermarking system
C
Chaotic parameter modulation, 124
Chinese Academy of Sciences Institute of
Automation (CASIA) eye image
database, 65, 72
Chous method, of watermarking schemes,
8792
CMC, see Cumulative matching curve
Combined multiresolution feature extraction
techniques, 7273
Complex scene shoeprint image data set, 191
See also Local image features (LIF)
Composite filters, see Advanced correlation
filters (ACFs), in shoeprints
classification
Contour detection process, 5455
Contrast masking, 86
Cornea, 52
Correct (or Genuine) identification rate (CIR),
5, 23
Correlation-based approaches, in shoeprints
classification, 165
Costas optimal random codebook, 96
Covert systems, 13
CPM, see Chaotic parameter modulation
Cumulative matching curve, 191
D
Data hiding method, for fingerprint
images, 123
See also State-of-the-art
Data storage, 14
Daugmans algorithm, 65
Daugmans Integro-Differential Operator,
5354
Daugmans method, for iris recognition, 7374
Daugmans Rubber sheet model, 64
DCCF, see Distance classifier correlation filter
DComplexDegrade, see Complex scene
shoeprint image data set
DCT, see Discrete Cosine Transform
De Chazals system, 163
Index
See also Shoemark classification systems
Denial of service, 118
DFT, see Discrete Fourier Transform
DGaussian, see Gaussian noise shoeprint
data set
Digital watermarking encoder and
decoder, 120
See also Generic watermarking system
Directional filter bank (DFB), 22, 3337
Directional images, 3637
Discrete circular active contour (DCAC),
5455
Discrete Cosine Transform (DCT), 79, 122
Discrete Fourier Transform (DFT), 79, 122
Discrete Wavelet Transform (DWT), 79, 122
Distance Classifier Correlation Filter, 172
DOS, see Denial of service
DPartial, see Partial shoeprint data set
DRescale, see Rescaled shoeprint data set
DRotate, see Rotated shoeprint data set
DScene, see Scene shoeprint data set
DWT, see Discrete Wavelet Transform
DWT coefficients, modelling, 132135
E
Edge directional histograms, 165, 191
Edge map detection, 5762
for iris boundaries, 62
EDH, see Edge directional histograms
Eigenfaces method, 21
Electrostatic shoemark lifting, 153154
See also Forensic science
Equal error rate (ERR), 18
Euclidean distance decoder, 96
Even Symmetry Gabor filters, 52
Exclusive OR (XOR) operation, 71
Eyelashes, isolation, 63
Eyelid regions, isolation, 63
F
Face detection, 23, 25
Face localisation, 23
Face recognition basics
application of Watch-List task, 23
with FERET database, 37, 4345
independent component analysis (ICA),
2728, 3941
linear discriminant analysis (LDA),
2829, 41
principal component analysis (PCA),
2627, 3839
recognition test, 2223
steps
feature extraction, 2526
Index
localisation and detection, 2324
matching, 26
normalisation and pre-processing,
2425
subspace discriminant analysis (SDA),
2931, 4143
using filter banks
directional filter bank, 3337
Gabor filter bank, 3133
verification test, 22
with YALE Face database, 37
Face recognition vendor test (FRVT), 22
Failure to capture rate (FCR), 18
Failure to enrol (FTE), 18
False acceptance rate (FAR), 4, 5, 18, 22, 118
False alarm rate, 6, 18
False identification rate (FIR), 5, 23
False rejection rate (FRR), 5, 18, 22
FAR, see False acceptance rate (FAR)
Fast Fourier Transform, 165
FastICA method, 28
Feature-based approaches
for feature extraction, 25
of recognition system, 8
Feature extraction process, 1516
categorisation, 26
Feature extractor algorithm, 16
Feature sets, of biometric systems, 1415
FERET database, 37, 4345
FFT, see Fast Fourier Transform
Fingerprint authentication, signature-based
watermarking technique for, 124
See also State-of-the-art
Fingerprint biometrics, 34
Fingerprint data protection, watermarking,
117119, 130131
DWT coefficients, modelling of, 132135
generic watermarking system, 119123
optimum watermaking detection, 124127
outcome of, 135138
state-of-the-art, 123124
statistical data modelling in, 127128
alpha stable model, 129130
laplacian and GGD models, 128129
Fingerprint images
DWT coefficients of, 132135
generic watermarking system for, 119123
verification, state-of-the-art for, 123124
Fingerprint recognition, 3
Fingerprint Verification Competition, 131
Fisherface algorithm, 28
Fisher Linear Discriminant (FLD), 28
Forensic science, shoemark collections
205
from crime scenes, 149150
casts making of shoemarks, 152153
computerised system, data entry
into, 157
electrostatic shoemark lifting, 153154
perfect shoemark scan in, 154155
photography of shoemarks, 151152
procedures of, 150
processing of shoemarks, 155156
recovery of shoemarks from snow, 154
shoemarks, gelatine lifting of, 153
suspects shoe, cast making in, 155
transfer/contact prints, 150151
limitations in, 143144
in applications of, 144146
automating shoemark classification,
146147
importable classification schema,
148149
inconsistent classification, 147148
shoemark processing time
restrictions, 149
shoemark recognition, methods in,
157158
accidental characteristics, classification
on, 159160
feature-based classification, 158159
Four-band DFB, 35
Fourier transform-based matchers, 16
Fourier transforms, 55
Frequency Domain Matchers, 16
FVC, see Fingerprint Verification Competition
G
Gabor filter bank (GFB), 21, 3133,
6770, 71
Gabor filter dictionary design, 32
Gabor wavelets function, 31
Gabor wavelet transformation, 32
Gait recognition, 4
Gaussian noise shoeprint data set, 189
See also Local image features (LIF)
Gaussian smoothing function, 54
Gelatine lifting, of shoemarks, 153
See also Forensic science
Generalised Gaussian distribution, 119
Generic biometric-based system, attacks
in, 118
Generic watermarking system, for fingerprint
images, 119123
Genuine individual type decision, 17
Geometrical alignment, of image, 78
GGD, see Generalised Gaussian distribution
206
GGD models, in watermark detection, 128129
See also Watermarking technique
Global Feature Extractors, 15
GLOH, see Gradient location and orientation
histogram
Gradient location and orientation
histogram, 182
H
Hamming distance matching algorithm, 51, 71
Hand geometry recognition, 4
HAS, see Human auditory system
Hough transform, 54, 62, 65
Human auditory system, 121
Human visual system, 120
HVS, see Human visual system
Hybrid approaches
for feature extraction, 25
of recognition system, 9
I
Image authentication, phasemarkTM
watermarking technique for, 124
Image enhancement, 15
Image formatting, 15
Image localisation methods, 67
Image registration and alignment, 15
Image segmentation, 15
Image size normalisation, 8
Imperceptibility analysis, in watermarking,
135136
See also Watermarking technique
Impostor type of decision, 17
Independent component analysis (ICA), 21,
2728, 3941
Integro-differential operator, 53
Iris recognition, 4
approach for iris segmentation
edge detector usingwavelets, 5557
multiscale method, 5765
localisation, 5255
research, 5152
texture analysis and feature extraction,
6771
J
Just-noticeable-distortion (JND) model, for
watermark embedding, 8687
K
KarhunenLoeve (KL) transformation, 27
Keystroke recognition, 4
KullbackLeibler (KL) divergence, role, 132
Kurtosis maximisation process, 28
Index
L
Laplace of Gaussians, 168, 183
Laplacian pyramid, 51
Learning neural network (LVQ), 51
Likelihood-ratio test, usage of, 125
See also Watermarking technique
Linear discriminant analysis (LDA), 21,
2829, 41
Local image features (LIF), in shoeprint image
detection,
181182, 191
experimental outcomes
degraded data sets, 189191
EDH and PSD techniques,
191192, 198
PTS method, 198199
image similarity measurement, 188189
local photometric descriptors, 186188
modified HarrisLaplace Detector,
182186
Local photometric descriptors, in shoeprint
image retrieval, 186188
Local singularities, 55
LoG, see Laplace of Gaussians
LoG-based automatic scale selection, usage,
183185
Loos model, of watermarking schemes, 9394
Luminance masking, 86
M
MACE, see Minimum Average Correlation
Energy filter
MACH filter, see Maximum Average
Correlation Height filter
Masking, 8
Matcher accuracy, 18
Matching process
in face recognition system, 26
of iris, 7172
Maximally stable extremal region, 182
Maximum average correlation height filter, 172
Maximum-likelihood (ML) scheme, 119
MHL detector, see Modified HarrisLaplace
detector
Minimum average correlation energy filter, 172
Minimum variance synthetic discriminant
function filter, 177
Modified HarrisLaplace detector, 181186
See also Local image features (LIF)
Modified phase-only correlation function, 168
Modulation transfer function (MTF), 88
MPOC funtion, see Modified phase-only
correlation function
MSER, see Maximally stable extremal region
Index
Multi-bit watermarking, role, 121
See also Generic watermarking system
Multichannel Gabor filters, 52
Multiscale edge detection, 5657
MVSDF filter, see Minimum variance synthetic
discriminant function filter
N
NDFT, see Non-uniform Discrete Fourier
Transform
Nearest feature line method, 52
Nearest neighbour (NN) method, 30
Clustering approach, 3031
Non-blind watermarking, usage of, 121
See also Generic watermarking system
Non-redundant complex wavelet transform
(NRCWT), 8386
Non-uniform Discrete Fourier Transform, 124
Normalisation process, of iris, 6365
O
ONV, see Output noise variance
Optimal Trade-off Synthetic Discriminant
Function filter (OTSDF filter), 172,
176177
Output noise variance, 176
Overt systems, 13
P
Partial shoeprint data set, 189190
See also Local image features (LIF)
Pattern and topological spectra, 191, 198
Pattern recognition-based matching, 16
PCE, see Peak-to-correlation energy
PDCCF, see Polynomial DCCF
Peak Signal-to-Noise ratio, 135
Peak-to-correlation energy, 174
Peak-to-Sidelobe ratio, 174
Perfect shoemark scan, 154155
See also Forensic science
PhasemarkTM watermarking technique, for
image authentication, 124
Phase-only correlation (POC) technique, in
shoeprint classification, 165
experimental outcomes of, 170172
POC function, 166167
shoeprint matching algorithm in, 169170
spectral weighting functions in, 168169
translation and brightness properties of
POC function, 168
See also Automatic shoeprint classification
techniques
POC function, defined, 166167
207
See also Automatic shoeprint classification
techniques
Polar transformation, of iris, 6365
Polynomial DCCF, 172
Power spectral density, 163, 165
Power spectral distribution, 191192, 198
Principal component analysis (PCA), 21,
2627, 3839
PSD, see Power spectral density; Power
spectral distribution
PSNR, see Peak Signal-to-Noise ratio
PSR, see Peak-to-Sidelobe ratio
PTS, see Pattern and topological spectra
Pupil, 52
Q
QIM, see Quantisation index modulation
QuantileQuantile (QQ) plot, role, 132
Quantisation index modulation (or QIM),
9697, 124
Quincunx downsampling, 35
R
Radial resolution, 64
Random Sample consensus algorithm, 189
RANSAC, see Random sample consensus
algorithm
Raw images, 14
REBEZO system, 161
See also Shoemark classification systems
Receiver operating characteristics, 136
Recognition tests, 5
Region of interest (ROI), 15
Relative entropy, definition, 132
Reply attack, defined, 117
See also Fingerprint data protection
Rescaled shoeprint data set, 190
See also Local image features (LIF)
ROC, see Receiver operating characteristics
Rotated shoeprint data set, 190
S
SAWGN attack, 107
Scale-adaptive Harris corner, in synthetic
image detection, 183184
Scale invariant feature transform, 181
Scene of Crimes Officer, 150
Scene shoeprint data set, 191
See also Local image features (LIF)
Sensor module, 13
SHOE-FIT system, 160
Shoemark classification systems
Alexandres system, 161
De Chazals system, 163
208
Shoemark classification systems (cont.)
REBEZO, 161
c 160161
SHOE,
SHOE-FIT, 160
SICAR, 162
SmART, 162163
TREADMARKTM , 162
Zhangs system, 163
Shoemark forensic evidence, limitations,
143144
in applications of, 144146
automating shoemark classification,
146147
importable classification schema, 148149
inconsistent classification, 147148
shoemark processing time restrictions, 149
Shoemark recognition, methods, 157158
accidental characteristics, classification on,
159160
feature-based classification, 158159
Shoemarks collection in, crime scenes,
149150
casts making of shoemarks, 152153
computerised system, data entry into, 157
electrostatic shoemark lifting, 153154
perfect shoemark scan in, 154155
photography of shoemarks, 151152
procedures of, 150
processing of shoemarks, 155156
recovery of shoemarks from snow, 154
shoemarks, gelatine lifting of, 153
suspects shoe, cast making in, 155
transfer/contact prints, 150151
Shoeprint image detection, local image
features, 181
experimental outcomes of, 189199
image similarity measurement, 188189
local photometric descriptors, 186188
modified Harris-Laplace detector, 182186
Shoeprint matching algorithm, in POC
technique, 169170
See also Automatic shoeprint classification
techniques
SICAR system, 162
See also Shoemark classification systems
SIFT, see Scale invariant feature transform
Signature-based watermarking technique, for
fingerprint authentication, 124
See also State-of-the-art
Signature recognition, 4
SmART system, 162163
See also Shoemark classification systems
SOCO, see Scene of Crimes Officer
Index
Spatial domain watermarking method, for
fingerprint images, 123
See also State-of-the-art
Spectral weighting functions, in POC
technique, 168169
See also Automatic shoeprint classification
techniques
State-of-the-art, for fingerprint image
verification, 123124
Subclass discriminant analysis (SDA), 21
Subspace discriminant analysis (SDA), 2931,
4143
T
Tans method, 73
Technology evaluation, 17
Template matching methods, of recognition
system, 7
Test data set, 17
Time Domain Matchers, 16
TREADMARKTM system, 162
See also Shoemark classification systems
Trojan horse programme, 117
U
Unconstrained OTSDF filter, 172, 177178
Unwrapping of iris, 64
UOTSDF filter, see Unconstrained OTSDF
filter
US Department of Defense Counter- drug
Technology Development
Program, 37
V
Valley-seeking algorithm of Koontz and
Fukunaga, 30
Verification tests, 5
Voice recognition, 4
W
Watch-list task, 6
Watermarking schemes, 79
Chous method, 8792
as communication with side information,
9498
decoding process, 100102
distortion compensated spread spectrum
(DC-SS), 110111
document-to-watermark-ratios (DWR), 110
encoding process, 99100
as game, 105109
general data hiding capacities in bits,
108109
Hybrid model, 94
Index
just-noticeable-distortion (JND) model for
watermark embedding, 8687
Loos model, 9394
Mean Squared Error (MSE) distortion
measure, 101
parallel Gaussian channels, 102105
proposed algorithm, 98100
Quantisation Index Modulation (or QIM),
9697
SAWGN attack, 107
spread transform, 9798
theoretical capacity limits of algorithms,
100113
total spread transform data hiding
capacities in bits, 108109
watermark-to-noise ratio (WNR)
advantage, 98
Watermarking technique, in fingerprint data
protection, 117119, 130131
DWT coefficients, modelling of, 132135
generic watermarking system, 119123
optimum watermaking detection, 124127
209
outcome of, 135138
state-of-the-art, 123124
statistical data modelling in, 127128
alpha stable model, 129130
laplacian and GGD models, 128129
Wavelet maxima components, 68, 70
Wavelets-based matchers, 16
Wavelet Scalar Quantisation, 123
Wavelet transforms, 55
dual tree complex, 8083
non-redundant complex, 8386
Wildes methods, 6566
Within-class scatter matrix, 28
WSQ, see Wavelet Scalar Quantisation
Y
YALE Face database, 37
Z
Zhangs system, 163
See also Shoemark classification systems
UMTS
The Physical Layer of the Universal Mobile
Telecommunications System
A. Springer and R. Weigel
ISBN 3-540-42162-9