
BSI 251 Biometrics and Applications

FINGERPRINT RECOGNITION
OBJECTIVE

Fingerprint Recognition using Image Enhancement.

INTRODUCTION

Fingerprint recognition or fingerprint authentication refers to the automated method of
verifying a match between two human fingerprints. Fingerprints are one of many forms of
biometrics used to identify an individual and verify their identity. Because of their uniqueness
and consistency over time, fingerprints have been used for identification for over a century,
and more recently the process has been automated, i.e. turned into a biometric, thanks to
advances in computing capability. The skin on human fingertips contains ridges and valleys
which together form distinctive patterns. These patterns are fully developed before birth and
remain permanent throughout a person's lifetime. Prints of these patterns are called
fingerprints. Injuries such as cuts, burns and bruises can temporarily degrade the quality of a
fingerprint, but when fully healed the patterns are restored. Various studies have observed
that no two persons have the same fingerprints; hence they are unique to every individual.

Figure: Two Minutia Features



The analysis of fingerprints for matching purposes generally requires the comparison
of several features of the print pattern. These include patterns, which are aggregate
characteristics of ridges, and minutia points, which are unique features found within the
patterns. It is also necessary to know the structure and properties of human skin in order to
successfully employ some of the imaging technologies.

Patterns
The three basic patterns of fingerprint ridges are the arch, the loop, and the whorl:

 Arch: The ridges enter from one side of the finger, rise in the center forming an arc,
and then exit the other side of the finger.
 Loop: The ridges enter from one side of a finger, form a curve, and then exit on that
same side.
 Whorl: Ridges form circularly around a central point on the finger.

Fingerprint processing
Fingerprint processing has three primary functions: enrollment, searching and
verification. Among these functions, enrollment, which captures the fingerprint image from
the sensor, plays an important role, because the way a finger is placed on the scanner surface
can affect the results of the subsequent searching and verification. For the verification
function, there are several techniques to match fingerprints, such as correlation-based
matching, minutiae-based matching and ridge feature-based matching. The minutiae-based
algorithm is the most popular due to its efficiency and accuracy.

Minutiae features
The major minutia features of fingerprint ridges are ridge ending, bifurcation, and
short ridge (or dot). The ridge ending is the point at which a ridge terminates. Bifurcations are
points at which a single ridge splits into two ridges. Short ridges (or dots) are ridges which
are significantly shorter than the average ridge length on the fingerprint. Minutiae and
patterns are very important in the analysis of fingerprints since no two fingers have been
shown to be identical.

Matching algorithms are used to compare previously stored templates of fingerprints


against candidate fingerprints for authentication purposes. In order to do this, either the
original image must be directly compared with the candidate image or certain features must
be compared.

Pre-processing enhances the quality of an image by filtering out unnecessary noise.
The minutiae-based algorithm works effectively only on an 8-bit grayscale fingerprint
image, because the 8-bit grayscale image is the basis for converting the image to a 1-bit
image with value 0 for ridges and value 1 for furrows. As a result, the ridges are rendered in
black while the furrows are rendered in white. This process removes some of the noise in the
image and helps enhance edge detection. Furthermore, two more steps improve the quality of
the input image: minutiae extraction and false minutiae removal. Minutiae extraction is
carried out by applying a ridge thinning algorithm, which removes redundant pixels of
ridges. The thinned ridges of the fingerprint image are then marked with a unique ID so that
further operations can be conducted. After the minutiae extraction step, false minutiae
removal is also necessary: a lack of ink or cross-links among the ridges can cause false
minutiae that lead to inaccuracy in the fingerprint recognition process.

Pattern-based algorithms compare the basic fingerprint patterns (arch, whorl, and
loop) between a previously stored template and a candidate fingerprint. This requires that the
images can be aligned in the same orientation. To do this, the algorithm finds a central point
in the fingerprint image and centers on that. In a pattern-based algorithm, the template
contains the type, size, and orientation of patterns within the aligned fingerprint image. The
candidate fingerprint image is graphically compared with the template to determine the
degree to which they match.

METHODOLOGY

Histogram Equalization
Histogram equalization is a technique of improving the global contrast of an image by
adjusting the intensity distribution on a histogram.
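
As a quick illustrative sketch (the file name 'fingerprint.tif' and the choice of 512 gray
levels are assumptions; histeq is from the Image Processing Toolbox):

% Equalize the histogram of a grayscale fingerprint image
img = imread('fingerprint.tif');   % placeholder file name
eq  = histeq(img, 512);            % map intensities onto 512 discrete levels
figure, subplot(1,2,1), imshow(img), title('Original');
subplot(1,2,2), imshow(eq), title('After Histogram Equalization');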

Enhancement by Fourier transform

The image enhancement by FFT is done by the following formula
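
The formula image is not reproduced in this copy; reconstructed from the fftenhance
routine listed later, each 32x32 block of the image is enhanced as

g(x, y) = \mathcal{F}^{-1}\{ F(u, v) \cdot |F(u, v)|^{k} \}

where F(u, v) is the 2-D FFT of the block and k (the parameter f in the code) is a small
positive exponent that amplifies the dominant ridge frequencies; each enhanced block is
then normalized and scaled back to the 0-255 range.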


Region of Interest

 The direction of each block of the fingerprint image, of size WxW (W is 16 pixels by
default), is estimated.

 The gradient values along the x-direction (gx) and y-direction (gy) for each pixel of the
block are calculated.

 For each block, the following formula is used to get the least-squares approximation of
the block direction (reproduced below).
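
The referenced formula image is not reproduced in this copy; the standard least-squares
block orientation estimate (as in Hong, Wan and Jain's enhancement method, which this
step appears to follow) is

\theta = \frac{1}{2} \tan^{-1}\left( \frac{\sum_{W \times W} 2\, g_x g_y}{\sum_{W \times W} (g_x^2 - g_y^2)} \right)

where the sums run over all pixels of the WxW block.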

Ridge Thinning

Ridge thinning eliminates the redundant pixels of ridges until the ridges are just one
pixel wide.
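
A minimal sketch of this step (bwmorph and imbinarize are from the Image Processing
Toolbox; the variable img, the enhanced grayscale image, is assumed):

% Thin the binarized ridges until they are exactly one pixel wide
bw      = ~imbinarize(img);          % ridges are dark, so complement after binarizing
thinned = bwmorph(bw, 'thin', Inf);  % Inf: repeat thinning until the image stops changing
figure, imshow(thinned); title('Thinned ridge map');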

Minutia Matching

Here we take two different sets of fingerprints:

1. Two different angles of the same fingerprint.

2. Fingerprints of two different fingers.

Using the match score we decide whether the two fingerprints are the same or not; one
possible form of this score is sketched below.
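
As an illustrative convention only (the score computation in the match_end listing below is
truncated, so this exact formula is an assumption), the match score can be taken as

percent\_match = 100 \times \frac{N_{matched}}{\max(N_1, N_2)}

where N_1 and N_2 are the minutiae counts of the two prints and N_matched is the number of
minutia pairs that agree in position and direction; two prints are declared the same when the
score exceeds a chosen threshold.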


Applications

 Voter registration and identification.


 Border control via passport verification by using biological parameters.
 Population census by using biometrics.
 Driver’s license and professional ID card verification with biometric identifiers.

MATLAB code for Implementation

clc;
clear all;
close all;
%%Load image%%
function image1=loadimage
[imagefile1, pathname] = uigetfile('*.bmp;*.BMP;*.tif;*.TIF;*.jpg', 'Open a fingerprint image');
if imagefile1 ~= 0
cd(pathname);
image1 = imread(char(imagefile1));  % 'readimage' in the original; imread is the standard call
image1 = 255 - double(image1);      % invert gray levels
end
end
%%Histogram Equalization%%
Hist_eq_img = histeq(uint8(image1), 512);  % image1: the image loaded above ('t4' in the original)
figure;
imshow(Hist_eq_img); %Display image
title('Image after Histogram Equalisation');
%%Enhancement using FFT%%
function [final]=fftenhance(image,f)
I = 255-double(image);
[w,h] = size(I);
%out = I;
w1=floor(w/32)*32;
h1=floor(h/32)*32;
inner = zeros(w1,h1);

for i = 1:32:w1
for j = 1:32:h1
a = i + 31;
b = j + 31;
F = fft2( I(i:a, j:b) );          % 2-D FFT of the 32x32 block
factor = abs(F).^f;               % magnitude raised to the power f
block = abs(ifft2(F.*factor));    % enhanced block
larv = max(block(:));
if larv == 0                      % avoid division by zero
larv = 1;
end;
block = block./larv;              % normalize the block to [0,1]
inner(i:a, j:b) = block;
end;
end;
final = inner*255;
final = histeq(uint8(final));
%%Region of Interest Detection%%
function [roiImg,roiBound,roiArea] = drawROI(in,inBound,inArea,noShow)
[iw,ih]=size(in);
tmplate = zeros(iw,ih);
[w,h] = size(inArea);
tmp=zeros(iw,ih);
left = 1;
right = h;
upper = 1;
bottom = w;
le2ri = sum(inBound);
roiColumn = find(le2ri>0);
left = min(roiColumn);
right = max(roiColumn);
tr_bound = inBound';
up2dw=sum(tr_bound);
roiRow = find(up2dw>0);

upper = min(roiRow);
bottom = max(roiRow);
for i = upper:1:bottom
for j = left:1:right
if inBound(i,j) == 1
tmplate(16*i-15:16*i, 16*j-15:16*j) = 200;   % boundary blocks
tmp(16*i-15:16*i, 16*j-15:16*j) = 1;
elseif inArea(i,j) == 1 && inBound(i,j) ~= 1
tmplate(16*i-15:16*i, 16*j-15:16*j) = 100;   % interior blocks
tmp(16*i-15:16*i, 16*j-15:16*j) = 1;
end;
end;
end;
in=in.*tmp;
roiImg = in(16*upper-15:16*bottom,16*left-15:16*right);
roiBound = inBound(upper:bottom,left:right);
roiArea = inArea(upper:bottom,left:right);
%inner area
roiArea = im2double(roiArea) - im2double(roiBound);
if nargin == 3
colormap(gray);
imagesc(tmplate);
end;
%%Ridge Thinning and Matching%%
function [percent_match]=match_end(template1,template2,edgeWidth,noShow)
if or(edgeWidth == 0,isempty(edgeWidth))
edgeWidth=10;
end;
if or(isempty(template1), isempty(template2))
percent_match = -1;
else
length1 = size(template1,1);
minu1 = template1(length1,3);
real_end1 = template1(1:minu1,:);

ridgeMap1= template1(minu1+1:length1,:);
length2 = size(template2,1);
minu2 = template2(length2,3);
real_end2 = template2(1:minu2,:);
ridgeMap2= template2(minu2+1:length2,:);
ridgeNum1 = minu1;
minuNum1 = minu1;
ridgeNum2 = minu2;
minuNum2 = minu2;
max_percent=zeros(1,3);
for k1 = 1:minuNum1
newXY1 = MinuOriginTransRidge(real_end1, k1, ridgeMap1);
for k2 = 1:minuNum2
% (the lines computing eachPairP and temp, the ridge-similarity terms, are
% missing from this listing)
if temp > 0
ridgeSimCoef = sum(eachPairP)/( temp^.5 );
end;
if ridgeSimCoef > 0.8
fullXY1 = MinuOrigin_TransAll(real_end1, k1);
fullXY2 = MinuOrigin_TransAll(real_end2, k2);
minuN1 = size(fullXY1, 2);
minuN2 = size(fullXY2, 2);
xyrange = edgeWidth;
num_match = 0;
for i = 1:minuN1
for j = 1:minuN2
if (abs(fullXY1(1,i)-fullXY2(1,j)) < xyrange && abs(fullXY1(2,i)-fullXY2(2,j)) < xyrange)
angle = abs( fullXY1(3,i) - fullXY2(3,j) );
if or(angle < pi/3, abs(angle-pi) < pi/6)   % direction consistency check
num_match = num_match + 1;
break;
end;
end;
end;
end;
% (the remaining lines of match_end, which compute percent_match, are not
% included in this listing)


RESULTS
1. INPUT IMAGE

2. HISTOGRAM EQUALIZATION

3. ENHANCEMENT USING FFT


4. REGION OF INTEREST

5. RIDGE THINNING

6. MINUTIA MATCHING


FINGERPRINT RECOGNITION USING GABOR FILTER

In image processing, a Gabor filter, named after Dennis Gabor, is a linear filter used
for texture analysis, which means that it essentially analyzes whether there is any specific
frequency content in the image in specific directions in a localized region around the point or
region of analysis. Frequency and orientation representations of Gabor filters are claimed by
many contemporary vision scientists to be similar to those of the human visual system,
though there is no empirical evidence and no functional rationale to support the idea. They
have been found to be particularly appropriate for texture representation and discrimination.
In the spatial domain, a 2D Gabor filter is a Gaussian kernel function modulated by
a sinusoidal plane wave.

A Gabor filter is a linear filter whose impulse response is defined by a harmonic
function multiplied by a Gaussian function. Because of the multiplication-convolution
property (convolution theorem), the Fourier transform of a Gabor filter's impulse response is
the convolution of the Fourier transform of the harmonic function and the Fourier transform
of the Gaussian function. Gabor filters are also used for edge detection in image processing.
Since all filters can be generated from one parent wavelet by dilation and rotation, Gabor
filters are self-similar. Features of the fingerprint are extracted with eight different
orientations of the Gabor filter and combined, where f represents the ridge frequency and the
choice of δx² and δy² determines the shape of the filter envelope as well as the trade-off
between enhancement and spurious artifacts. This is by far the most popular approach for
fingerprint enhancement.
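
The filter equation referenced above is not reproduced in this copy; the even-symmetric
Gabor filter commonly used for fingerprint enhancement, with the parameters named above,
is

G(x, y; \theta, f) = \exp\left\{ -\frac{1}{2}\left[ \frac{x_\theta^2}{\delta_x^2} + \frac{y_\theta^2}{\delta_y^2} \right] \right\} \cos(2\pi f x_\theta)

where x_\theta = x\cos\theta + y\sin\theta and y_\theta = -x\sin\theta + y\cos\theta. A minimal
MATLAB sketch that builds one such filter and applies it (all parameter values here are
illustrative assumptions):

% Build an even-symmetric Gabor kernel for one of the eight orientations
theta = pi/4;            % filter orientation (assumed)
f     = 0.1;             % ridge frequency in cycles/pixel (assumed)
dx2   = 16; dy2 = 16;    % envelope variances delta_x^2 and delta_y^2 (assumed)
[x, y] = meshgrid(-8:8, -8:8);
xt =  x*cos(theta) + y*sin(theta);
yt = -x*sin(theta) + y*cos(theta);
G  = exp(-0.5*(xt.^2/dx2 + yt.^2/dy2)) .* cos(2*pi*f*xt);
enhanced = filter2(G, double(img));   % img: input fingerprint image (assumed)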


RESULTS
1. INPUT IMAGE

2. ENHANCED IMAGE


3. GABOR FILTER

4. ORIENTATION


5. OUTPUT


FACE RECOGNITION USING LDA

OBJECTIVE
The objective of LDA is to find the optimal projection such that the ratio of the
determinants of the between-class and within-class scatter matrices of the projected samples
reaches its maximum.

INTRODUCTION
Image recognition has become a popular topic among researchers because of its
broad usage in many applications such as digital cameras, surveillance cameras, image
editing software, Facebook and many more. Facebook, for instance, implements facial
recognition technology that allows users to semi-automate the photo-tagging process. The
face is the most visible part of human anatomy and serves as the first distinguishing factor of
a human being; it helps a person to distinguish one individual from another. Every individual
has his own uniqueness, and the face could be one of the most transparent and unique
features of a human being. Face recognition involves comparing an image with a database of
stored faces in order to identify the individual in that input image. The image is first analyzed
and faces are detected before they can be recognized. While this process may be a trivial task
for the human brain, it has proved extremely difficult for artificial technology to imitate. It is
commonly used in applications such as human-machine interfaces and automatic access
control systems. A facial recognition system is a computer application for automatically
identifying a person from a digital image or a video frame. One way to achieve this is by
comparing selected facial features from the image to a facial database. It is typically used in
security systems and can be compared to other biometrics such as fingerprint or human iris
recognition. Currently, developers have come up with designs that are capable of extracting
and picking out faces from a crowd and comparing them to an image database. The software
has to know what a basic human face looks like in order to work accordingly; thus,
developers design these programs (by storing commands) to pinpoint a face and measure its
features. Facial recognition software falls into a larger group of technologies known as
biometrics. Biometrics uses biological information to verify identity; the basic idea is that
our body contains unique properties that can be used to distinguish us from other persons.
Face recognition has a number of advantages over other biometrics. Firstly, it is non-
intrusive: while many biometrics require the subject's co-operation and awareness in order to
perform identification, such as looking into an eye scanner or placing a hand on a fingerprint
reader, face recognition can be performed even without the subject's knowledge. Secondly,
the biometric data used to perform recognition is in a format that is readable and understood
by humans.

Linear Discriminant Analysis

Linear discriminant analysis is a classical technique in pattern recognition, where it is
used to find a linear combination of features which characterize or separate two or more
classes of objects or events. The resulting combination may be used as a linear classifier or,
more commonly, for dimensionality reduction before classification. In computerized face
recognition, each face is represented by a large number of pixel values. Linear discriminant
analysis is primarily used here to reduce the number of features to a more manageable
number before classification. Unlike PCA, which processes all data without distinguishing
the classes and in which the scatter is maximized along orthogonal directions, the LDA
method uses a supervised approach, distinguishing the classes across the whole observed
data. In practice, its aim is to project the data onto optimal vectors that minimize the scatter
within the same class and maximize it between different classes. In the figure below, we
notice that the projection of the two classes on the horizontal axis (i.e. x) leads to an
important overlap. The same scenario is observed by projecting the data on the vertical axis
(i.e. y). When projecting the data on an optimal axis calculated by the LDA, we find that
1. There is no overlapping of the data of the two classes.
2. The scatter within each class is reduced.

The linear combinations obtained using Fisher's linear discriminant are called Fisher faces,
while those obtained using the related principal component analysis are called eigenfaces.
LDA doesn't change the location of the data but only tries to provide more class separability
and draw a decision region between the given classes.
Each of the new dimensions is a linear combination of pixel values, which form a
template. Data sets can be transformed and test vectors can be classified in the transformed
space by two different approaches.


Figure: Separation of class using LDA. (a) The two classes overlap once projected on the x
and y axes. (b) Optimal projection using LDA allowing class separation.
1. Class-dependent transformation: This type of approach involves maximizing the ratio
of between class variance to within class variance. The main objective is to maximize this
ratio so that adequate class separability is obtained. The class-specific type approach
involves using two optimizing criteria for transforming the data sets independently.
2. Class-independent transformation: This approach involves maximizing the ratio of
overall variance to within class variance. This approach uses only one optimizing
criterion to transform the data sets and hence all data points irrespective of their class
identity are transformed using this transform. In this type of LDA, each class is
considered as a separate class against all other classes.

Algorithm
Step 1: Compute the d-dimensional mean vectors for each class.
Step 2: Compute the scatter matrices: the within-class and the between-class scatter matrix.
2.1 The within-class scatter matrix S_W is computed by the following equation:

S_W = \sum_{j=1}^{c} \sum_{i=1}^{N_j} (\Gamma_{ij} - \mu_j)(\Gamma_{ij} - \mu_j)^T

where \Gamma_{ij} is the i-th sample of class j, \mu_j is the mean of class j, c is the number of
classes, and N_j is the number of samples in class j.
2.2 The between-class scatter matrix S_B is computed by the following equation:

S_B = \sum_{j=1}^{c} (\mu_j - \mu)(\mu_j - \mu)^T


where \mu represents the mean of all classes.

Step 3: Solve the generalized eigenvalue problem for the matrix S_W^{-1} S_B to obtain the
linear discriminants.
Step 4: Find the optimal vectors W_i that maximize the between-class scatter and minimize
the within-class scatter:

W_i = \arg\max_{w} \frac{w^T S_B w}{w^T S_W w}

The vectors obtained here are also called Fisher faces.
Step 5: Find the generalized eigenvalues \lambda_i from

S_B w_i = \lambda_i S_W w_i
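
A small self-contained MATLAB sketch of Steps 1-5 on toy 2-D data (the data values are
invented purely for illustration):

% Two classes of 2-D samples (rows = samples), illustrative values only
X1 = [4 2; 2 4; 2 3; 3 6; 4 4];
X2 = [9 10; 6 8; 9 5; 8 7; 10 8];
mu1 = mean(X1)'; mu2 = mean(X2)';    % Step 1: class mean vectors
mu  = (mu1 + mu2)/2;                 % overall mean (equal class sizes)
Sw  = (X1 - mu1')'*(X1 - mu1') + (X2 - mu2')'*(X2 - mu2');   % Step 2.1
Sb  = (mu1 - mu)*(mu1 - mu)' + (mu2 - mu)*(mu2 - mu)';       % Step 2.2
[V, D] = eig(Sw \ Sb);               % Steps 3 and 5: eigenproblem of inv(Sw)*Sb
[~, k] = max(diag(D));               % Step 4: eigenvector with the largest eigenvalue
w  = V(:, k);                        % optimal projection (Fisher direction)
y1 = X1*w; y2 = X2*w;                % projected 1-D data, now well separated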

Representation of faces

Figure: Representation of faces


The testing phase of LDA

Figure: Block diagram of testing phase of LDA

IMPLEMENTATION
MATLAB CODE
clc;
clear all;
close all;
TrainDatabasePath='F:\final';
%class1
T = [];
for i = 1 : 10  % 10 images taken as input
str = int2str(i);
str = strcat('\',str,'.jpg');
str = strcat(TrainDatabasePath,str);
img = imread(str); %read 10 input images from specified folder
img = rgb2gray(img);
[irow, icol] = size(img);
temp = reshape(img',irow*icol,1); %create column vectors (Tou)
T = [T temp];
end
m = mean(T,2);

Train_Number = size(T,2); %2-D size


A = [];
for i = 1 : Train_Number
temp = double(T(:,i)) - m; %Phi
A = [A temp]; %Observation Matrix
end
%class2
TrainDatabasePath1='F:\final';
T1 = [];
for i1 = 1 : 10 %10 images taken as input
str = int2str(i1);
str = strcat('\',str,'.jpg');
str = strcat(TrainDatabasePath1,str);
img1 = imread(str); %read 10 input images from specified folder
img1 = rgb2gray(img1);
[irow, icol] = size(img1);
temp1 = reshape(img1',irow*icol,1); %create column vectors (Tou)
T1 = [T1 temp1];
end
m1 = mean(T1,2);
Train_Number1 = size(T1,2); %2-D size
A1 = [];
for i = 1 : Train_Number1
temp1 = double(T1(:,i)) - m1; %Phi
A1 = [A1 temp1]; %Observation Matrix
end
% Number of observations of each class
n1 = size(T,2)
n2 = size(T1,2)
N = n1 + n2
% Average of the means of all classes
mu = ((n1/N)*m'*m1)
% Calculate the within-class scatter (SW)
s1 = A'*A;
s2 = A1'*A1;
sw = s1 + s2

invsw=inv(sw)
sb1=n1*(m-mu)'*(m-mu)
sb2=n2*(m1-mu)'*(m1-mu)
SB=sb1+sb2
v=invsw*SB
% % find eigne values and eigen vectors of the (v)
% % % project the data of the first and second class respectively
% y1=T*evec(:,2)
% y2=T1*evec(:,2)
[V D] = eig(v); %Eigen Values in D and Vectors in V
L_eig_vec = [];
for i = 1 : size(V,2)
if( D(i,i) > 1 )
L_eig_vec = [L_eig_vec V(:,i)]; %Concatenation of all the Eigen Vectors
end
end
Eigenfaces = A * L_eig_vec; %Eigen Faces (M-1) Faces
for k=1:9
subplot(2,5,k)
evector=vec2mat(Eigenfaces(:,k),icol); %display eigenfaces
imshow(evector);
title('Fisher Faces');
end
imshow(Eigenfaces)
ProjectedImages = [];
Train_Number = size(Eigenfaces,2);
for i = 1 : Train_Number
temp = Eigenfaces'*A(:,i); %Weighted Co-efficients
ProjectedImages = [ProjectedImages temp]; % imshow(ProjectedImages);
end
TestImage = input('Enter image for recognition: ');  % input image file name, e.g. '1.jpg'
figure, imshow(TestImage);
title('Test Image');
InputImage = imread(TestImage);
temp=rgb2gray(InputImage);

[irow, icol] = size(temp);
InImage = reshape(temp', irow*icol, 1);  % converting the image to a column vector
Difference = double(InImage) - m;        % Tou.bar - Psi
ProjectedTestImage = Eigenfaces'*Difference;
Euc_dist = [];
for i = 1 : Train_Number
q = ProjectedImages(:,i);
temp = ( norm( ProjectedTestImage - q ) )^2;
Euc_dist = [Euc_dist temp]; %Euclidian Distance
end
[Euc_dist_min ,Recognized_index] = min(Euc_dist);
OutputName = strcat(int2str(Recognized_index),'.jpg');
disp ('Recognized Image is : ');
disp(OutputName);
figure,
imshow(OutputName);
title('Recognized Image');

RESULTS
n1 =10
n2 =10
N =20
mu =1.4918e+008
% within-class scatter matrix sw (10x10)
sw = 1.0e+008 *
 1.4734 -0.2849 -0.3296  0.5690 -0.7264 -0.2236 -0.6516  0.0165 -0.1783  0.3356
-0.2849  1.3783 -0.4315 -0.0981 -0.7912  0.7798 -0.6274  0.0111 -0.2730  0.3369
-0.3296 -0.4315  1.3693 -0.3838  0.2787 -0.2287  0.1501 -0.1423  0.1749 -0.4569
 0.5690 -0.0981 -0.3838  1.5038 -0.9054 -0.1176 -0.8295  0.2488 -0.0203  0.0331
-0.7264 -0.7912  0.2787 -0.9054  2.4835 -1.0471  1.7509 -0.3913  0.2792 -0.9309
-0.2236  0.7798 -0.2287 -0.1176 -1.0471  1.7605 -0.8772 -0.0244 -0.4278  0.4061
-0.6516 -0.6274  0.1501 -0.8295  1.7509 -0.8772  2.2417 -0.3403  0.2337 -1.0502
 0.0165  0.0111 -0.1423  0.2488 -0.3913 -0.0244 -0.3403  0.6992 -0.0851  0.0078
-0.1783 -0.2730  0.1749 -0.0203  0.2792 -0.4278  0.2337 -0.0851  0.7876 -0.4909
 0.3356  0.3369 -0.4569  0.0331 -0.9309  0.4061 -1.0502  0.0078 -0.4909  1.8093
% inverse of the within-class matrix (10x10; all entries identical)
invsw = 1.0e+006 *
 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478
 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478
 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478
 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478
 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478
 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478
 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478
 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478
 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478
 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478 8.9478


% between class matrix


sb1 =8.0115e+021
sb2 =8.0115e+021
SB =1.6023e+022
% eigenvector matrix v = invsw*SB (10x10; all entries identical)
v = 1.0e+029 *
 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337
 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337
 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337
 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337
 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337
 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337
 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337
 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337
 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337
 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337 1.4337

Enter image for recognition:'1.jpg'


Recognized Image is :1.jpg


Figure: Test image (left) and recognized image (right)


FACE RECOGNITION USING PCA


Biometrics is used in the process of authenticating a person by verifying or
identifying that a user requesting a network resource is who he, she, or it claims to be, and
vice versa. It uses the property that a human trait is associated with a person itself, such as
the structure of a finger, face details, etc. By comparing the existing data with the incoming
data we can verify the identity of a particular person. There are many types of biometric
systems, such as fingerprint recognition, face detection and recognition, and iris recognition;
these traits are used for human identification in surveillance systems and criminal
identification. An advantage of using these traits for identification is that they cannot be
forgotten or lost. They are unique features of a human being and are widely used.

OBJECTIVE
Face recognition using Principal Component Analysis (PCA).

Face Recognition
The face is a complex multidimensional structure and needs good computing techniques for
recognition. The face is our primary and first focus of attention in social life, playing an
important role in the identity of an individual. Features extracted from a face are processed
and compared with similarly processed faces present in the database. If a face is recognized,
it is known, or the system may show a similar face existing in the database; else it is
unknown.

Principal Component Analysis (PCA)


Principal component analysis (PCA) was invented in 1901 by Karl Pearson. PCA is a
variable reduction procedure, useful when the obtained data have some redundancy. The goal
of PCA is to reduce the dimensionality of the data while retaining as much as possible of the
variation present in the original data set. On the other hand, dimensionality reduction implies
information loss. A major advantage of PCA is its use in the eigenface approach, which helps
in reducing the size of the database required for recognition of a test image.

Eigen Face Approach


It is an adequate and efficient method to be used in face recognition due to its
simplicity, speed and learning capability. Each image location contributes to each
eigenvector, so that we can display an eigenvector as a sort of face. Each face image can be
represented exactly in terms of a linear combination of the eigenfaces. The number of
possible eigenfaces is equal to the number of face images in the training set. The faces can
also be approximated by using the best eigenfaces, those that have the largest eigenvalues
and therefore account for the most variance within the set of face images.


ALGORITHM
Learning phase
Step 1: Obtain face images I_1, I_2, ..., I_M (training faces). Each face image can be
represented as a matrix of size NxN.

For example, we can extract a 2x2 sub-matrix from the obtained face image using the
following MATLAB command (I = img(2:3,2:3);).

Step 2: Represent every image I_i as a vector \Gamma_i.


Step 3: Compute the average face vector Ψ.

Step 4: Subtract the mean face image from every face vector to get the zero-mean column
vectors.

Step 5: Compute the covariance matrix C, which is constructed as shown in the formulas
below.
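
In the standard eigenface notation (reconstructed here because the handout's formula
images are missing), Steps 3-5 read

\Psi = \frac{1}{M} \sum_{i=1}^{M} \Gamma_i, \qquad
\Phi_i = \Gamma_i - \Psi, \qquad
C = \frac{1}{M} \sum_{n=1}^{M} \Phi_n \Phi_n^T = \frac{1}{M} A A^T

where A = [\Phi_1 \; \Phi_2 \; \cdots \; \Phi_M] is the N^2 \times M matrix whose columns are
the zero-mean image vectors.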

Step 6: Compute the eigenvectors u_k of AA^T. The reasons for working with A^T A instead
are:
I. The dimension is reduced to MxM instead of N^2 x N^2.
II. Calculating the eigenvectors v_k of the matrix A^T A is easier than for AA^T, and the
eigenvectors of AA^T can easily be found from them using the equation

u_k = A v_k,   k = 1, ..., M-1

The matrices A^T A and AA^T share the same non-zero eigenvalues, and the eigenvectors
(eigenfaces) of AA^T are given by u_i = A v_i.

Step 7: Each face in the training set can be represented as a linear combination of the
eigenvectors, weighted by coefficients w_k; adding back the mean image gives the
reconstruction of \Gamma_i. Each coefficient w_ik is obtained by projecting the zero-mean
column vector onto the corresponding eigenvector u_k.
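
In the same reconstructed notation, the projection and reconstruction are

w_{ik} = u_k^T \Phi_i = u_k^T (\Gamma_i - \Psi), \qquad
\Gamma_i \approx \Psi + \sum_{k=1}^{K} w_{ik}\, u_k

with K \le M - 1 retained eigenfaces.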


Finally, the feature vectors of each image should be stored.

Recognition phase
The recognition operation is performed in two modes.
1. Verification mode
2. Identification mode
1. Verification mode
 The subject declares his identity first.
 A feature vector w is extracted from a newly acquired image τ.
 Matching is done using distances such as the Euclidean and Hamming distances.

2. Identification mode

 The feature vector w is compared with the set of all vectors in the whole database,
 using simple distance formulas such as the Euclidean and Hamming distances.


For example,
Step 1: Obtain the test face image I.

Step 2: Represent the test image as a vector τ.


Step 3: Subtract the mean face image to get the zero-mean column vector.

Step 4: Project the mean-centered image onto the eigenfaces; the weight vector is calculated
as shown below.
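
In the reconstructed notation, the weight vector of the test image is

w_k = u_k^T (\tau - \Psi), \qquad \Omega = [w_1, w_2, \ldots, w_K]^T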

Step 5: Recognition is done by calculating the Euclidean distance between the feature vector
of the test image and those of the training-set face images:

ED = || w - w_ik ||

The Euclidean distance between the 1st training image and the test image is ED1 = ||w - w_1|| = 0.
The Euclidean distance between the 2nd training image and the test image is ED2 = ||w - w_2|| = 854.5.
Of the two distances ED1 and ED2, ED1 is the minimum; hence the 1st face image I_1 is
matched with the test face image.

IMPLEMENTATION
MATLAB CODE

clc;
close all;
clear all;
TrainDatabasePath = uigetdir('D:\Program Files\MATLAB\R2006a\work', 'Select training
database path' );
TestDatabasePath = uigetdir('D:\Program Files\MATLAB\R2006a\work', 'Select test
database path');
TrainFiles = dir(TrainDatabasePath);
Train_Number = 0;
for i = 1:size(TrainFiles,1)

if not(strcmp(TrainFiles(i).name,'.') | strcmp(TrainFiles(i).name,'..') | ...
       strcmp(TrainFiles(i).name,'Thumbs.db'))
Train_Number = Train_Number + 1; % Number of all images in the training database
end
end
%Concatenation%
T = [];
for i = 1 : Train_Number
str = int2str(i);
str = strcat('\',str,'.jpg');
str = strcat(TrainDatabasePath,str);
img = imread(str);
img = rgb2gray(img);
[irow, icol] = size(img);
temp = reshape(img',irow*icol,1); % Reshaping 2D images into 1D image vectors
T = [T temp]; % 'T' grows after each turn
imshow(T)
end
disp('Size of concatenated matrix, T=');
disp(size(T));
%Mean of the face images%
m = mean(T,2);
Train_Number = size(T,2)
disp('Size of the mean, m =');
disp(size(m));
m2=repmat(m,1,20); %To repeat the column vector 20 times
%Observation matrix a%
A = [];
for i = 1 : Train_Number
temp = double(T(:,i)) - m; % Computing the difference image for each image in the
training set Ai = Ti - m
A = [A temp]; % Merging all centered images
end

disp('Size of Observation matrix, A=');


disp(size(A));
%Eigen value and eigen vector%
L = A'*A;  % L is the surrogate of the covariance matrix C = A*A'
disp('size of At*A is');
disp(size(L));
[V D] = eig(L);  % diagonal of D holds the eigenvalues of both L = A'*A and C = A*A'
disp('Eigen vector of At*A is=');
disp(size(V));
L_eig_vec = [];
for i = 1 : size(V,2)
if( D(i,i) > 1 )
L_eig_vec = [L_eig_vec V(:,i)];
end
end
%disp(L_eig_vec); %disp(V);
[p q]= size(D);
disp('No. of Eigen Values = ');
disp(p);
disp('Size of Eigen Vector (M-1) Matrix = ');
disp(size(L_eig_vec));
%Eigen faces%
Eigenfaces = A * L_eig_vec; % A: centered image vectors
disp('size of eigen faces matrix');
disp(size(Eigenfaces));
for k = 1 : 10
subplot(2,5,k)
evector=vec2mat(Eigenfaces(:,k),icol); % Display Eigen Faces
imshow(evector);
title('Eigen Faces');
end
%IMAGE PROJECTION%%%
ProjectedImages = [];
Train_Number = size(T,2);

for i = 1 : Train_Number
temp = Eigenfaces'*A(:,i); % Projection of centered images into facespace
ProjectedImages = [ProjectedImages temp];
end
%disp('ProjectedImages');
disp(ProjectedImages); %imshow(ProjectedImages);
disp('size of projected images, W=');
disp(size(ProjectedImages));
%%%Image reconstruction%%%
T1=Eigenfaces*ProjectedImages;
disp('Size of T1, (U*W)=');
disp(size(T1));
C=m2+T1;
disp('size of reconstructed image Ti=');
disp(size(C));
C1 = C(:,1);  % first reconstructed image vector
C2=reshape(C1,180,200); %1-D to 2-D conversion (with rows and columns interchanged)
imshow(C2);
C3=C2'; %Taking transpose to get actual matrix
imshow(C3);
title('Reconstructed face image');
figure
disp('Size of reconstructed original face image');
disp(size(C3));
%Extracting the pca features from test image%
prompt = {'Enter test image name (a number between 1 to 10):'};
dlg_title = 'Input of PCA-Based Face Recognition System';
num_lines= 1;
def = {'1'};
TestImage =inputdlg(prompt,dlg_title,num_lines,def);
TestImage = strcat(TestDatabasePath,'\',char(TestImage),'.jpg');
im = imread(TestImage);
InputImage = imread(TestImage);
temp = InputImage(:,:,1);
[irow, icol] = size(temp);
InImage = reshape(temp', irow*icol, 1);

Difference = double(InImage)-m; % Centered test image


ProjectedTestImage = Eigenfaces'*Difference; % Test image feature vector
disp('Projected test image size');
disp(size(ProjectedTestImage));
%Euclidian distance%
Euc_dist = [];
for i = 1 : Train_Number
q = ProjectedImages(:,i);
temp = norm( ProjectedTestImage - q );
Euc_dist = [Euc_dist temp];
disp(temp);
end
[Euc_dist_min ,Recognized_index] = min(Euc_dist);
OutputName = strcat(int2str(Recognized_index),'.jpg');
disp('Minimum Euclidian distance=');
disp(Euc_dist_min); %disp('Recognized_index');
disp(Recognized_index);
%Displying matched image%
T = CreateDatabase(TrainDatabasePath);                 % helper functions from the original
[m, A, Eigenfaces] = EigenfaceCore(T);                 % project; their listings are not
OutputName = Recognition(TestImage, m, A, Eigenfaces); % included in this handout
SelectedImage = strcat(TrainDatabasePath,'\',OutputName);
SelectedImage = imread(SelectedImage);
imshow(im);
title('Test Image');
figure
imshow(SelectedImage);
title('Equivalent Image');
str = strcat('Matched image is : ',OutputName);
disp(str);


RESULT
Command window Output
Size of concatenated matrix, T =
36000 20
Train_Number =
20
Size of the mean, m =
36000 1
Size of Observation matrix, A=
36000 20
Size of At*A is
20 20
Eigen vector of At*A is=
20 20
No. of Eigen Values =
20
Size of Eigen Vector (M-1) Matrix =
20 19
Size of Eigen faces matrix
36000 19
Size of projected images, W=
19 20
Size of T1, (U*W)=
36000 20
Size of reconstructed image Ti=
36000 20
Size of reconstructed original face image
200 180
Projected test image size
19 1
Euclidean distances =
2.8143e+07
5.4322e+07

1.8500e+08
2.2403e+08
1.6761e+08
1.7827e+08
7.8986e+07
7.8513e+07
3.7335e+08
3.7284e+08
2.2258e+08
2.1829e+08
3.5823e+08
3.5860e+08
1.3129e+08
1.1861e+08
1.6918e+08
1.6878e+08
1.3866e+08
1.2564e+08
Minimum Euclidian distance=
2.8143e+07
Matched image is:1.jpg
Figure window Output


Reconstructed face image

Figure: Test image (left) and equivalent image (right)


Discrete Cosine Transform (DCT)


INTRODUCTION:

A transform is a mathematical operation that, when applied to a signal being
processed, converts it into a different domain; the signal can then be converted back to the
original domain by the use of the inverse transform. A transform gives us a set of
coefficients from which we can restore the original samples of the signal. Some
mathematical transforms have the ability to generate decorrelated coefficients such that
most of the signal energy is concentrated in a reduced number of coefficients.
The Discrete Cosine Transform (DCT), like other transforms, attempts to decorrelate the
image data. After decorrelation, each transform coefficient can be encoded independently
without losing compression efficiency. It expresses a finite sequence of data points in
terms of a sum of cosine functions oscillating at different frequencies. The DCT coefficients
reflect the different frequency components that are present in the signal. The first coefficient
refers to the signal's lowest frequency (the DC component) and usually carries the majority
of the relevant information from the original signal. The coefficients at the end refer to the
signal's higher frequencies, and these generally represent the finer details. The remaining
coefficients carry different information levels of the original signal.
PCA IN DCT DOMAIN
In a Pattern Recognition Letters paper by Weilong Chen, Meng Joo Er and Shiqian
Wu, it has been proved that PCA can be applied directly to the coefficients of the Discrete
Cosine Transform. When PCA is applied to an orthogonally transformed version of the
original data, the subspace projection obtained is the same as that obtained by PCA on the
original data. Since the DCT and the block-DCT (the process of dividing the image into
small blocks and then taking the DCT of each sub-image) are also orthogonal transforms, we
can apply PCA to them without any reduction in performance.
BASIC ALGORITHM FOR FACE RECOGNITION:
The basic face recognition algorithm is discussed below; both normalization and
recognition are involved in it. The system receives as input an image containing a face. The
normalized (and cropped) face is obtained and can then be compared with the other faces in
the training set, under the same normalized conditions: nominal size, orientation and
position. This comparison is done by comparing the features extracted using the DCT. The
basic idea here is to compute the DCT of the normalized face and retain a certain subset of
the DCT coefficients as a feature vector describing this face.

FIG: Basic algorithm of DCT

This feature vector contains mostly the low- and mid-frequency DCT coefficients, as these
are the ones that have the maximum information content and the highest variance. The
feature vector we obtain is still very large in dimension. From the above discussion we know
that PCA can be used in the DCT domain without any change in the principal components,
so we use the PCA technique discussed in the previous section to reduce the dimensionality
of the feature vector.
Once we have defined the face space with the help of the eigenvectors, we can find the
projection of the feature vectors in that space. The projection of the input face and the
projections of the faces in the database are compared by finding the Euclidean distance
between them. A match is obtained by minimizing the Euclidean distance.
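
A minimal sketch of the feature-extraction step (dct2 is from the Image Processing
Toolbox; the 8x8 crop size and the file name are illustrative assumptions):

% DCT feature vector of a normalized face image
face = im2double(imread('face_normalized.jpg'));   % placeholder file name
if size(face, 3) == 3, face = rgb2gray(face); end  % ensure grayscale
D    = dct2(face);        % 2-D DCT of the whole normalized face
sub  = D(1:8, 1:8);       % keep the low/mid-frequency coefficients (top-left block)
fv   = sub(:);            % feature vector; PCA then reduces its dimensionality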
