
(IJCSIS) International Journal of Computer Science and Information Security,

Vol. 10, No. 3, March 2012


Image Classification in Transform Domain

Dr. H. B. Kekre
Professor,
Computer Engineering
Mukesh Patel School of Technology
Management and Engineering,
NMIMS University, Vileparle(w)
Mumbai 400056, India
hbkekre@yahoo.com.

Dr. Tanuja K. Sarode
Associate Professor,
Computer Engineering,
Thadomal Shahani Engineering
College,
Bandra(W), Mumbai 400-050, India
tanuja_0123@yahoo.com


Jagruti K. Save
Ph.D. Scholar, MPSTME,
NMIMS University,
Associate Professor,
Fr. C. Rodrigues College of
Engineering, Bandra(W), Mumbai
400-050, India
jagrutik_save@yahoo.com


Abstract- Organizing images into meaningful categories using low-level or high-level features is an important task in image databases. Although image classification has been studied for many years, it is still a challenging problem within multimedia and computer vision. In this paper a generic image classification approach using different transforms is proposed. The two main steps in image classification are feature extraction and the classification algorithm. This paper proposes to generate the feature vector from an image transform. The paper also investigates the effectiveness of different transforms (Discrete Fourier Transform, Discrete Cosine Transform, Discrete Sine Transform, Hartley Transform and Walsh Transform) in the classification task. The size of the feature vector is also varied to see its impact on the result. Classification is done using a nearest neighbor classifier. Euclidean and Manhattan distances are used as the similarity measures. Images from the Wang database are used to carry out the experiments. The experimental results and a detailed analysis are presented.
Keywords- Image classification; Image Transform; Discrete Fourier Transform (DFT); Discrete Sine Transform (DST); Discrete Cosine Transform (DCT); Hartley Transform; Walsh Transform; Nearest Neighbor Classifier.
I. INTRODUCTION
Though image classification is usually not a very difficult task for humans, it has proven to be an extremely complex task for machines. In the existing literature, most frameworks for image classification include two main steps: feature extraction and a classification algorithm. In the first step, some discriminative features are extracted to represent the image content, such as color [1] [2], shape [3] and texture [4]. There has been a lot of research work done in the area of feature extraction. A saliency map is used to extract features that classify both the query image and database images into attentive and non-attentive classes [5]. The image texture feature is calculated based on the gray-level co-occurrence matrix (GLCM) [6]. The Color Co-occurrence method, in which both the color and texture of an image are taken into account, is used to generate the features [7]. Transforms have been applied to gray scale images to generate feature vectors [8]. In the classification algorithm step, various multi-class classifiers such as the k-nearest neighbor classifier [9], Support Vector Machines (SVM) [10] [11], Artificial Neural Networks [12] [13] and Genetic Algorithms [14] are used.
II. IMAGE TRANSFORMS
A. Discrete Fourier Transform (DFT)
The discrete Fourier transform (DFT) is one of the most important transforms used in digital signal processing and image processing [15]. The two dimensional discrete Fourier transform of an image f(x, y) of size N x N is given by equation 1.

$$F(u,v) = \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} f(x,y)\, e^{-j 2\pi (ux + vy)/N}, \qquad 0 \le u, v \le N-1 \qquad (1)$$
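As an illustrative aside (not from the original paper), equation 1 is the unnormalized forward 2D DFT, which NumPy computes directly; a minimal sketch assuming a hypothetical N x N array:

```python
import numpy as np

f = np.random.rand(8, 8)  # hypothetical N x N image block

# np.fft.fft2 evaluates the double sum of equation 1 with kernel
# e^{-j*2*pi*(ux + vy)/N} and no normalization factor.
F = np.fft.fft2(f)
```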
B. Discrete Cosine Transform (DCT)
The discrete cosine transform (DCT), introduced by Ahmed, Natarajan and Rao [16], has been used in many applications of digital signal processing, data compression, information hiding and content based image retrieval (CBIR) systems [17]. The discrete cosine transform is closely related to the discrete Fourier transform. It is a separable linear transformation; that is, the two-dimensional transform is equivalent to a one-dimensional DCT performed along one dimension followed by a one-dimensional DCT along the other dimension. The two dimensional DCT can be written in terms of the pixel values f(x, y), for x, y = 0, 1, ..., N-1, and the frequency-domain transform coefficients F(u, v) as shown in equation 2.

$$F(u,v) = \alpha(u)\,\alpha(v) \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} f(x,y) \cos\!\left[\frac{(2x+1)\,u\pi}{2N}\right] \cos\!\left[\frac{(2y+1)\,v\pi}{2N}\right], \qquad 0 \le u, v \le N-1 \qquad (2)$$

where

$$\alpha(u) = \begin{cases} \sqrt{1/N}, & u = 0 \\ \sqrt{2/N}, & 1 \le u \le N-1 \end{cases} \qquad \alpha(v) = \begin{cases} \sqrt{1/N}, & v = 0 \\ \sqrt{2/N}, & 1 \le v \le N-1 \end{cases}$$
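As a hedged sketch (an illustration, not the authors' code), SciPy's orthonormal 2D DCT-II reproduces equation 2, since the norm='ortho' scaling supplies exactly the alpha(u) and alpha(v) factors:

```python
import numpy as np
from scipy.fft import dctn

f = np.random.rand(8, 8)  # hypothetical N x N image block

# Orthonormal 2D DCT-II; norm='ortho' applies the
# alpha(u)*alpha(v) scale factors of equation 2.
F = dctn(f, norm='ortho')
```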


C. Discrete Sine Transform (DST)
The discrete sine transform was introduced by A. K. Jain in 1974. The two dimensional sine transform is defined by equation 3.

$$F(u,v) = \frac{2}{N+1} \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} f(x,y) \sin\!\left[\frac{(x+1)(u+1)\pi}{N+1}\right] \sin\!\left[\frac{(y+1)(v+1)\pi}{N+1}\right], \qquad 0 \le u, v \le N-1 \qquad (3)$$

The discrete sine transform has been widely used in signal and image processing [18] [19].
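For reference, a minimal sketch (an assumption on our part, not the paper's code): SciPy's type-I DST with orthonormal scaling matches equation 3, since applying the sqrt(2/(N+1))-scaled 1-D transform along both axes yields the 2/(N+1) prefactor:

```python
import numpy as np
from scipy.fft import dstn

f = np.random.rand(8, 8)  # hypothetical N x N image block

# Orthonormal 2D DST-I; the two 1-D passes together give the
# 2/(N+1) factor of equation 3.
F = dstn(f, type=1, norm='ortho')
```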
D. Discrete Hartley Transform (DHT)
The Hartley transform [20] is an integral transform closely
related to the Fourier transform. It has some advantages over
the Fourier transform in the analysis of real signals as it avoids
the use of complex arithmetic.
A discrete Hartley transform (DHT) is a Fourier-related
transform of discrete, periodic data similar to the discrete
Fourier transform (DFT), with analogous applications in signal
processing and related fields [21]. Its main distinction from the
DFT is that it transforms real inputs to real outputs, with no
intrinsic involvement of complex numbers. Just as the DFT is
the discrete analogue of the continuous Fourier transform, the
DHT is the discrete analogue of the continuous Hartley
transform. The discrete two dimensional Hartley Transform for
image of size N x N is defined as in equation 4.
$$F(u,v) = \frac{1}{N} \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} f(x,y)\, \operatorname{cas}\!\left(\frac{2\pi(ux+vy)}{N}\right), \qquad \text{where } \operatorname{cas}\theta = \cos\theta + \sin\theta \qquad (4)$$
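NumPy has no dedicated DHT routine, but because cas(t) = cos(t) + sin(t) while the DFT kernel is cos(t) - j sin(t), the DHT can be obtained from the DFT as the real part minus the imaginary part; a sketch with the 1/N scaling of equation 4:

```python
import numpy as np

f = np.random.rand(8, 8)  # hypothetical N x N image block
N = f.shape[0]

# sum f*cas = Re(FFT) - Im(FFT), since the FFT computes
# sum f*(cos - j*sin); the 1/N factor follows equation 4.
F = np.fft.fft2(f)
H = (F.real - F.imag) / N
```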
E. Discrete Walsh Transform (DWT)
The Walsh transform [22] has become quite useful in applications of image processing [23] [24]. Walsh functions were established as a set of normalized orthogonal functions, analogous to sine and cosine functions, but taking only the uniform values ±1 throughout their segments. The Walsh transform matrix is defined as a set of N rows, denoted Wj, for j = 0, 1, ..., N-1, which have the following properties:
- Wj takes on the values +1 and -1.
- Wj[0] = 1 for all j.
- Wj [Wk]^T = 0 for j ≠ k, and Wj [Wk]^T = N for j = k.
- Wj has exactly j zero crossings, for j = 0, 1, ..., N-1.
- Each row Wj is even (when j is even) or odd (when j is odd) with respect to its midpoint.
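One way to realize these properties in code (our construction, not the authors') is to sequency-order a Hadamard matrix, sorting its rows by their number of sign changes:

```python
import numpy as np
from scipy.linalg import hadamard

N = 8  # must be a power of two

# Natural-order Hadamard matrix, then sort rows by sequency so
# that row Wj has exactly j zero crossings, as listed above.
Hn = hadamard(N)
sequency = (np.diff(Hn, axis=1) != 0).sum(axis=1)
W = Hn[np.argsort(sequency)]

# Orthogonality check: Wj * Wk^T equals N when j = k, else 0.
assert np.array_equal(W @ W.T, N * np.eye(N, dtype=Hn.dtype))
```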
III. ROW MEAN VECTOR
The row mean vector [25] [26] is the set of averages of the
intensity values of the respective rows as shown in equation 5.

(
(
(
(
(
(

=
N) Avg(Row
:
:
2) Avg(Row
1) Avg(Row
r mean vecto Row (5)
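In code, equation 5 is a one-line reduction; a minimal sketch assuming a 2D NumPy array:

```python
import numpy as np

F_plane = np.random.rand(256, 256)  # a (column transformed) image plane

# Equation 5: average the intensity values of each row.
row_mean = F_plane.mean(axis=1)     # shape (256,)
```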
IV. PROPOSED ALGORITHM
The image database is divided into a training set and a testing set, and the feature vector of each training/testing image is calculated. Given an image from the testing set to be classified, a nearest neighbor classifier compares it against the images of the training set in order to identify the most similar image and consequently the correct class. Euclidean and Manhattan distances are used as the similarity measures.
A. Generation of feature vector
1. For each color image f(x,y), generate its three color planes f_R(x,y), f_G(x,y) and f_B(x,y) for the R, G and B components respectively.
2. Apply a transform T (DCT, DFT, DST, Hartley or Walsh) on the columns of the three image planes, as given in equations 6 to 8, to get the column transformed images.
$[F_R(x,v)] = [T]\,[f_R(x,y)]$ (6)
$[F_G(x,v)] = [T]\,[f_G(x,y)]$ (7)
$[F_B(x,v)] = [T]\,[f_B(x,y)]$ (8)
3. Calculate the row mean vector of each column transformed image.
4. Form a feature vector of size 75 by fusing the row mean vectors of the R, G and B planes: take the first 25 values from the R plane, followed by the first 25 values from the G plane, followed by the first 25 values from the B plane.
5. Do the above process for all training images to generate the feature database.
Other feature vector sizes, namely 150 (50R + 50G + 50B), 225 (75R + 75G + 75B), 300 (100R + 100G + 100B), 450 (150R + 150G + 150B) and 768 (256R + 256G + 256B), are also considered when generating feature vectors. A sketch of this feature extraction procedure is given below.
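The following is a minimal end-to-end sketch of steps 1-5 (an illustration with hypothetical variable names, not the authors' code), assuming the image is an H x W x 3 NumPy array and using the orthonormal DCT as the column transform T; any of the five transforms could be substituted:

```python
import numpy as np
from scipy.fft import dct

def feature_vector(img, k=25):
    """Steps 1-4: split into R, G, B planes, transform the columns
    (equations 6-8), take the row mean (equation 5), and keep the
    first k coefficients of each plane (3*k values in total)."""
    parts = []
    for c in range(3):                            # R, G, B planes
        plane = img[:, :, c].astype(float)
        col_t = dct(plane, axis=0, norm='ortho')  # 1-D DCT down each column
        parts.append(col_t.mean(axis=1)[:k])      # row mean, first k values
    return np.concatenate(parts)                  # k = 25 gives the 75-dim vector

# Step 5: build the feature database over hypothetical training images.
# train_db = np.stack([feature_vector(im, k=25) for im in training_images])
```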
B. Classification
1. In this phase, the feature vectors of the given testing images are generated.
2. The Euclidean distance and Manhattan distance are calculated between each testing image feature vector and each training image feature vector.
3. The minimum distance indicates the most similar training image for that testing image; the testing image is then assigned to the corresponding class.
We have also considered another training set where each feature vector is the average of the feature vectors of all training images of a particular class. A sketch of the nearest neighbor step follows.
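A minimal sketch of the nearest neighbor classification with both distance criteria (variable names are illustrative, not from the paper):

```python
import numpy as np

def classify(test_vec, train_vecs, train_labels, metric='manhattan'):
    """Assign the test image the class of the closest training
    feature vector under the chosen distance criterion."""
    diff = train_vecs - test_vec                 # broadcast over rows
    if metric == 'euclidean':
        dist = np.sqrt((diff ** 2).sum(axis=1))
    else:                                        # Manhattan (city block)
        dist = np.abs(diff).sum(axis=1)
    return train_labels[np.argmin(dist)]

# For the averaged training set, train_vecs holds one mean feature
# vector per class (8 rows for the 8 Wang classes).
```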
V. RESULTS
The implementation of the proposed technique is done in MATLAB 7.0 on a computer with an Intel Core 2 Duo T8100 processor (2.1 GHz) and 2 GB of RAM. The proposed technique is tested on the Wang image database, created by the group of Professor Wang at the Pennsylvania State University [27]. The experiment is carried out on 8 classes of the Wang database. For testing, 30 images of each class were used, and for training, 5 images of each class were used. Thus there were 240 testing images and 40 training images in total, and the training set contains 40 feature vectors. The proposed method is also implemented using another training set that contains 8 feature vectors, where each feature vector is the average of the feature vectors of all training images of the same class. Fig. 1 shows the sample database of training images and Fig. 2 shows the sample database of testing images.

Figure 1. Sample database of training images


Figure 2. Sample database of testing images
Each image is resized to 256 x 256. Table I and Table II show the total number of correctly classified images (out of 240) for different transforms over different vector sizes for the two training sets. The correctness of classification is checked visually.
With the averaged training set, the Walsh transform gives better performance than the other transforms when Manhattan distance is used as the similarity measure. If Euclidean distance is used, a feature vector size of 768 gives marginally better performance for all transforms. Considering the results shown in Table I, the best results are obtained with Manhattan distance as the similarity measure; DST, Walsh and DFT gave better performance, in that order.

Now consider the individual class classification performance using these two similarity measures, shown in Table III to Table VI. For this purpose the vector size is selected based on the overall performance. For the Euclidean distance criterion, the number of correctly classified images in each class for different transforms over the two training sets is shown in Table III and Table IV, with feature vector size 768. If the Manhattan distance criterion is used, the performance of the transforms varies with the feature vector size; in most cases a vector size of 225 gives better performance. So, using this vector size, the number of correctly classified images in each class for different transforms over the two training sets is shown in Table V and Table VI.






TABLE I. NUMBER OF CORRECTLY CLASSIFIED IMAGES (OUT OF 240) FOR DFT, DCT, DST, HARTLEY AND WALSH OVER DIFFERENT FEATURE VECTOR SIZES USING EUCLIDEAN (E) AND MANHATTAN (M) DISTANCE. TRAINING SET: FEATURE VECTORS OF 5 IMAGES FROM EACH CLASS

Transform   Distance    75    150   225   300   450   768
DFT         E          155    159   159   159   160   167
DFT         M          166    163   169   169   164   163
DCT         E          151    156   159   162   162   163
DCT         M          163    167   169   170   164   163
DST         E          159    160   160   160   161   160
DST         M          164    173   176   174   168   161
HARTLEY     E          148    150   151   151   152   158
HARTLEY     M          154    162   165   167   161   161
WALSH       E          149    152   155   156   160   161
WALSH       M          160    162   166   170   171   170
TABLE II. NUMBER OF CORRECTLY CLASSIFIED IMAGES (OUT OF 240) FOR DFT, DCT, DST, HARTLEY AND WALSH OVER DIFFERENT FEATURE VECTOR SIZES USING EUCLIDEAN (E) AND MANHATTAN (M) DISTANCE. TRAINING SET: AVERAGE OF FEATURE VECTORS OF 5 IMAGES FROM EACH CLASS

Transform   Distance    75    150   225   300   450   768
DFT         E          155    160   162   162   161   166
DFT         M          175    173   171   169   164   156
DCT         E          156    158   157   159   160   160
DCT         M          171    172   169   168   163   156
DST         E          161    160   160   159   161   161
DST         M          161    162   168   169   169   164
HARTLEY     E          159    162   161   162   163   167
HARTLEY     M          169    168   172   171   168   164
WALSH       E          155    157   158   158   158   159
WALSH       M          179    175   173   169   169   159

TABLE III. NUMBER OF CORRECTLY CLASSIFIED IMAGES (OUT OF 30) IN EACH CLASS FOR DIFFERENT TRANSFORMS. VECTOR SIZE: 768; DISTANCE CRITERION: EUCLIDEAN DISTANCE; TRAINING SET: FEATURE VECTORS OF 5 IMAGES FROM EACH CLASS
Classes DFT DCT DST HARTLEY WALSH
Beach 15 14 11 14 11
Monument 10 13 7 9 8
Bus 24 21 27 22 25
Dinosaur 30 30 30 30 30
Elephant 24 23 23 24 24
Flower 27 25 26 27 25
Horse 26 28 26 25 28
Snow Mountain 11 9 10 7 10
TABLE IV. NUMBER OF CORRECTLY CLASSIFIED IMAGES (OUT OF 30) IN EACH CLASS FOR DIFFERENT TRANSFORMS. VECTOR SIZE: 768; DISTANCE CRITERION: EUCLIDEAN DISTANCE; TRAINING SET: AVERAGE OF FEATURE VECTORS OF 5 IMAGES FROM EACH CLASS
Classes DFT DCT DST HARTLEY WALSH
Beach 20 18 14 19 17
Monument 3 4 9 6 5
Bus 23 24 25 23 24
Dinosaur 30 30 30 30 30
Elephant 25 22 24 25 24
Flower 30 30 29 30 30
Horse 16 17 17 16 16
Snow Mountain 19 15 13 18 13

TABLE V. NUMBER OF CORRECTLY CLASSIFIED IMAGES (OUT OF 30) IN EACH CLASS FOR DIFFERENT TRANSFORMS. VECTOR SIZE: 225; DISTANCE CRITERION: MANHATTAN DISTANCE; TRAINING SET: FEATURE VECTORS OF 5 IMAGES FROM EACH CLASS
Classes DFT DCT DST HARTLEY WALSH
Beach 23 21 19 24 23
Monument 9 11 9 11 8
Bus 25 20 27 22 24
Dinosaur 30 30 30 30 30
Elephant 22 23 20 22 21
Flower 30 28 30 30 25
Horse 22 23 24 19 25
Snow Mountain 8 13 17 7 10
TABLE VI. NUMBER OF CORRECTLY CLASSIFIED IMAGES (OUT OF 30) IN EACH CLASS FOR DIFFERENT TRANSFORMS. VECTOR SIZE: 225; DISTANCE CRITERION: MANHATTAN DISTANCE; TRAINING SET: AVERAGE OF FEATURE VECTORS OF 5 IMAGES FROM EACH CLASS
Classes DFT DCT DST HARTLEY WALSH
Beach 24 23 16 24 26
Monument 9 9 7 11 6
Bus 24 25 26 25 28
Dinosaur 30 30 30 30 30
Elephant 21 18 21 21 19
Flower 30 30 30 30 30
Horse 20 22 22 20 22
Snow Mountain 13 12 16 11 12

The comparisons of the performances of the different transforms are shown in Fig. 3 to Fig. 6.
[Figure: number of correctly classified images (y-axis, 140-180) vs. feature vector size (x-axis: 75, 150, 225, 300, 450, 768) for WALSH, DCT, DST, HARTLEY and DFT]
Figure 3. Performance of different transforms with the Euclidean distance criterion (training set: feature vectors of 5 images from each class)
[Figure: number of correctly classified images vs. feature vector size for WALSH, DCT, DST, HARTLEY and DFT]
Figure 4. Performance of different transforms with the Manhattan distance criterion (training set: feature vectors of 5 images from each class)

[Figure: number of correctly classified images vs. feature vector size for WALSH, DCT, DST, HARTLEY and DFT]
Figure 5. Performance of different transforms with the Euclidean distance criterion (training set: average of feature vectors of 5 images from each class)
[Figure: number of correctly classified images vs. feature vector size for WALSH, DCT, DST, HARTLEY and DFT]
Figure 6. Performance of different transforms with the Manhattan distance criterion (training set: average of feature vectors of 5 images from each class)
VI. CONCLUSIONS
This paper proposes to prepare the feature vector from an image column transform and to use it for image classification. This gives a considerable saving in computational time as compared to the full transform. The paper investigates the performance of different transforms. The performance is tested thoroughly using different criteria: the distance measure (Euclidean distance, Manhattan distance), the size of the feature vector (75, 150, 225, 300, 450 and 768) and the training set (feature vectors, average of feature vectors). Conclusions drawn from the results of the individual class classification are given in Table VII.
TABLE VII. BEST 3 CLASS PERFORMANCES FOR DIFFERENT CRITERIA

Training Set                                             Similarity Measure   Best 3 performer classes
Feature vectors of 5 images from each class              Euclidean            Dinosaur (100%), Horse (88.66%), Flower (86.66%)
Feature vectors of 5 images from each class              Manhattan            Dinosaur (100%), Flower (95.33%), Bus (78.66%)
Average of feature vectors of 5 images from each class   Euclidean            Dinosaur (100%), Flower (99.33%), Elephant (80%)
Average of feature vectors of 5 images from each class   Manhattan            Dinosaur (100%), Flower (100%), Bus (85.33%)

The results also show that the training set containing the averages of feature vectors gives better results, and since these vectors are fewer in number, the computation is faster. It is also seen that Manhattan distance gives higher performance for small feature vector sizes when compared with the Euclidean distance criterion.
REFERENCES
[1] M. J. Swain and D. H. Ballard, Color indexing, International Journal of Computer Vision, vol. 7, no. 1, pp. 11-32, 1991.
[2] A. K. Jain and A. Vailaya, Image retrieval using color and shape, Pattern Recognition, vol. 29, no. 8, pp. 1233-1244, 1996.
[3] F. Mokhtarian and S. Abbasi, Shape similarity retrieval under affine transforms, Pattern Recognition, vol. 35, pp. 31-41, 2002.
[4] B. S. Manjunath and W. Y. Ma, Texture features for browsing and retrieval of image data, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, no. 8, pp. 837-842, 1996.
[5] Z. Liang, H. Fu, Z. Chi, and D. Feng, Image Pre-Classification
Based on Saliency Map for Image Retrieval, Proc. of the IEEE
International Conference on Information, Communications and Signal
Processing, pp. 1-5, Dec 2009.
[6] F. Siraj, M. Salahuddin, and S. Yusof, Digital Image Classification for Malaysian Blooming Flower, Proc. of the IEEE Second International Conference on Computational Intelligence, Modelling and Simulation (CIMSiM), pp. 33-38, Bali, Sept 2010.
[7] D. Bashish, M. Braik, and S. Bani-Ahmad, A Framework for Detection and Classification of Plant Leaf and Stem Diseases, Proc. of the IEEE International Conference on Signal and Image Processing (ICSIP), pp. 113-118, Chennai, Dec 2010.
[8] H. B. Kekre, T. K. Sarode, and M. S. Ugale, Performance Comparison of Image Classifier Using DCT, Walsh, Haar and Kekre's Transform, International Journal of Computer Science and Information Security (IJCSIS), vol. 9, no. 7, 2011.
[9] M. Szummer and R. W. Picard, Indoor-Outdoor Image Classification, IEEE International Workshop on Content-Based Access of Image and Video Databases, in conjunction with ICCV'98, pp. 384-390, Jan 1998.
[10] O. Chapelle, P. Haffner, and V. Vapnik, Support vector machines for histogram-based image classification, IEEE Transactions on Neural Networks, vol. 10, pp. 1055-1064, 1999.
[11] S. Agrawal, N. Verma, P. Tamrakar, and P. Sircar, Content Based Color Image Classification using SVM, in Proc. of the IEEE International Conference on Information Technology: New Generations (ITNG), pp. 1090-1094, Las Vegas, April 2011.
[12] M. Lotfi, A. Solimani, A. Dargazany, H. Afzal, and M. Bandarabadi, Combining wavelet transforms and neural networks for image classification, the IEEE Symposium on System Theory (SSST), pp. 44-48, Aug 2009.
[13] S. Sadek, A. Hamadi, B. Michaelis, and U. Sayed, Robust Image Classification Using Multi-level Neural Networks, Proc. of the IEEE International Conference on Intelligent Computing and Intelligent Systems, vol. 4, pp. 180-183, Shanghai, Dec 2009.
[14] J. Z. Wang, J. Li and G. Wiederhold, SIMPLIcity: semantic sensitive
integrated matching for picture libraries, IEEE Transactions on
Pattern Analysis and Machine Intelligence, 2001, vol.23, no.9,
pp.947-963.
[15] E. O. Brigham and R. E. Morrow, The Fast Fourier Transform, IEEE Spectrum, vol. 4, issue 12, pp. 63-70, Dec. 1967.
[16] N. Ahmed, T. Natarajan, and K. R. Rao, Discrete Cosine Transform, IEEE Transactions on Computers, vol. C-23, pp. 90-93, Jan 1974.
[17] H. B. Kekre, T. K. Sarode, and S. D. Thepade, Color-Texture Feature based Image Retrieval using DCT applied on Kekre's Median Codebook, International Journal on Imaging (IJI), vol. 2, no. A09, Autumn 2009, pp. 55-65. Available online at www.ceser.res.in/iji.html (ISSN: 0974-0627).
[18] S. A. Martucci, Symmetric convolution and the discrete sine and
cosine transforms, IEEE Transactions on Signal Processing, Vol. 42,
Issue 5, pp. 1038-1051, 1994.
[19] H. B. Kekre and D. Mishra, Feature Extraction of Color Images using Sectorization of Discrete Sine Transform, IJCA Proceedings on International Conference and Workshop on Emerging Trends in Technology (ICWET), vol. 4, pp. 27-32, 2011.
[20] R. V. L. Hartley, A More Symmetrical Fourier Analysis Applied to Transmission Problems, Proceedings of the IRE, vol. 30, pp. 144-150, March 1942.
[21] R. P. Millane, Analytical properties of the Hartley Transform and its
Implications, Proceedings of the IEEE, Mar. 1994, Vol. 82, Issue 3,
pp. 413-428.
[22] J. L. Walsh, A Closed Set of Normal Orthogonal Functions, American Journal of Mathematics, vol. 45, pp. 5-24, 1923.
[23] H. B. Kekre and D. Mishra, Density Distribution and Sector Mean with Zero-Sal and Highest-Cal Components in Walsh Transform Sectors as Feature Vectors for Image Retrieval, International Journal of Computer Science and Information Security (IJCSIS), vol. 8, no. 4, 2010, ISSN 1947-5500.
[24] H. B. Kekre and V. Bharadi, Walsh Coefficients of the Horizontal & Vertical Pixel Distribution of Signature Template, in Proc. of Int. Conference ICIP-07, Bangalore University, Bangalore, 10-12 Aug 2007.
[25] H. B. Kekre, S. D. Thepade, and A. Maloo, Performance Comparison for Face Recognition using PCA, DCT & Walsh Transform of Row Mean and Column Mean, ICGST International Journal on Graphics, Vision and Image Processing (GVIP), vol. 10, issue II, pp. 9-18, June 2010.
[26] H. B. Kekre, T. Sarode, and S. D. Thepade, DCT Applied to Row Mean and Column Vectors in Fingerprint Identification, in Proceedings of Int. Conf. on Computer Networks and Security (ICCNS), 27-28 Sept. 2008, VIT, Pune.
[27] Wang, J. Z., Li, J., Wiederhold, G.: SIMPLIcity: Semantics-sensitive
Integrated Matching for Picture LIbraries, IEEE Trans. on Pattern
Analysis and Machine Intelligence, vol 23, no.9, pp. 947-963, (2001).

AUTHORS PROFILE

Dr. H. B. Kekre has received B.E. (Hons.) in Telecomm. Engineering from Jabalpur University in 1958, M.Tech (Industrial Electronics) from IIT Bombay in 1960, M.S. Engg. (Electrical Engg.) from University of Ottawa in 1965 and Ph.D. (System Identification) from IIT Bombay in 1970. He has worked as Faculty of Electrical Engineering and then HOD of Computer Science and Engg. at IIT Bombay. For 13 years he worked as a professor and head of the Department of Computer Engg. at Thadomal Shahani Engineering College, Mumbai.
Now he is Senior Professor at MPSTME, SVKM's NMIMS University. He has guided 17 Ph.D.s, more than 100 M.E./M.Tech and several B.E./B.Tech projects. His areas of interest are Digital Signal Processing, Image Processing and Computer Networking. He has more than 450 papers in National/International Conferences and Journals to his credit. He was a Senior Member of IEEE. Presently he is a Fellow of IETE and a Life Member of ISTE. Recently twelve students working under his guidance have received best paper awards and six research scholars have been conferred the Ph.D. degree by NMIMS University. Currently 7 research scholars are pursuing the Ph.D. program under his guidance.

Tanuja K. Sarode has received B.Sc. (Mathematics) from Mumbai University in 1996, B.Sc. Tech. (Computer Technology) from Mumbai University in 1999, and M.E. (Computer Engineering) from Mumbai University in 2004, and is currently pursuing a Ph.D. from Mukesh Patel School of Technology, Management and Engineering, SVKM's NMIMS University, Vile-Parle (W), Mumbai, India. She has more than 10 years of experience in teaching. She is currently working as Associate Professor in the Dept. of Computer Engineering at Thadomal Shahani Engineering College, Mumbai. She is a life member of IETE and ISTE, and a member of the International Association of Engineers (IAENG) and the International Association of Computer Science and Information Technology (IACSIT), Singapore. Her areas of interest are Image Processing, Signal Processing and Computer Graphics. She has more than 100 papers in National/International Conferences/Journals to her credit.

Jagruti K. Save has received B.E. (Computer Engg.) from Mumbai University in 1996 and M.E. (Computer Engineering) from Mumbai University in 2004, and is currently pursuing a Ph.D. from Mukesh Patel School of Technology, Management and Engineering, SVKM's NMIMS University, Vile-Parle (W), Mumbai, India. She has more than 10 years of experience in teaching. She is currently working as Associate Professor in the Dept. of Computer Engineering at Fr. Conceicao Rodrigues College of Engg., Bandra, Mumbai. Her areas of interest are Image Processing, Neural Networks, Fuzzy Systems, Database Management and Computer Vision. She has 6 papers in National/International Conferences/Journals to her credit.
