
An Emerging Hybrid Approach Based on Intuitionistic Fuzzy C-Means with Intuitionistic Particle Swarm Optimization for Microarray Data

V. Kumutha
Assistant Professor, Dept. of Computer Science
D. J. Academy for Managerial Excellence
Coimbatore, Tamil Nadu, India
Email: kumuthav@gmail.com

S. Palaniammal
Professor and Head, Dept. of Science and Humanities
Sri Krishna College of Technology
Coimbatore, Tamil Nadu, India
Email: splvlb@yahoo.com




Abstract—Due to the enormous growth in gene studies (gene function and regulatory mechanisms) enabled by advanced techniques, handling high dimensional data remains a continuing research challenge. Data mining plays a vital role in inferring hidden information from voluminous data sets to retrieve knowledgeable information. Although fuzzy approaches have already been combined with bio-inspired concepts, they fail to process incomplete or inconsistent data sets efficiently, which leads to an increased false alarm rate. In the proposed approach, the degree of membership is extended with a degree of indeterminacy by adopting a generalization of fuzzy logic known as intuitionistic fuzzy logic. This paper proposes a hybrid approach for clustering high dimensional data sets using IFCM and IFPSO to increase the detection accuracy and decrease the false alarm ratio considerably. To find the similarity among objects and cluster centers, an intuitionistic similarity measure is used, and intuitionistic fuzzy particle swarm optimization optimizes the working of the intuitionistic FCM. Experimental results show that the proposed approach performs better than the existing methods.
Index Terms—Intuitionistic fuzzy, FCM, IFPSO, gene expression data

I. INTRODUCTION

Clustering is an unsupervised learning technique applicable in various fields [1]. Clustering methods organize data points into groups such that data points belonging to one cluster are similar, whereas data points in different clusters are distinct. These methods have applications in areas such as bioinformatics, taxonomy, image processing, information retrieval, and data mining [2]. Hard (or crisp) clustering algorithms have strict boundaries that assign each data point of the data set to one and only one cluster, specifying the membership value as 0 or 1. Several techniques have been developed to cluster input data over the past decades [4, 5]. Hard, soft, probabilistic, and possibilistic are the main classifications of clustering methods. Recent research has focused on Fuzzy c-means (FCM), a soft clustering approach that generates fuzzy partitions for a given data set.
Fuzzy clustering methods allow one data point to lie in more than one cluster, representing overlapping boundaries between clusters by specifying membership values between 0 and 1. Fuzzy c-means (FCM) is one of the popular fuzzy clustering methods based on fuzzy set theory [3]. FCM clusters data according to an objective function and several constraints. Fuzzy c-means is an effective algorithm, but the random selection of initial center points makes the iterative process liable to fall into local optima; hence different initializations may lead to different results.
One of the major challenges posed by real-world clustering
applications is dealing with uncertainty in the localization of
the feature vectors. In gene expression data, the number of
samples is very limited while the volume of genes is very
large; such data sets are very sparse in high-dimensional gene
space. Also most of the genes collected may not necessarily
be of interest. A small percentage of genes which manifest
meaningful sample phenotype structure are buried in large
amount of noise. Intricacy arises in choosing informative genes
when there is uncertainty about which genes are relevant[16].
To handle this problem intuitionistic fuzzy approach is used.
Intuitionistic Fuzzy Sets (IFSs) [6] are generalized fuzzy sets,
which are useful in coping with the hesitancy originating
from imperfect or imprecise information. Membership and
non-membership value are elements involved in this sets. The
degree of membership denotes the validity or trueness of the
element to the set, whereas the non-validity of falseness of the
element to the set denotes the non-membership value. Apart
from validity and non-validity of the element, another element
named hesitancy or indeterminacy or uncertainty poses diffi-
culty in determining the validness of the membership of the
element to the group. Recent research indicates that applying
intuitionistic fuzzy sets to high dimensional data provides
optimal clustering results.
The rest of this paper is organized as follows: Section 2
discusses the related works. Section 3 presents materials and
methods. Section 4 summarizes the experimental analysis
performed with benchmark data sets. The conclusion of the
proposed work is given in Section 5.
© 2013 IEEE
Proceedings of International Conference on Optical Imaging Sensor and Security, Coimbatore, Tamil Nadu, India, July 2-3, 2013
II. LITERATURE REVIEW

There are many works in the literature related to Intuitionistic Fuzzy c-means. Hesam Izakian et al. [7] presented a hybrid fuzzy clustering method based on FCM and FPSO which makes use of the merits of both algorithms. The FCM-FPSO algorithm applies FCM to the particles in the swarm every few iterations/generations such that the fitness value of each particle is improved. The hybrid FCM-FPSO obtained superior results compared with the other methods, and it can escape from local optima. The experimental results also show that when the size of the data set (number of objects or clusters) is small, FPSO surpasses FCM, but with increasing data set size the outcome of FCM improves over that of FPSO.
E. Mehdizadeh et al. [8] presented an efficient hybrid method, particularly for large data sets, based on the fuzzy particle swarm optimization (FPSO) and Fuzzy C-Means (FCM) algorithms, meant to resolve the fuzzy clustering problem. The performance was improved by seeding the initial swarm with the result of the c-means algorithm. The experiments indicate that the computation times and solution quality of FPSO for large data sets were better than those of FCM.
In M. Mir et al. [9], the PSO algorithm and fuzzy methods were combined to avoid local peaks and find the global optimal solution. This approach uses global search capacity to overcome FCM deficits. It finds the optimal locations of the cluster centers for the input data set and finally finds the member components of each cluster.
Erol Egrioglu et al. [10] presented a hybrid approach in which the fuzzy c-means clustering method and artificial neural networks were used in fuzzy time series to obtain more accurate forecasts. The FCM-based fuzzification step removes problems caused by the partition of the universe of discourse, and the fuzzy relationships defined by artificial neural networks avoid the use of difficult matrix operations.
Runkler et al. [11] presented two methods for minimizing the reformulated objective functions of the fuzzy c-means clustering model by particle swarm optimization: PSO-V and PSO-U. Every particle in PSO-V represents a component of a cluster center, and in PSO-U each particle signifies an unscaled and unnormalized membership value. The approach was compared with alternating optimization and ant colony optimization methods.

III. MATERIALS AND METHODS

1) Fuzzy C-Means algorithm: The Fuzzy C-Means algorithm (FCM) is an iterative algorithm that finds clusters in data using the concept of fuzzy membership: instead of assigning an object to a single cluster, each object has a different membership value in each cluster. It partitions a set of p objects O = {o_1, o_2, ..., o_p} in a real feature space [12] into d (1 ≤ d ≤ p) fuzzy clusters with cluster centers (centroids) Z = {z_1, z_2, ..., z_d}. A fuzzy matrix μ with p rows and d columns is defined for the fuzzy clustering of the objects, in which p is the number of data objects and d is the number of clusters. The element μ_ij in the i-th row and j-th column of μ points out the degree of association, or membership function, of the i-th object with the j-th cluster. The characteristics of μ are as follows:

μ_ij ∈ [0, 1],  i = 1, 2, ..., p;  j = 1, 2, ..., d    (1)

Σ_{j=1}^{d} μ_ij = 1,  i = 1, 2, ..., p    (2)

0 < Σ_{i=1}^{p} μ_ij < p,  j = 1, 2, ..., d    (3)

The objective function that the FCM algorithm minimizes is given in Eq. (4):

J_m = Σ_{i=1}^{p} Σ_{j=1}^{d} μ_ij^m d_ij²,  1 ≤ m < ∞    (4)

where

d_ij = ||o_i − z_j||    (5)

Here m (m > 1) is a scalar termed the weighting exponent, which controls the fuzziness of the resulting clusters, and d_ij is the Euclidean distance from object o_i to the cluster center z_j. The centroid z_j of the j-th cluster is obtained using Eq. (6):

z_j = Σ_{i=1}^{p} μ_ij^m o_i / Σ_{i=1}^{p} μ_ij^m    (6)

Algorithm 1. Fuzzy c-means
1. Select m (m > 1) and initialize the membership function values μ_ij, i = 1, 2, ..., p; j = 1, 2, ..., d.
2. Compute the cluster centers z_j, j = 1, 2, ..., d, by using Eq. (6).
3. Compute the Euclidean distances d_ij, i = 1, 2, ..., p; j = 1, 2, ..., d.
4. Update the membership function μ_ij, i = 1, 2, ..., p; j = 1, 2, ..., d, by using

μ_ij = 1 / Σ_{k=1}^{d} (d_ij / d_ik)^{2/(m−1)}    (7)

5. If not converged, go to step 2.
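As a concrete illustration of the loop above, the center and membership updates of Eqs. (6) and (7) can be sketched in NumPy. This is a minimal sketch for exposition (array shapes, seed, and tolerance are illustrative assumptions), not the MATLAB implementation evaluated later in the paper.

```python
import numpy as np

def fcm(data, n_clusters, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Minimal Fuzzy c-means. data: (p, features). Returns (memberships, centers)."""
    rng = np.random.default_rng(seed)
    p = data.shape[0]
    # Random membership matrix with rows summing to 1 (Eq. (2))
    u = rng.random((p, n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        um = u ** m
        # Cluster centers (Eq. (6))
        centers = (um.T @ data) / um.sum(axis=0)[:, None]
        # Euclidean distances (Eq. (5)); small epsilon avoids division by zero
        dist = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2) + 1e-10
        # Membership update (Eq. (7)): ratio tensor [i, j, k] = d_ij / d_ik
        u_new = 1.0 / ((dist[:, :, None] / dist[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
        if np.abs(u_new - u).max() < tol:
            u = u_new
            break
        u = u_new
    return u, centers
```

Because the update of Eq. (7) renormalizes each row, the constraint of Eq. (2) holds after every iteration.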

2) Intuitionistic Fuzzy c-means Clustering (IFCMC): The Intuitionistic Fuzzy Set (IFS) was defined as an extension of the ordinary fuzzy set [12], [13]. As opposed to a fuzzy set A in X, given by

A = {(x, μ_A(x)) | x ∈ X}    (8)

where μ_A(x) ∈ [0, 1] is the membership function of the fuzzy set A, an intuitionistic fuzzy set B is given by

B = {(x, μ_B(x), ν_B(x)) | x ∈ X}    (9)

where μ_B(x) ∈ [0, 1] and ν_B(x) ∈ [0, 1] are such that

0 ≤ μ_B(x) + ν_B(x) ≤ 1    (10)

and μ_B(x), ν_B(x) ∈ [0, 1] denote the degrees of membership and non-membership of x ∈ B, respectively.
For each intuitionistic fuzzy set B in X, the hesitation margin (or intuitionistic fuzzy index) of x ∈ B is given by

π_B(x) = 1 − μ_B(x) − ν_B(x)    (11)

which expresses the degree of hesitation about whether x belongs to B or not. It is obvious that 0 ≤ π_B(x) ≤ 1 for each x ∈ X. To describe an intuitionistic fuzzy set completely, it is sufficient to use any two functions from the triplet: membership function, non-membership function, and hesitation margin.
The fuzzy membership values and the cluster centers are updated using (12) and (13):

U*_ik = U_ik + π_ik    (12)
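The non-membership degree required by Eqs. (9) to (12) is usually not observed directly. One common construction, shown here only as a hedged sketch (the Sugeno-type negation and the value lam = 2 are illustrative assumptions, not prescribed by this paper), derives it from the fuzzy membership and then obtains the hesitation margin from Eq. (11):

```python
import numpy as np

def ifs_triplet(mu, lam=2.0):
    """Derive (membership, non-membership, hesitation) from fuzzy memberships
    mu in [0, 1], using a Sugeno-type negation nu = (1 - mu) / (1 + lam * mu)."""
    mu = np.asarray(mu, dtype=float)
    nu = (1.0 - mu) / (1.0 + lam * mu)  # non-membership (Sugeno negation, lam > 0)
    pi = 1.0 - mu - nu                  # hesitation margin, Eq. (11)
    return mu, nu, pi

def modified_membership(mu, lam=2.0):
    """Eq. (12): membership boosted by the hesitation degree."""
    m, _, pi = ifs_triplet(mu, lam)
    return m + pi
```

With this construction the hesitation degree is always non-negative, so the modified membership of Eq. (12) never decreases below the original membership.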

v_i = Σ_{k=1}^{n} U*_ik x_k / Σ_{k=1}^{n} U*_ik    (13)

The objective function for IFCMC is

J = Σ_{i=1}^{c} Σ_{k=1}^{n} U*_ik^m d²(x_k, v_i) + Σ_{i=1}^{c} π*_i e^(1 − π*_i),  with m = 2    (14)

where π*_i is the average hesitation degree of the i-th cluster.

Algorithm 2. Intuitionistic FCM
1. Select m > 1.
2. Initialize the membership and non-membership degrees of the data objects.
3. Detect initial centroids by selecting c random intuitionistic fuzzy objects.
4. Compute the fuzzy matrix

μ_ij = (S⁻¹(x_i, v_j))^{1/(1−m)} / Σ_{k=1}^{c} (S⁻¹(x_i, v_k))^{1/(1−m)}    (15)

where the similarity S between two intuitionistic fuzzy sets A and B is

S(A, B) = Σ_{j=1}^{n} [μ_A(x_j)μ_B(x_j) + ν_A(x_j)ν_B(x_j) + π_A(x_j)π_B(x_j)] / max(Σ_{j=1}^{n} [μ_A²(x_j) + ν_A²(x_j) + π_A²(x_j)], Σ_{j=1}^{n} [μ_B²(x_j) + ν_B²(x_j) + π_B²(x_j)])    (16)

5. Calculate the centroids

z_j = Σ_{k=1}^{n} μ*_kj x_k / Σ_{k=1}^{n} μ*_kj    (17)

6. Update the centroids.
7. Update the membership, non-membership, and hesitation values.

3) Particle Swarm Optimization (PSO): Particle swarm optimization (PSO) is a population-based stochastic optimization technique inspired by bird flocking and fish schooling [14], which proceeds in iterations or generations. The algorithmic flow of PSO starts with a population of particles whose positions represent potential solutions of the intended problem and whose velocities are randomly initialized in the search space. The search for the optimal position proceeds by updating the particle velocities and positions in each iteration. In each iteration, the fitness value of each particle's position is determined using a fitness function, and the velocity of each particle is updated using the two best positions: the personal best position, pbest, which is the best position the particle has visited, and gbest, which is the best position the swarm has visited since the first time step. A particle's velocity and position are updated as follows:

V(t + 1) = wV(t) + c1 r1 (pbest(t) − X(t)) + c2 r2 (gbest(t) − X(t))    (18)

X(t + 1) = X(t) + V(t + 1)    (19)

where X and V are the position and velocity of the particle, respectively; w is the inertia weight; c1 and c2 are positive constants, called acceleration coefficients, which control the influence of pbest and gbest on the search process; P is the number of particles in the swarm; and r1 and r2 are random values in the range [0, 1].

4) Fuzzy PSO Algorithm: A particle swarm optimization combined with fuzzy set theory is called fuzzy particle swarm optimization (FPSO) [15]. Using a fuzzy relation between variables, FPSO redefines the position and velocity of the particles, and it has also been applied to the clustering problem. In this method the position of a particle, X, is the fuzzy relation from the set of data objects O = {o_1, o_2, ..., o_n} to the set of cluster centers Z = {z_1, z_2, ..., z_c}, expressed as follows:

X = [ μ_11 ... μ_1c ; ... ; μ_n1 ... μ_nc ]    (20)

Here, μ_ij is the membership function of the i-th object with the j-th cluster, subject to the constraints

μ_ij ∈ [0, 1],  i = 1, 2, ..., n;  j = 1, 2, ..., c    (21)

Σ_{j=1}^{c} μ_ij = 1,  i = 1, 2, ..., n    (22)

Therefore the position matrix of each particle is the same as the fuzzy matrix of the FCM algorithm. The velocity of each particle is likewise stated using a matrix with n rows and c columns, the elements of which lie in the range [−1, 1]. Equations (18) and (19), applied matrix-wise, are used for updating the positions and velocities of the particles.
After updating the position matrix, it may violate the constraints given in (21) and (22), so it is necessary to normalize the position matrix. First, all negative elements of the matrix are set to zero. If all elements in a row of the matrix are zero, they are re-evaluated using a series of random numbers within the interval [0, 1]. Then the matrix undergoes the following transformation, which restores the constraints:

X_normal = [ μ_11/Σ_{j=1}^{c} μ_1j ... μ_1c/Σ_{j=1}^{c} μ_1j ; ... ; μ_n1/Σ_{j=1}^{c} μ_nj ... μ_nc/Σ_{j=1}^{c} μ_nj ]    (23)

This technique uses the following equation as the fitness function for evaluating the solutions:

f(X) = K / J_m    (24)

Here, K is a constant and J_m is the objective function of the FCM algorithm (Eq. (4)). The smaller J_m is, the better the clustering effect and the higher the individual fitness f(X). The termination condition in this method is reaching the maximum number of iterations or no improvement in gbest over a number of iterations.
The FCM algorithm is quicker than the FPSO algorithm because it needs far fewer function evaluations, but it normally falls into local optima. The FCM algorithm is therefore incorporated into the FPSO algorithm to form a hybrid clustering algorithm called FCM-FPSO, which maintains the merits of both the FCM and PSO algorithms.

Algorithm 3. Fuzzy PSO
Input: Dataset
Output: Objective values
Step 1. Initialize the parameters, including the population size P, c1, c2, w, and the maximum iteration count.
Step 2. Create a swarm with P particles (X, pbest, gbest and V are n × c matrices).
Step 3. Initialize X, V, pbest for each particle and gbest for the swarm.
Step 4. Calculate the cluster centers for each particle using

z_j = Σ_{i=1}^{n} μ_ij^m o_i / Σ_{i=1}^{n} μ_ij^m    (25)

Step 5. Calculate the fitness value of each particle using Eq. (24).
Step 6. Calculate pbest for each particle.
Step 7. Calculate gbest for the swarm.
Step 8. Update the velocity matrix for each particle using Eq. (18).
Step 9. Update the position matrix for each particle using Eq. (19).
Step 10. If the terminating condition is not met, go to Step 4.

IV. PROPOSED APPROACH

Dataset → Preprocessing → Applying IFPSO → Applying IFCM → Generating cluster patterns

Fig. 1. Framework of the proposed work

A. Intuitionistic Fuzzy Particle Swarm Optimization (IFPSO)
Existing approaches work fine for data sets that are not corrupted with noise, but if the data set is noisy or distorted they classify noisy points wrongly because of their abnormal feature data, which results in incorrect memberships and improper clustering. The same problem is faced by the fuzzy particle swarm optimization approach. The problem of abnormal features existing among the particles can be overcome by introducing intuitionistic fuzzy based particle swarm optimization, which is the generalization of fuzzy based particle swarm optimization. In this approach each particle is concerned not only with the membership function; the indeterministic degree is also taken into consideration for handling the abnormality problem. The abnormality problem arises due to inconsistency in the particles' position information.
The proposed IFPSO thus introduces another degree for handling the uncertainty problem among the particles: the hesitation degree, also known as the indeterministic degree.

Algorithm 4. IFPSO algorithm
Input: Dataset
Output: Objective values
Step 1. Initialize the parameters, including the population size P, c1, c2, w, and the maximum iteration count.
Step 2. Create a swarm with P particles (X, pbest, gbest and V are n × c matrices), where each particle's position matrix is

X = [ μ_11 ... μ_1c ; ... ; μ_n1 ... μ_nc ]    (26)

Step 3. Initialize X, V, pbest for each particle and gbest for the swarm.
Step 4. Calculate the cluster centers for each particle using Eq. (17).
Step 5. Calculate the fitness value of each particle using

f(X) = K / J_m    (27)

Step 6. Calculate pbest for each particle.
Step 7. Calculate gbest for the swarm.
Step 8. Update the velocity matrix for each particle using Eq. (18).
Step 9. Update the position matrix for each particle using Eq. (19).
Step 10. If the terminating condition is not met, go to Step 4.
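One iteration of the particle update used in Algorithms 3 and 4, i.e. Eqs. (18) and (19) followed by the normalization of Eq. (23), can be sketched as follows; the parameter values are illustrative assumptions:

```python
import numpy as np

def update_particle(X, V, pbest, gbest, w=0.7, c1=2.0, c2=2.0, rng=None):
    """One FPSO step: velocity update (Eq. (18)), position update (Eq. (19)),
    then normalization of the n x c membership matrix (Eq. (23))."""
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)   # Eq. (18)
    X = X + V                                                   # Eq. (19)
    # Normalization: clamp negatives to zero, re-seed all-zero rows with
    # random values in [0, 1], then rescale each row to sum to 1 (Eq. (23))
    X = np.maximum(X, 0.0)
    zero_rows = X.sum(axis=1) == 0
    X[zero_rows] = rng.random((int(zero_rows.sum()), X.shape[1]))
    X = X / X.sum(axis=1, keepdims=True)
    return X, V
```

Clamping, re-seeding all-zero rows, and row-wise rescaling restore the constraints of Eqs. (21) and (22) after every move.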

B. Hybrid Intuitionistic Fuzzy C-Means and Intuitionistic Fuzzy Particle Swarm Optimization
The IFCM algorithm is quicker than the IFPSO algorithm because it uses fewer function evaluations, but it normally falls into local optima. The IFCM algorithm is therefore incorporated with the IFPSO algorithm to form a hybrid clustering algorithm, called IFCM-IFPSO, which maintains the merits of both the IFCM and IFPSO algorithms. In the IFCM-IFPSO algorithm, IFCM is applied to the particles in the swarm every few iterations/generations such that the fitness value of each particle is improved. Algorithm 5 illustrates the hybrid IFCM-IFPSO.
Algorithm 5. IFCM-IFPSO algorithm
Input: Dataset
Output: Objective values
Step 1. Initialize the parameters of IFPSO and IFCM, including the population size P, c1, c2, w, and m.
Step 2. Create a swarm with P particles (X, pbest, gbest and V are n × c matrices).
Step 3. Initialize X, V, pbest for each particle and gbest for the swarm.
Step 4. IFPSO algorithm
4.1 Calculate the cluster centers for each particle using Eq. (17).
4.2 Calculate the fitness value of each particle using Eq. (24).
4.3 Calculate pbest for each particle.
4.4 Calculate gbest for the swarm.
4.5 Update the velocity matrix for each particle using Eq. (18).
4.6 Update the position matrix for each particle using Eq. (19).


Fig. 2. Results for Iris data set


4.7 If the IFPSO terminating condition is not met, go to Step 4.1.
Step 5. IFCM algorithm
5.1 Compute the cluster centers z_j, j = 1, 2, ..., c, by using Eq. (13).
5.2 Compute the proposed intuitionistic similarity measure D(x, y):
Fig. 3. Results for Yeast data set
D(A, B) = 1 − sqrt( (1/(2n)) Σ_{i=1}^{n} [ (μ_A(x_i) − μ_B(x_i))² + (ν_A(x_i) − ν_B(x_i))² + (π_A(x_i) − π_B(x_i))² ] )    (28)

Fig. 4. Results for Colon cancer data set

In this measure, A refers to the object and B refers to the centroid of the i-th cluster.
5.3 Update the membership function μ_ij, i = 1, 2, ..., n; j = 1, 2, ..., c, by using Eq. (15).
5.4 Calculate pbest for each particle.
5.5 Calculate gbest for the swarm.
5.6 If the IFCM terminating condition is not met, go to Step 5.1.
Step 6. If the IFCM-IFPSO terminating condition is not met, go to Step 4.
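Step 5.2's measure can be sketched as follows, assuming Eq. (28) is the normalized Euclidean distance between the (μ, ν, π) triplets of A and B turned into a similarity; the tuple-of-arrays representation is an illustrative assumption:

```python
import numpy as np

def ifs_similarity(a, b):
    """Similarity of Eq. (28) between two intuitionistic fuzzy sets A and B,
    each given as a (mu, nu, pi) tuple of arrays over the same n points.
    Returns 1 minus a normalized Euclidean distance; equals 1 when A == B."""
    (mu_a, nu_a, pi_a), (mu_b, nu_b, pi_b) = a, b
    n = len(mu_a)
    # Squared component-wise differences of the three degrees
    sq = (mu_a - mu_b) ** 2 + (nu_a - nu_b) ** 2 + (pi_a - pi_b) ** 2
    return 1.0 - np.sqrt(sq.sum() / (2.0 * n))
```

The 1/(2n) normalization keeps the result in [0, 1] for valid IFS triplets, since the squared differences of the three degrees sum to at most 2 per point.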


V. EXPERIMENTAL RESULTS
The algorithms discussed in the previous section have been
implemented using MATLAB. For evaluating the performance
of the proposed work, four different benchmark data sets are
taken into consideration.

A. Parameter settings
For optimized performance of IFPSO and IFCM-IFPSO, fine tuning was performed and the best values for their parameters were chosen. The algorithms achieve their best experimental results under the following settings: c1 and c2 are set to 2.0, the population size is 10, the inertia weight ranges from a minimum of 0.1 to a maximum of 0.9, and the weighting exponent m is 2, which is common to all the algorithms. The FCM and IFCM terminating condition is: if the centroid values of the previous iteration and the current iteration are the same, terminate the algorithm. The FPSO and IFPSO terminating condition is reaching the maximum iteration value or no improvement in gbest for 1000 consecutive iterations. The IFCM-IFPSO terminating condition is met when the algorithm cannot improve gbest in 2 consecutive iterations.
Fig. 5. Results for Leukemia data set

TABLE I
GENE EXPRESSION DATA SETS

Data set       No. of samples   No. of genes
Yeast          79               2467
Colon cancer   62               2000
Leukemia       72               7129

TABLE II
BENCHMARK DATA SET

Data set   No. of attributes   No. of instances
Iris       4                   150

The experimental results over 100 independent runs for IFCM and 10 independent runs for IFPSO and IFCM-IFPSO are shown in Figures 2, 3, 4 and 5; the figures show the objective function values. As the figures indicate, the hybrid IFCM-IFPSO obtains better quality results than the other methods on all data sets, and it can escape from local optima. The figures below show the various generation runs of the proposed IFCM-IFPSO algorithm with the mean and best score values (Figures 6 to 9).
Figure 6 shows that the average cumulative change in the value of the fitness function over 50 generations was less than 1e-006, with a constraint violation of less than 1e-006, after 129 generations. Final best point: [0.99999 0.99998].

Fig. 6. Average cumulative change for 129 generations

Fig. 7. Mean and best score value for 129 generations

In a second run, the average cumulative change in the value of the fitness function over 50 generations was less than 1e-006, with a constraint violation of less than 1e-006, after 120 generations. Final best point: [1.0004 1.0009].
Figure 8 shows the same behavior after 116 generations: an average cumulative change in the fitness value over 50 generations of less than 1e-006 and a constraint violation of less than 1e-006. Final best point: [0.99997 0.9999].

VI. CONCLUSION
The clustering of high dimensional gene expression data is a very important problem in data mining, and many algorithms have been proposed for it. The fuzzy c-means algorithm is sensitive to initialization and is easily trapped in local optima. On the other hand, the fuzzy particle swarm algorithm is a global stochastic tool which can be implemented and applied easily to various function optimization problems, or to problems that can be converted to optimization
Fig. 8. Average cumulative change for 116 generations

Fig. 9. Mean and best score value for 116 generations


tasks. Both of them fail to handle the uncertainty condition, leaving out the concept of indeterminacy in the presence of vagueness or incompleteness in the clustered data set.
In this paper, in order to overcome the shortcomings of fuzzy c-means and fuzzy particle swarm optimization, an algorithm is proposed that accounts for indeterminacy, which leads to a noticeable improvement in the performance of clustering objects. Instead of considering only the membership value of each object in a cluster, the proposed approach takes the indeterministic value into account as an important factor in the case of incompleteness in the data set. Experimental results over four well-known data sets (Iris, Yeast, Colon cancer, and Leukemia) indicate that the suggested hybrid method is efficient and reveals very encouraging results in terms of the quality of the solutions found. The new hybrid method combining intuitionistic fuzzy c-means and intuitionistic fuzzy particle swarm optimization has been applied successfully to real-world data sets. The computational results show that the performance of the proposed algorithm is better than that of the other existing algorithms.
REFERENCES

[1] M. S. Chen, J. Han, P. S. Yu, "Data mining: An overview from database perspective," IEEE Trans. Knowledge and Data Engineering, 1996.
[2] A. K. Jain, M. N. Murty, P. J. Flynn, "Data clustering: A review," ACM Computing Surveys, vol. 31, no. 3, pp. 264-323, 1999.
[3] L. A. Zadeh, "Fuzzy sets," Information and Control, vol. 8, pp. 338-356, 1965.
[4] A. K. Jain, M. N. Murty, P. J. Flynn, "Data clustering: a review," ACM Computing Surveys, vol. 31, no. 3, pp. 264-323, 1999.
[5] B. S. Everitt, S. Landau, M. Leese, Cluster Analysis, Oxford University Press, 2001.
[6] K. Atanassov, "Intuitionistic fuzzy sets: past, present and future," in Proc. 3rd Conference of the European Society for Fuzzy Logic and Technology, pp. 12-19, 2003.
[7] H. Izakian, A. Abraham, "Fuzzy C-means and fuzzy swarm for fuzzy clustering problem," Expert Systems with Applications, vol. 38, pp. 1835-1838, 2011.
[8] E. Mehdizadeh, S. Sadi-Nezhad, R. Tavakkoli-Moghaddam, "Optimization of fuzzy clustering criteria by a hybrid PSO and fuzzy c-means clustering algorithm," Iranian Journal of Fuzzy Systems, vol. 5, no. 3, pp. 1-14, 2008.
[9] M. Mir, G. Tadayon Tabrizi, "Improving data clustering using fuzzy logic and PSO algorithm," in 20th Iranian Conference on Electrical Engineering (ICEE 2012), May 15-17, 2012, Tehran, Iran.
[10] E. Egrioglu, C. H. Aladag, U. Yolcu, "Fuzzy time series forecasting with a novel hybrid approach combining fuzzy c-means and neural networks," Expert Systems with Applications, vol. 40, pp. 854-857, 2013.
[11] T. A. Runkler, C. Katz, "Fuzzy clustering by particle swarm optimization," in 2006 IEEE International Conference on Fuzzy Systems, pp. 601-608, 2006, Canada.
[12] K. T. Atanassov, Intuitionistic Fuzzy Sets: Theory and Applications, Studies in Fuzziness and Soft Computing, Physica-Verlag, 1999.
[13] S. Shanthi, V. Murali Bhaskaran, "Intuitionistic fuzzy C-means and decision tree approach for breast cancer detection and classification," European Journal of Scientific Research, vol. 66, no. 3, pp. 345-351, 2011.
[14] J. Kennedy, R. Eberhart, Swarm Intelligence, Morgan Kaufmann, 2001.
[15] W. Pang, K. Wang, C. Zhou, L. Dong, "Fuzzy discrete particle swarm optimization for solving traveling salesman problem," in Proc. Fourth International Conference on Computer and Information Technology, pp. 796-800, 2004, IEEE CS Press.
[16] D. Jiang, C. Tang, A. Zhang, "Cluster analysis for gene expression data: a survey," IEEE Trans. Knowledge and Data Engineering, vol. 16, no. 11, pp. 1370-1386, 2004.