
Abstract - In competitive electricity markets, the distribution service providers have been given new degrees of freedom in formulating dedicated tariff offers to be applied to properly defined customer classes. For this purpose, they may take advantage of identifying the consumption patterns of their customers and grouping together customers exhibiting similar load diagrams. In this paper, we report on results obtained by using various unsupervised clustering algorithms (modified follow-the-leader, k-means, fuzzy k-means and two types of hierarchical clustering) and the Self Organising Map to group together customers having a similar electrical behaviour. The customer classification is investigated in the paper by considering a set of 234 non-residential customers. Each customer is characterised by a set of values extracted from the load diagrams in a given loading condition. The effectiveness of the classifications obtained with the various algorithms has been compared in terms of a set of adequacy indicators based on properly defined metrics, some of which have been originally developed by the authors. The results show that the modified follow-the-leader and one type of hierarchical clustering exhibit better characteristics than the other algorithms in terms of adequacy.

Index Terms - Load diagrams, Customer Classification, Clustering, Follow-the-leader, Self Organising Map, k-means, Fuzzy k-means, Hierarchical clustering.
I. INTRODUCTION
After the introduction of competitive electricity markets in
several countries, the electricity distribution sector has been
subject to a number of significant changes, resulting in an
increased number of operators playing in a business system
[1],[2]. Moreover, the evolution of the electricity
distribution service regulation has given the electricity
distribution providers new possibilities of formulating
dedicated tariffs, to be applied to different classes of
electricity customers under a set of constraints (price or
revenue caps) imposed by the regulation.
For the definition of suitable tariff structures, most
existing classifications based on the type of activity are
scarcely correlated to the actual evolution of the electrical
consumption [3]. Distribution providers may benefit from identifying the consumption patterns of their customers and grouping together customers exhibiting
similar load diagrams [1],[2],[4],[5]. Practically, the whole
set of customers can be preliminarily partitioned into
macro-categories on the basis of global criteria (e.g.,
residential, non-residential, lighting, etc.). Inside each
macro-category, a refined classification can be performed
by taking into account the actual electrical behaviour of the
consumers. This classification can be performed by means
of suitable clustering techniques, resulting in a set of
representative load profiles associated to each customer
class [6].
Application of Clustering Algorithms and Self
Organising Maps to Classify Electricity Customers

Gianfranco Chicco, Roberto Napoli and Federico Piglione

Starting from a given number of daily load diagrams
extracted from monitoring the customers' load in a
specified loading condition (e.g., working days during a
given seasonal period), it is possible to identify a single
load diagram for each customer by properly averaging the
load diagram data. The representative load diagram of
each customer can then be built by normalising these data
with respect to a reference power. The maximum value of
the load diagram can be assumed as reference power, such
that all values of the representative load diagram lie in the
(0,1) range.
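The averaging-and-normalisation step described above can be sketched as follows (a minimal Python illustration; the function name and the toy two-day data set are ours, not from the measurement campaign):

```python
import numpy as np

def representative_diagram(daily_diagrams):
    """Average several daily load diagrams (one per row) and normalise
    the result to its maximum value, taken as the reference power, so
    that every entry of the representative diagram lies in (0, 1]."""
    mean_diagram = np.asarray(daily_diagrams, dtype=float).mean(axis=0)
    reference_power = mean_diagram.max()   # maximum value as reference power
    return mean_diagram / reference_power, reference_power

# two working-day diagrams sampled at 4 points (toy data)
days = [[10.0, 40.0, 30.0, 20.0],
        [14.0, 44.0, 34.0, 24.0]]
profile, p_ref = representative_diagram(days)
```

The reference power is kept alongside the normalised shape, since the follow-the-leader centre update described later needs it to weight each diagram.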
The set of representative load diagrams of the selected
customers can be used to define a number of customer
classes according to a load shape criterion. For this
purpose, a suitable clustering technique is needed to form the
customer classes on the basis of the load diagrams.
Different types of clustering techniques have been
proposed in the literature for assisting load profiling,
including applications of classical clustering and statistical
techniques [3],[7],[8],[9],[10], neural networks
[10],[11],[12], and fuzzy logic [13]. In this paper, we
investigate the effectiveness of various clustering
algorithms, some of which have been adapted by the
authors to fit the needs of the customer classification, and
compare their performance by using a set of non-residential
load diagrams. We take into account algorithms based on a
modified follow-the-leader strategy, k-means, fuzzy k-
means, two types of hierarchical clustering and the Self
Organising Map (SOM).
The clustering results lead to the formation of the class
representative load diagrams for each customer class, built
on the basis of the load diagrams aggregated in the same
customer class, which may represent the load profiles
adopted for tariff purposes. The number and composition
of the customer classes depend on the clustering algorithm
used. In this paper, the effectiveness of the clustering
algorithms is compared by using properly defined metrics,
able to rank the clustering adequacy, some of which have
been originally developed by the authors.

The paper is structured as follows. Section II reviews the clustering techniques used. Section III deals with the adequacy assessment of the clustering algorithms and introduces four adequacy indicators belonging to two types of metrics. Section IV shows the results of applying the clustering techniques to a real case, and presents a comparison among these techniques in terms of adequacy. The last section contains the concluding remarks.

The authors are with the Dipartimento di Ingegneria Elettrica Industriale, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Torino, Italy (e-mail gianfranco.chicco@polito.it, roberto.napoli@polito.it, federico.piglione@polito.it).
0-7803-7967-5/03/$17.00 2003 IEEE
Paper accepted for presentation at 2003 IEEE Bologna Power Tech Conference, June 23-26, Bologna, Italy
II. CLUSTERING TECHNIQUES
Starting from an initial set of M samples, we compare the
performance of different procedures to group the load
diagrams on the basis of their distinguishing features. We
consider four unsupervised clustering algorithms
(hierarchical clustering, k-means, fuzzy k-means, and a
modified follow-the-leader procedure) and the uni-dimensional Self Organising Map. Their characteristics are
briefly illustrated in the sequel.
Fig. 1. Hierarchical clustering using the average distance criterion.

A. Hierarchical clustering
In hierarchical clustering [14] there are initially M singleton clusters, as many as the number of sample data. Firstly, an M x M similarity matrix is built using some distance criterion (e.g., the Euclidean norm). Afterwards,
the M samples are grouped into binary clusters using a
linkage criterion based on the previously computed
similarity matrix. The process is iteratively repeated by
merging the clusters of each level into bigger ones at the
upper level until all samples are grouped in one cluster.
The history of the process is kept in order to form a binary
tree structure, whose root is the cluster that contains the
whole data set.

The linkage criterion measures the similarity between clusters at each level and determines the cluster formation at the upper level. We use two different linkage criteria: average distance and Ward [15].
With the average distance criterion, the grouping of two clusters s and t is decided by means of the average distance d_A(s,t) between all pairs of objects in the two clusters:

d_A(s,t) = \frac{1}{n^{(s)} n^{(t)}} \sum_{i=1}^{n^{(s)}} \sum_{j=1}^{n^{(t)}} d(\mathbf{x}_i^{(s)}, \mathbf{x}_j^{(t)})    (1)

where n^{(s)} and n^{(t)} are the number of objects in s and t, \mathbf{x}_i^{(s)} and \mathbf{x}_j^{(t)} are the generic objects in s and t, and the distance d(.) is evaluated by using the Euclidean norm.

Fig. 2. Hierarchical clustering using the Ward linkage criterion.
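The average-distance agglomeration of (1) can be sketched in a few lines of Python (a naive O(M^3) illustration on toy one-dimensional samples; a production implementation would rely on an optimised library routine):

```python
import numpy as np

def average_linkage_clusters(X, n_clusters):
    """Naive agglomerative clustering with the average-distance
    criterion of (1): repeatedly merge the two clusters whose mean
    pairwise Euclidean distance is smallest, until n_clusters remain."""
    X = np.asarray(X, dtype=float)
    clusters = [[i] for i in range(len(X))]          # start from M singletons
    while len(clusters) > n_clusters:
        best = (None, None, np.inf)
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # d_A(s, t): average distance over all cross pairs
                d = np.mean([np.linalg.norm(X[i] - X[j])
                             for i in clusters[a] for j in clusters[b]])
                if d < best[2]:
                    best = (a, b, d)
        a, b, _ = best
        clusters[a] = clusters[a] + clusters[b]      # merge cluster t into s
        del clusters[b]
    return clusters

# toy samples: two tight groups on a line
X = [[0.0], [0.1], [5.0], [5.1]]
groups = [sorted(c) for c in average_linkage_clusters(X, 2)]
```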
The hierarchical tree of Fig. 1 is obtained by grouping the load profiles of the data set with the average distance criterion. The samples are represented in the horizontal axis, whereas the distances between clusters are in the vertical axis. The height of the vertical branches represents the distance between each pair of merged clusters. The clusters are then constructed by choosing in the binary tree the maximum admissible distance, or by selecting directly the distance corresponding to the desired number of clusters.
In the Ward linkage criterion, the clusters are formed in order to minimise the increase of the within-cluster sums of squares. The distance d_W(s,t) between two clusters s and t is then measured as the increase of these sums of squares if the two clusters were merged:

d_W(s,t) = \sqrt{\frac{2\, n^{(s)} n^{(t)}}{n^{(s)} + n^{(t)}}}\; d(\bar{\mathbf{x}}^{(s)}, \bar{\mathbf{x}}^{(t)})    (2)

where \bar{\mathbf{x}}^{(s)} and \bar{\mathbf{x}}^{(t)} are the centres of the two clusters. Fig. 2 shows the hierarchical tree obtained by using the Ward criterion. The comparison of the two binary trees shows that the average distance criterion forms large clusters of similar samples and rejects the very dissimilar ones in small or singleton clusters, whereas the Ward criterion, which weighs the increase of the within-cluster sums of squares, prevents the formation of large clusters.

B. k-means
The classical k-means clustering [16] groups a data set of samples x^{(m)} (m = 1, ..., M) into k = 1, ..., K clusters by means of an iterative procedure. A first guess is made for the K cluster centres c^{(k)} (usually chosen in a random fashion among the samples of the data set). The K centres classify the samples in the sense that the sample x^{(m)} belongs to cluster k if the distance ||x^{(m)} - c^{(k)}|| is the minimum of all the K distances. The estimated centres are used to classify the samples into clusters (usually by the Euclidean norm) and their values c^{(k)} are recalculated. The procedure is repeated until stabilisation of the cluster centres. Clearly, the optimal number of clusters is not known a priori and the clustering quality depends on the value of K.
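The iterative k-means procedure can be sketched as follows (an illustrative Python implementation on toy two-dimensional samples; the fixed iteration budget and the seed are our simplifications of the "repeat until stabilisation" rule):

```python
import numpy as np

def k_means(X, K, n_iter=50, seed=0):
    """Plain k-means: random initial centres drawn from the samples,
    Euclidean assignment, centre recomputation until stabilisation."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=K, replace=False)]
    for _ in range(n_iter):
        # assign each sample to its nearest centre (Euclidean norm)
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute each centre as the mean of its assigned samples
        new_centres = np.array([X[labels == k].mean(axis=0)
                                if np.any(labels == k) else centres[k]
                                for k in range(K)])
        if np.allclose(new_centres, centres):   # cluster centres stabilised
            break
        centres = new_centres
    return centres, labels

X = [[0.0, 0.0], [0.2, 0.0], [4.0, 4.0], [4.2, 4.0]]
centres, labels = k_means(X, K=2)
```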
C. Fuzzy k-means
Fuzzy k-means clustering [17] is rather similar to k-means clustering, but each sample x^{(m)} has a grade of membership a_{mk} to each cluster k. The procedure is initialised by choosing K samples c^{(k)} as cluster centres and assigning a membership degree with respect to the K centres to the M samples of the data set. Each cluster centre c^{(k)} is then updated by replacing its value with the fuzzy mean of all the samples with regard to the cluster k:

\mathbf{c}^{(k)} = \left( \sum_{m=1}^{M} a_{mk}^2 \, \mathbf{x}^{(m)} \right) \Big/ \left( \sum_{m=1}^{M} a_{mk}^2 \right)    (3)
The procedure is repeated until stabilisation of the cluster
centres. Again, the number of clusters and membership
criteria are user-defined parameters, which have to be
tuned by trial-and-error.
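A compact sketch of the update (3) is given below (illustrative Python; the memberships are the common choice of normalised inverse squared distances, and the deterministic, evenly spaced initial centres replace the random initialisation for reproducibility, both our simplifications):

```python
import numpy as np

def fuzzy_k_means(X, K, n_iter=100, eps=1e-9):
    """Fuzzy k-means with fuzzifier 2: memberships a_mk from inverse
    squared distances, centres updated with the fuzzy mean of (3)."""
    X = np.asarray(X, dtype=float)
    # deterministic initialisation: evenly spaced samples (simplification)
    idx = np.linspace(0, len(X) - 1, K).round().astype(int)
    centres = X[idx].astype(float)
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2) + eps
        a = 1.0 / d2
        a /= a.sum(axis=1, keepdims=True)        # memberships sum to 1 per sample
        w = a ** 2                               # squared memberships, as in (3)
        centres = (w.T @ X) / w.sum(axis=0)[:, None]
    return centres, a

X = [[0.0], [0.2], [4.0], [4.2]]
centres, memberships = fuzzy_k_means(X, K=2)
```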
D. Modified follow-the-leader
Let us consider the follow-the-leader procedure introduced in [18], which does not require initialisation of the number of clusters and uses an iterative process to compute the cluster centres. A first cycle of the algorithm sets the number K of clusters and the number of patterns n^{(k)} belonging to each cluster k = 1, ..., K by using a follow-the-leader approach depending on a distance threshold. The subsequent cycles refine the clusters, by possibly reassigning the patterns to the closest clusters. The procedure stops when the number of patterns changing clusters in a single cycle is zero. The process is essentially controlled by the distance threshold, which has to be chosen by a trial-and-error approach.
This procedure has been modified by the authors to fit the needs of the proposed classification, with two different objectives. The first objective is to take into account the different dispersion of the data in the input vector. For this purpose, the Euclidean metric used in the original algorithm has been modified by introducing for each index a weighting factor \sigma_h^2 / \bar{\sigma}^2, where \sigma_h^2 is the variance of the h-th feature computed from all the load diagrams in the initial population and \bar{\sigma}^2 is the average value of the variance for h = 1, ..., H. As such, the impact of the indices having a high variance is amplified in the computation of the weighted Euclidean distance.
The other objective deals with obtaining a final set of class representative load diagrams in which each diagram has a unity maximum value. Since this procedure updates the cluster centres automatically, a re-normalisation is needed at each step of the procedure. Each time a new diagram is added to an existing cluster, it is important to take into account the weight of each component, given by its reference power. For m = 1, ..., M, we assume that at the i-th cycle the pattern y^{(m)} of the m-th customer with reference power P^{(m)} has to be assigned to the existing q-th cluster with cluster centre r_i^{(q)} and reference power R_i^{(q)}. The new cluster centre is then obtained by using a dedicated procedure that computes the auxiliary vector z = R_i^{(q)} r_i^{(q)} + P^{(m)} y^{(m)}, extracts the maximum value z_max = max { z_h, h = 1, ..., H }, and updates the cluster centre r_i^{(q)} = z / z_max, its reference power R_i^{(q)} = z_max, and the number of cluster components n^{(q)} = n^{(q)} + 1. Updating a cluster centre may occur in any cycle i > 1, whenever a cluster loses one of its components in favour of another cluster. In this case, the cluster centre which gains a new component is updated in the same way shown above, while the centre of the cluster q_0 which loses the component y^{(m)} with reference power P^{(m)} is updated by computing the auxiliary vector z' = R_i^{(q_0)} r_i^{(q_0)} - P^{(m)} y^{(m)}, its maximum value z'_max = max { z'_h, h = 1, ..., H }, and by updating r_i^{(q_0)} = z' / z'_max, R_i^{(q_0)} = z'_max, and n^{(q_0)} = n^{(q_0)} - 1.
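The first cycle of a follow-the-leader pass with the variance-based weighting can be sketched as follows (illustrative Python; for brevity the centre update uses a plain mean of the cluster members rather than the reference-power renormalisation described above):

```python
import numpy as np

def follow_the_leader_pass(X, threshold):
    """One follow-the-leader cycle: each feature h is weighted by
    var_h / mean(var), so high-variance features count more in the
    distance. A sample joins the closest existing cluster if within
    `threshold`, otherwise it founds a new cluster."""
    X = np.asarray(X, dtype=float)
    var = X.var(axis=0)
    w = var / var.mean()                 # weighting factors sigma_h^2 / sigma_bar^2
    def dist(u, v):
        return np.sqrt(np.sum(w * (u - v) ** 2))
    centres, members = [X[0].copy()], [[0]]
    for m in range(1, len(X)):
        d = [dist(X[m], c) for c in centres]
        q = int(np.argmin(d))
        if d[q] <= threshold:
            members[q].append(m)                     # join the closest cluster
            centres[q] = X[members[q]].mean(axis=0)  # simplified centre update
        else:
            centres.append(X[m].copy())              # found a new cluster
            members.append([m])
    return centres, members

X = [[0.0, 0.0], [0.1, 0.1], [3.0, 3.0], [3.1, 3.0]]
centres, members = follow_the_leader_pass(X, threshold=1.0)
```

The number of clusters produced depends only on the threshold, matching the trial-and-error tuning described in the text.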
E. Self Organising Maps
The Kohonen SOM [19] is an unsupervised neural network that projects an N-dimensional data set onto a reduced dimension space (usually a uni-dimensional or bi-dimensional map). The projection is topology preserving, that is, the proximity among objects in the input space is approximately preserved in the output space. The SOM is composed of a pre-defined string or grid of N-dimensional units c^{(i)} that forms a competitive layer, where only one unit responds to the presentation of each input sample x^{(m)}. The activation function is an inverse function of ||x^{(m)} - c^{(i)}||, so that the unit that is closest to x^{(m)} wins the competition. The winning unit is then updated according to the relationship

c_new^{(i)} = c_old^{(i)} + \alpha ( x^{(m)} - c_old^{(i)} )    (4)

where \alpha is the learning rate. However, unlike other competitive methods, each unit is also characterised by its position in the grid. Therefore, the learning algorithm updates not only the weights of the winning unit but also the weights of its neighbouring units, in inverse proportion to their distance. The neighbourhood size of each unit shrinks progressively during the training process, starting with nearly the whole map and ending with the single unit. In this way, the map self-organises so that the units that are spatially close correspond to patterns that are similar in the original space. These units form receptive areas, named activity bubbles, that are encircled by units (dead units) that never win the competition for the samples of the training data set. However, the dead units are trained to form a connection zone amongst the activity bubbles and could win for samples not included in the training set.
The SOM performs an understandable projection of the original data in a reduced dimension space, so that clusters could be directly perceived by inspection. However, the effective cluster formation is not automatic, but requires further selection work by inspecting the activity bubbles in the bi-dimensional grid. In order to avoid this additional task, we use a uni-dimensional SOM (Fig. 3). If the number of units is equal to the desired number of clusters, the absence of dead units must be checked in the final solution. Alternatively, it is necessary to run the SOM with a larger number of units to compensate for the number of dead units in the solution.
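A minimal uni-dimensional SOM along these lines can be sketched as follows (illustrative Python; the Gaussian neighbourhood, the linear decay schedules and the deterministic initialisation are our simplifications of the training rules described above):

```python
import numpy as np

def train_som_1d(X, n_units, n_epochs=60):
    """Uni-dimensional SOM: a string of units; the winner and its grid
    neighbours move towards each sample, with learning rate and
    neighbourhood radius shrinking over the epochs (eq. (4) extended
    with a Gaussian neighbourhood factor)."""
    X = np.asarray(X, dtype=float)
    # deterministic initialisation: evenly spaced samples (simplification)
    idx = np.linspace(0, len(X) - 1, n_units).round().astype(int)
    units = X[idx].astype(float).copy()
    positions = np.arange(n_units)
    for epoch in range(n_epochs):
        alpha = 0.5 * (1.0 - epoch / n_epochs)                    # decaying rate
        radius = max(n_units / 2.0 * (1.0 - epoch / n_epochs), 0.5)
        for x in X:
            winner = int(np.argmin(np.linalg.norm(units - x, axis=1)))
            h = np.exp(-((positions - winner) ** 2) / (2.0 * radius ** 2))
            units += alpha * h[:, None] * (x - units)  # winner and neighbours move
    return units

X = [[0.0, 0.0], [0.1, 0.0], [4.0, 4.0], [4.1, 4.0]]
units = train_som_1d(X, n_units=2)
```

With as many units as desired clusters, each trained unit plays the role of a class centre, and a unit that never wins corresponds to a dead unit in the text.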


Fig. 3. Structure of the uni-dimensional SOM.

III. METRICS FOR ASSESSING CLUSTERING ADEQUACY
We consider any clustering algorithm forming K customer classes, from which we build the corresponding K class representative load diagrams by computing the weighted average of the initial load diagrams, assuming as weights the reference powers. In order to rank the adequacy of the clustering results, we define the distance d(r^{(k)}, L^{(k)}) between a representative load diagram r^{(k)} and the subset L^{(k)}, computed as the geometric mean of the Euclidean distances between r^{(k)} and each member of L^{(k)}, and the infra-set mean distance \hat{d}(L^{(k)}) as the geometric mean of the inter-distances between the members of L^{(k)}. We introduce various indicators, essentially belonging to two types of metrics. One of these metrics is based on the Euclidean distances among the representative load diagrams. The corresponding indicators are the Mean Index Adequacy (MIA) [3],[5],[12]:

MIA = \sqrt{ \frac{1}{K} \sum_{k=1}^{K} d^2(\mathbf{r}^{(k)}, L^{(k)}) }    (5)

and the Similarity Matrix Indicator (SMI), proposed by the authors [6], defined as the maximum off-diagonal element of the symmetrical similarity matrix, whose terms are built by computing a logarithmic function of the Euclidean distance between any pair of class representative load diagrams, for i, j = 1, ..., K:

SMI = \max_{i > j} \left\{ \left( 1 - \frac{1}{\ln d(\mathbf{r}^{(i)}, \mathbf{r}^{(j)})} \right)^{-1} \right\}    (6)

The other type of metric merges the information on the compactness of the load diagrams belonging to the same class and (inversely) on the inter-distance among the class representative load diagrams. It includes the Clustering Dispersion Indicator (CDI) [3],[5],[6]:

CDI = \frac{1}{\hat{d}(R)} \sqrt{ \frac{1}{K} \sum_{k=1}^{K} \hat{d}^2(L^{(k)}) }    (7)

where R is the set of the K class representative load diagrams, and a Euclidean form of the Davies-Bouldin Index (DBI) [20], representing the system-wide average of the similarity measures of each cluster with its most similar cluster, for i, j = 1, ..., K:

DBI = \frac{1}{K} \sum_{i=1}^{K} \max_{j \neq i} \left\{ \frac{ \hat{d}(L^{(i)}) + \hat{d}(L^{(j)}) }{ d(\mathbf{r}^{(i)}, \mathbf{r}^{(j)}) } \right\}    (8)

A common characteristic of all these indicators is the fact that lower values correspond to better adequacy. Since the indicators in (5), (7) and (8) explicitly include the number K of customer classes, a comparison among the clustering results makes sense only when the number of customer classes formed by the various algorithms is the same.

IV. APPLICATIONS TO ELECTRICITY CUSTOMER CLUSTERING
We consider a set of 234 non-residential customers connected to the MV distribution system [12]. The representative load diagram of each customer is obtained by averaging the data measured with a 15-minute cadence in a day with a given loading condition (spring weekdays). Then, each representative load diagram contains 96 values. Each clustering algorithm assigns the representative load diagram of each customer to a specific cluster, providing a complete and non-overlapping positioning of all customers.
Most of the clustering algorithms used require a preventive assignment of the number of clusters to be formed. The only exception is the modified follow-the-leader algorithm, in which the number of clusters decreases by increasing the distance threshold. As such, the distance threshold has been adjusted during the analysis in order to obtain the same numbers of clusters imposed to the other algorithms. Results of different types of analysis are presented in the sequel.
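The adequacy indicators of Section III can be sketched as follows (illustrative Python for MIA and CDI only, using the geometric-mean distances defined there; the toy clusters and variable names are ours):

```python
import numpy as np

def set_distance(r, L):
    """d(r, L): geometric mean of the Euclidean distances between a
    representative diagram r and every member of the subset L."""
    d = np.linalg.norm(np.asarray(L, dtype=float) - np.asarray(r, dtype=float), axis=1)
    return float(np.exp(np.mean(np.log(d))))

def infra_set_distance(L):
    """d_hat(L): geometric mean of the pairwise distances inside L."""
    L = np.asarray(L, dtype=float)
    d = [np.linalg.norm(L[i] - L[j])
         for i in range(len(L)) for j in range(i + 1, len(L))]
    return float(np.exp(np.mean(np.log(d))))

def mia(reps, clusters):
    """Mean Index Adequacy, eq. (5)."""
    return float(np.sqrt(np.mean([set_distance(r, L) ** 2
                                  for r, L in zip(reps, clusters)])))

def cdi(reps, clusters):
    """Clustering Dispersion Indicator, eq. (7): within-cluster spread
    over the spread of the representative diagrams themselves."""
    return float(np.sqrt(np.mean([infra_set_distance(L) ** 2 for L in clusters]))
                 / infra_set_distance(reps))

# two toy clusters of two diagrams each, representatives as plain means
clusters = [[[0.0, 0.1], [0.0, 0.3]], [[2.0, 2.0], [2.0, 2.4]]]
reps = [np.mean(c, axis=0) for c in clusters]
```

Lower values of both indicators mean better adequacy, in line with the ranking criterion used in the comparison below.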

Fig. 4. Clustering results for the modified follow-the-leader algorithm. Horizontal axis: quarters of hour. Vertical axis: per unit power.


Fig. 5. Clustering results for various clustering algorithms. Horizontal axis: quarters of hour. Vertical axis: per unit power.
a) Hierarchical (average)
b) Hierarchical (Ward)
c) k-means
d) Fuzzy k-means
e) uni-dimensional SOM

A. Customer class formation for a specified number of
classes
We run the clustering algorithms with the number of
customer classes set to K = 16. Fig. 4 and Fig. 5 show the
results obtained from the modified follow-the-leader
algorithm (with distance threshold set to 2.266) and from
the other clustering algorithms, respectively. In each plot,
the red line represents the class representative load
diagram, while the blue lines show the load diagrams
falling into each customer class. The results show that two algorithms, the modified follow-the-leader and the hierarchical clustering run with the average distance linkage criterion, are able to provide a highly detailed separation of the clusters, easily isolating uncommon load patterns.

B. Indicative evaluation on computational speed
We performed an indicative evaluation of the
computational speed for K = 16 customer classes. Even
though clustering analysis is normally performed off-line
and as such computational speed may not be a primary
issue, indications on computation times may be useful for
choosing the clustering algorithm to be used when the
number of customers is large. Computation time depends
on the PC usage, so that the duration indicated is the
average value obtained from several computations for the
same case. Table I presents a comparison among the
average computational speed obtained from a set of tests of
the algorithms. The Relative Computation Time Index
RCTI, defined for each algorithm as the ratio between its
computation time and the one of the k-means algorithm
(0.11 seconds on a Pentium III 733 MHz PC), is used to
easily represent the results. It can be noted that the k-
means, the modified follow-the-leader algorithm and the
uni-dimensional SOM exhibit a significantly fast
performance. Fig. 6 shows the bubbles of activity created
by the uni-dimensional SOM. Each unit corresponds to a
customer class, and the circle size is proportional to the
number of customers included into each class.


Fig. 6. Bubbles of activity created by the uni-dimensional SOM.
Fig. 7. Adequacy ranking with the MIA indicator.
Fig. 8. Adequacy ranking with the SMI indicator.
TABLE I
RELATIVE COMPUTATION TIME INDEX
Clustering algorithm           RCTI
k-means                           1
Modified follow-the-leader        4
uni-dimensional SOM               7
Hierarchical (Ward)              25
Hierarchical (average)           53
Fuzzy k-means                   270

Fig. 9. Adequacy ranking with the CDI indicator.
Fig. 10. Adequacy ranking with the DBI indicator.


C. Adequacy assessment of the clustering algorithms
The significant differences in computational speed could suggest using the k-means algorithm. However, the most important information for evaluating the effectiveness of the clustering algorithm to be used concerns clustering adequacy. We performed repeated computations of the clustering algorithms by varying the number of customer classes imposed, and we computed the adequacy indicators for all algorithms. The results are shown in Fig. 7 to Fig. 10. The analysis performed shows that the information provided by the adequacy indicators is highly consistent. In fact, the adequacy ranking (based on increasing values of the indicators) is nearly the same for the same number of customer classes.

V. CONCLUDING REMARKS
The results of the adequacy assessment show that two algorithms, the modified follow-the-leader and the hierarchical clustering run with the average distance linkage criterion, emerge as the most promising ones. Both algorithms are able to provide a highly detailed separation of the clusters, isolating load patterns with uncommon behaviour and creating large groups containing the remaining load diagrams. The other algorithms tend to distribute the load diagrams among the groups formed. An overall evaluation of the algorithms leads to consider the modified follow-the-leader as the most efficient one, on the basis of both clustering adequacy and computational speed.
The choice of the most convenient algorithm depends on the operator's needs. If a detailed customer classification is needed, a promising strategy could be using the modified follow-the-leader algorithm (or the hierarchical clustering run with the average distance linkage criterion) to obtain a first macro-classification, in which uncommon load diagrams are isolated. Then, the study may be refined by using the same algorithm on some macro-classes.

VI. REFERENCES
[1] S.V. Allera and A.G. Horsburgh, "Load profiling for the energy trading and settlements in the UK electricity markets," Proc. DistribuTECH Europe DA/DSM Conference, London, UK, 27-29 October 1998.
[2] P. Stephenson, I. Lungu, M. Paun, I. Silvas and G. Tupu, "Tariff Development for Consumer Groups in Internal European Electricity Markets," Proc. CIRED 2001, Amsterdam, The Netherlands, June 18-21, 2001, paper 5.3.
[3] G. Chicco, R. Napoli, P. Postolache, M. Scutariu and C. Toader, "Customer Characterisation Options for Improving the Tariff Offer," IEEE Trans. on Power Systems, 18, 1, February 2003, 381-387.
[4] C.S. Chen, M.S. Kang, J.C. Hwang and C.W. Huang, "Synthesis of power system load profiles by class load study," Electrical Power and Energy Systems 22 (2000) 325-330.
[5] G. Chicco, R. Napoli, P. Postolache, M. Scutariu and C. Toader, "Electric Energy Customer Characterization for Developing Dedicated Market Strategies," Proc. IEEE Porto PowerTech, Porto, Portugal, September 10-13, 2001, paper POM5-378.
[6] G. Chicco, R. Napoli, F. Piglione, P. Postolache, M. Scutariu and C. Toader, "A Review of Concepts and Techniques for Emergent Customer Categorisation," Proc. Telmark Discussion Forum, London, UK, September 2-4, 2002, paper 2_4.
[7] A.K. Jain, M.N. Murty and P.J. Flynn, "Data Clustering: a Review," ACM Computing Surveys 31, 3 (September 1999) 264-323.
[8] B.D. Pitt and D.S. Kirschen, "Application of Data Mining Techniques to Load Profiling," Proc. IEEE PICA'99, Santa Clara, CA, May 16-21, 1999, pp. 131-136.
[9] D. Gerbec, S. Gasperic, I. Simon and F. Gubina, "Hierarchic Clustering Methods for Consumers Load Profile Determination," Proc. 2nd Balkan Power Conference, Belgrade, Yugoslavia, 19-21 June 2002, pp. 9-15.
[10] A. Nazarko and Z.A. Styczynski, "Application of Statistical and Neural Approaches to the Daily Load Profile Modelling in Power Distribution Systems," Proc. IEEE Transm. and Distrib. Conference, New Orleans, LA, April 11-16, 1999, Vol. 1, pp. 320-325.
[11] R. Lamedica, L. Santolamazza, G. Fracassi, G. Martinelli and A. Prudenzi, "A Novel Methodology Based on Clustering Techniques for Automatic Processing of MV Feeder Daily Load Patterns," Proc. IEEE PES Summer Meeting 2000, Seattle, WA, July 16-20, 2000, Vol. 1, pp. 96-101.
[12] G. Chicco, R. Napoli, F. Piglione, P. Postolache, M. Scutariu and C. Toader, "Options to Classify Electricity Customers," Proc. MedPower 2002, Athens, Greece, November 4-6, 2002, paper MED02-234.
[13] A.P. Birch, C.S. Özveren and A.T. Sapeluk, "A generic load profile technique using fuzzy classification," Proc. IEE Metering and Tariffs for Energy Supply Conference, 3-5 July 1996, pp. 203-207.
[14] M.R. Anderberg, Cluster Analysis for Applications, Academic Press, New York, 1973.
[15] J.H. Ward, "Hierarchical grouping to optimise an objective function," Journal of the American Statistical Association, 58 (1963) 236-244.
[16] J.T. Tou and R.C. Gonzalez, Pattern Recognition Principles, Addison-Wesley, 1974.
[17] J.C. Bezdek, Pattern Recognition with Fuzzy Objective Function Algorithms, Plenum Press, New York, 1981.
[18] Y.-H. Pao and D.J. Sobajic, "Combined use of unsupervised and supervised learning for dynamic security assessment," IEEE Trans. on Power Systems 7, 2 (May 1992) 878-884.
[19] T. Kohonen, Self-Organisation and Associative Memory, 3rd Ed., Springer-Verlag, Berlin, 1989.
[20] D.L. Davies and D.W. Bouldin, "A Cluster Separation Measure," IEEE Trans. on Pattern Analysis and Machine Intelligence PAMI-1, 2 (April 1979) 224-227.

VII. BIOGRAPHIES
Gianfranco Chicco received his Ph.D. degree in Electrotechnical Engineering in Italy in 1992. He is Associate Professor of Distribution Systems at the Politecnico di Torino, Italy. His research activities include power systems and distribution systems analysis, competitive electricity markets, and power quality.
Roberto Napoli graduated in Electrotechnical Engineering at the Politecnico di Torino, Italy, in 1969. He is Professor of Electric Power Systems at the Politecnico di Torino and chairman of the Italian Electric Power Systems National Research Group. His research activities include power system analysis, planning and control, artificial intelligence applications, and competitive electricity markets.
Federico Piglione graduated in Electrotechnical Engineering at the Politecnico di Torino, Italy, in 1977. He is Associate Professor of Industrial Electrical Systems at the Politecnico di Torino, Italy. His major research interests include power system analysis, load forecasting, neural networks, and artificial intelligence applications to power systems.