Lecture 7: Neural Networks Based on Competition
Lecturer: A/Prof. M. Bennamoun
Maxnet
Mexican Hat
Hamming Net
Kohonen Self-Organizing Maps (SOM)
SOM in Matlab
References and suggested reading
Introduction: Competitive Nets (Fausett)
The weight update for output (or cluster) unit j is given as:
w.j(new) = α x + (1 - α) w.j(old)
where x is the input vector, w.j is the weight vector for unit j, and α, the learning rate, decreases as learning proceeds.
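As a quick numeric illustration of this rule (the vectors and the decreasing α schedule below are illustrative, not from the lecture):

```python
import numpy as np

def update_winner(w_j, x, alpha):
    """Competitive weight update: move the winning unit's weight
    vector toward the input, w_j(new) = a*x + (1 - a)*w_j(old)."""
    return alpha * x + (1 - alpha) * w_j

w = np.array([0.5, 0.9])          # initial weight vector for the winner
x = np.array([1.0, 0.0])          # input vector
for t in range(1, 4):
    alpha = 0.6 / t               # illustrative decreasing learning rate
    w = update_winner(w, x, alpha)
print(w)                          # the weights drift toward the input vector
```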
Introduction: Competitive Nets (Fausett)
The first method computes the Euclidean distance between the i/p vector and each weight vector, and chooses the unit whose weight vector has the smallest Euclidean distance from the I/P vector.
The second method uses the dot product of the I/P vector and the weight vector. The dot product can be interpreted as giving the correlation between the I/P and weight vectors.
Introduction: Competitive Nets
Two ways to measure how close an i/p vector is to a weight vector:
1. Euclidean distance between two vectors:
   ||x - x_i|| = sqrt( (x - x_i)^t (x - x_i) )
2. Dot product of the two vectors.
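Both measures can pick the winning cluster unit; a minimal sketch (the weight columns here are illustrative):

```python
import numpy as np

x = np.array([1.0, 1.0, -1.0, -1.0])      # input vector
W = np.array([[0.9, -0.8],                # one weight column per cluster unit
              [0.7, -0.6],
              [-0.8, 0.9],
              [-0.6, 0.7]])

# Method 1: unit whose weight vector has the smallest Euclidean distance
dists = np.linalg.norm(W - x[:, None], axis=0)
winner_dist = int(np.argmin(dists))

# Method 2: unit whose weight vector has the largest dot product with x
dots = x @ W
winner_dot = int(np.argmax(dots))

print(winner_dist, winner_dot)            # both select unit 0 here
```

For normalized weight vectors the two methods agree; with unnormalized weights they can differ.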
MAXNET
Activation function:
f(net) = net if net >= 0; 0 if net < 0
Weight matrix (1 on the diagonal, -ε off the diagonal, with 0 < ε < 1):
W_M =
[  1  -ε ... -ε
  -ε   1 ... -ε
   :   :      :
  -ε  -ε ...  1 ]
Update rule: y(k+1) = f( W_M y(k) )
MAXNET
A recurrent network involving both excitatory
and inhibitory connections
Positive self-feedbacks and negative
cross-feedbacks
After a number of recurrences, the only nonzero node will be the one with the largest
initializing entry from i/p vector
11
Maxnet (Fausett)
Activation function:
f(x) = x if x > 0; 0 otherwise
Step 0: Set the initial activations a_j(0) = input to node A_j, and the weights:
w_ij = 1 if i = j; -ε otherwise (with 0 < ε < 1/m), for j = 1, ..., m
Competition: repeatedly update, for j = 1, ..., m,
a_j(new) = f( a_j(old) - ε Σ_{k≠j} a_k(old) )
until only one node has a nonzero activation.
Example
a3(5) = .0
a4(5) = .421
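The values above can be reproduced with a short Maxnet simulation. The initial activations (0.2, 0.4, 0.6, 0.8) and ε = 0.2 are assumptions, but they are consistent with a3(5) = 0 and a4(5) = .421:

```python
import numpy as np

def maxnet(a, eps, steps):
    """Iterate the Maxnet update a_j <- f(a_j - eps * sum of the others),
    where f(x) = max(x, 0), synchronously over all nodes."""
    a = np.asarray(a, dtype=float)
    for _ in range(steps):
        a = np.maximum(a - eps * (a.sum() - a), 0.0)
    return a

# assumed initial activations and eps (consistent with the slide's result)
a = maxnet([0.2, 0.4, 0.6, 0.8], eps=0.2, steps=5)
print(np.round(a, 3))   # only the node with the largest start value survives
```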
Mexican Hat
Architecture:
w2 = inhibitory connection
si = external activation
Mexican Hat
Each unit's activation at time t:
x_i(t) = f [ s_i(t) + Σ_k w_k x_{i+k}(t - 1) ]
where s_i is the external input to unit i.
Mexican Hat
Step 0: Initialize parameters R1, R2, C1, C2 and set the weights:
w_k = C1 (C1 > 0) for k = 0, ..., R1
w_k = C2 (C2 < 0) for k = R1 + 1, ..., R2
Initialize x_old to 0.
Step 1: Present the external signal: x = s.
Save activation in array x_old (for i = 1, ..., n): x_old_i = x_i
Set iteration counter: t = 1
Mexican Hat
Step 4: Compute the net input (for i = 1, ..., n):
x_i = C1 Σ_{k=-R1}^{R1} x_old_{i+k} + C2 Σ_{k=-R2}^{-R1-1} x_old_{i+k} + C2 Σ_{k=R1+1}^{R2} x_old_{i+k}
and apply the activation function:
f(x) = 0 if x < 0; x if 0 <= x <= x_max; x_max if x > x_max
Mexican Hat
Step 5: Save current activations in x_old: x_old_i = x_i (for i = 1, ..., n)
Step 6: Increment the iteration counter: t = t + 1
Step 7: Test the stopping condition: if t < t_max, continue (go to Step 4); otherwise stop.
Note:
The positive reinforcement from nearby units and negative reinforcement from units that are further away have the effect of increasing the activation of units with larger initial activations, and reducing the activations of those that had a smaller external signal.
Mexican Hat
Example:
f(x) = 0 if x < 0; x if 0 <= x <= 2; 2 if x > 2
Step 0: t = 0. The external signal s = (0.0, 0.5, 0.8, 1.0, 0.8, 0.5, 0.0); the middle node is the one to be enhanced.
Step 1: x(0) = (0.0, 0.5, 0.8, 1.0, 0.8, 0.5, 0.0)
Save in x_old: x_old = (0.0, 0.5, 0.8, 1.0, 0.8, 0.5, 0.0)
Step 2: iterate (Steps 4-7).
Mexican Hat
Example (continued): repeating Step 4 at t = 1 and again at t = 2 sharpens the activation profile around the centre node.
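Those iterations can be reproduced with a short sketch. The parameters R1 = 1, R2 = 2, C1 = 0.6 and C2 = -0.4 are assumptions (they match Fausett's worked example); activations are clipped to [0, 2] as above:

```python
import numpy as np

def mexican_hat_step(x_old, C1=0.6, C2=-0.4, R1=1, R2=2, x_max=2.0):
    """One Mexican Hat update: excitation C1 from units within distance R1,
    inhibition C2 from units at distance R1+1..R2; clip result to [0, x_max]."""
    n = len(x_old)
    x = np.zeros(n)
    for i in range(n):
        for k in range(-R2, R2 + 1):
            if 0 <= i + k < n:                    # units outside the array contribute 0
                w = C1 if abs(k) <= R1 else C2
                x[i] += w * x_old[i + k]
    return np.clip(x, 0.0, x_max)

x = np.array([0.0, 0.5, 0.8, 1.0, 0.8, 0.5, 0.0])
x = mexican_hat_step(x)
print(np.round(x, 2))   # t = 1: the centre grows, the edges shrink
x = mexican_hat_step(x)
print(np.round(x, 2))   # t = 2: the profile sharpens further
```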
Hamming Network
The Hamming distance between two vectors is the number of components in which they differ, e.g. (1, 0, 1) and (1, 0, 0) differ in one component.
For two bipolar vectors x and y with n components, the dot product gives
x · y = a - d
where a is the number of components in which the vectors agree and d is the number in which they differ (the Hamming distance). Since n = a + d, the number of agreements can be recovered as a = (x · y + n)/2.
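A quick numeric check of the x · y = a - d identity (the example vectors here are illustrative):

```python
import numpy as np

x = np.array([1, -1, 1, 1])    # illustrative bipolar vectors
y = np.array([1, -1, -1, 1])

a = int(np.sum(x == y))        # number of agreements
d = int(np.sum(x != y))        # Hamming distance (disagreements)

print(int(x @ y), a - d)       # the dot product equals a - d
```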
Hamming Network
Architecture
The sample architecture shown assumes input vectors are 4-tuples, to be categorized as belonging to one of two classes.
Hamming Network
Algorithm
Given a set of m bipolar exemplar vectors e(1), e(2), ..., e(m), the Hamming net can be used to find the exemplar that is closest to the bipolar i/p vector x.
The net i/p y_in_j to unit Y_j gives the number of components in which the i/p vector and the exemplar vector for unit Y_j, e(j), agree (n minus the Hamming distance between these vectors).
Nomenclature:
n = number of components in each i/p vector
e(j) = the jth exemplar vector: e(j) = (e_1(j), ..., e_i(j), ..., e_n(j))
Hamming Network
Algorithm
Step 0: To store the m exemplar vectors, initialise the weights and biases:
w_ij = e_i(j)/2 and b_j = n/2
Step 1: For each i/p vector x, compute the net input to each unit Y_j:
y_in_j = b_j + Σ_i x_i w_ij, for j = 1, ..., m
Step 2: Initialise the Maxnet activations:
y_j(0) = y_in_j, for j = 1, ..., m
Maxnet then iterates to select the unit with the largest net input.
Hamming Network
Example
The two exemplar vectors are:
e(1) = (1, -1, -1, -1)
e(2) = (-1, -1, -1, 1)
The Hamming net can be used to find the exemplar that is closest to each of the bipolar i/p patterns (1, 1, -1, -1), (1, -1, -1, -1), (-1, -1, -1, 1), and (-1, -1, 1, 1).
Step 0: Store the m exemplar vectors in the weights, where w_ij = e_i(j)/2:
W =
[  .5  -.5
  -.5  -.5
  -.5  -.5
  -.5   .5 ]
Initialize the biases: b1 = b2 = 2 = n/2
Hamming Network
Example
For the vector x = (1, 1, -1, -1), let's do the following steps:
y_in_1 = b1 + Σ_i x_i w_i1 = 2 + 1 = 3
y_in_2 = b2 + Σ_i x_i w_i2 = 2 - 1 = 1
Since y1(0) > y2(0), Maxnet will find that unit Y1 has the best-matching exemplar for input vector x = (1, 1, -1, -1).
Hamming Network
Example
For the vector x = (1, -1, -1, -1), let's do the following steps:
y_in_1 = b1 + Σ_i x_i w_i1 = 2 + 2 = 4
y_in_2 = b2 + Σ_i x_i w_i2 = 2 + 0 = 2
Since y1(0) > y2(0), Maxnet will find that unit Y1 has the best-matching exemplar for input vector x = (1, -1, -1, -1).
Hamming Network
Example
For the vector x = (-1, -1, -1, 1), let's do the following steps:
y_in_1 = b1 + Σ_i x_i w_i1 = 2 + 0 = 2
y_in_2 = b2 + Σ_i x_i w_i2 = 2 + 2 = 4
Note that the input agrees with e(1) in the 2nd and 3rd components and agrees with e(2) in all four components:
y1(0) = 2
y2(0) = 4
Since y2(0) > y1(0), Maxnet will find that unit Y2 has the best-matching exemplar for input vector x = (-1, -1, -1, 1).
Hamming Network
Example
For the vector x = (-1, -1, 1, 1), let's do the following steps:
y_in_1 = b1 + Σ_i x_i w_i1 = 2 - 1 = 1
y_in_2 = b2 + Σ_i x_i w_i2 = 2 + 1 = 3
Note that the input agrees with e(1) in the 2nd component and agrees with e(2) in the 1st, 2nd, and 4th components:
y1(0) = 1
y2(0) = 3
Since y2(0) > y1(0), Maxnet will find that unit Y2 has the best-matching exemplar for input vector x = (-1, -1, 1, 1).
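All four cases can be checked in one sketch; a minimal Hamming net layer, taking the winner with argmax in place of running Maxnet to convergence:

```python
import numpy as np

exemplars = np.array([[1, -1, -1, -1],     # e(1)
                      [-1, -1, -1, 1]])    # e(2)
n = exemplars.shape[1]
W = exemplars.T / 2.0                      # w_ij = e_i(j) / 2
b = n / 2.0                                # b_j = n / 2

inputs = np.array([[1, 1, -1, -1],
                   [1, -1, -1, -1],
                   [-1, -1, -1, 1],
                   [-1, -1, 1, 1]])

y_in = inputs @ W + b                      # net input = number of agreements
winners = np.argmax(y_in, axis=1)          # index of the best-matching exemplar
print(y_in)                                # rows: [3 1], [4 2], [2 4], [1 3]
print(winners)                             # [0 0 1 1] -> Y1, Y1, Y2, Y2
```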
Kohonen Self-Organizing Maps (SOM) (Fausett)
(SOM) Architecture:
Linear array of cluster units, showing neighborhoods of the winning unit [#]:
  { * ( * [#] * ) * }    with [ ] R = 0, ( ) R = 1, { } R = 2
Rectangular grid: the winning unit # lies at the centre of nested square neighborhoods of radius R = 0, R = 1, R = 2.
(SOM) Architecture (continued): neighborhoods of radius R = 0, R = 1, R = 2 around the winning unit on a two-dimensional grid.
(SOM) Algorithm (Fausett):
Step 0: Initialise the weights w_ij, the topological neighborhood parameters, and the learning rate α.
Step 1: While the stopping condition is false, do Steps 2-8.
Step 2: For each input vector x, do Steps 3-5.
Step 3: For each unit j, compute D(j) = Σ_i (w_ij - x_i)²
Step 4: Find the index J such that D(J) is a minimum.
Step 5: For all units j within the neighborhood of J, and for all i:
        w_ij(new) = w_ij(old) + α [x_i - w_ij(old)]
Step 6: Update the learning rate.
Step 7: Reduce the radius of the topological neighborhood at specified times.
Step 8: Test the stopping condition.
(SOM) Example (Fausett):
Cluster the following four vectors, using m = 2 cluster units, R = 0, and initial learning rate α = .6:
s(1) = (1, 1, 0, 0)
s(2) = (0, 0, 0, 1)
s(3) = (1, 0, 0, 0)
s(4) = (0, 0, 1, 1)
(SOM) Example:
Step 0: Initial weight matrix:
W =
[ .2  .8
  .6  .4
  .5  .7
  .9  .3 ]
Steps 2-3: For s(1) = (1, 1, 0, 0), compute the distances:
D(1) = (.2 - 1)² + (.6 - 1)² + .5² + .9² = 1.86
D(2) = (.8 - 1)² + (.4 - 1)² + .7² + .3² = .98
(SOM) Example:
Step 4: D(2) is the minimum, so J = 2.
Step 5: Update the second column of W (α = .6):
W =
[ .2  .92
  .6  .76
  .5  .28
  .9  .12 ]
(SOM) Example:
For s(2) = (0, 0, 0, 1):
Steps 2-3: D(1) = .66 and D(2) = 2.28, so J = 1.
Step 5: Update the first column of W:
W =
[ .08  .92
  .24  .76
  .20  .28
  .96  .12 ]
(SOM) Example:
For s(3) = (1, 0, 0, 0):
Steps 2-3: D(1) = 1.87 and D(2) = .68, so J = 2.
Step 5: Update the second column of W:
W =
[ .08  .968
  .24  .304
  .20  .112
  .96  .048 ]
(SOM) Example:
For s(4) = (0, 0, 1, 1):
Steps 2-3: D(1) = .71 and D(2) = 2.72, so J = 1.
Step 5: Update the first column of W:
W =
[ .032  .968
  .096  .304
  .680  .112
  .984  .048 ]
(SOM) Example:
After a second epoch of training (with a reduced learning rate), the weight matrix is:
W =
[ .016  .980
  .047  .360
  .630  .055
  .999  .024 ]
Modifying the learning rate in this way, after 100 iterations (epochs) the weight matrix appears to be converging to:
W =
[ .0   1.0
  .0   0.5
  .5   0.0
  1.0  0.0 ]
Each weight vector (column) converges to the mean of the input vectors in its cluster: column 1 to the mean of s(2) and s(4), column 2 to the mean of s(1) and s(3).
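The epoch-1 updates above can be reproduced with a minimal winner-take-all SOM sketch, assuming R = 0 and α = .6 as in the example:

```python
import numpy as np

def som_epoch(W, inputs, alpha):
    """One epoch of Kohonen SOM training with neighborhood radius R = 0:
    for each input, find the closest weight column (squared Euclidean
    distance) and move only that column toward the input."""
    for x in inputs:
        D = ((W - x[:, None]) ** 2).sum(axis=0)   # D(j) = sum_i (w_ij - x_i)^2
        J = int(np.argmin(D))                     # winning unit
        W[:, J] += alpha * (x - W[:, J])          # w_iJ += alpha (x_i - w_iJ)
    return W

W = np.array([[0.2, 0.8],
              [0.6, 0.4],
              [0.5, 0.7],
              [0.9, 0.3]])
s = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])

W = som_epoch(W, s, alpha=0.6)
print(np.round(W, 3))   # matches the weight matrix after the first epoch
```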
SOM in Matlab
Architecture
Creating a SOM
Suggested Reading:
1. L. Fausett, Fundamentals of Neural Networks, Prentice-Hall, 1994, Chapter 4.
2. J. Zurada, Artificial Neural Systems, West Publishing, 1992, Chapter 7.
References:
These lecture notes were based on the references on the previous slide, and the following references:
1. Berlin Chen, lecture notes, National Taiwan Normal University, Taipei, Taiwan, ROC. http://140.122.185.120