
Unsupervised Learning, Part II: Self-Organizing Maps

Klinkhachorn:CpE520

Classification of ANN Paradigms

[Figure: classification of ANN paradigms, highlighting the unsupervised branch]


Self-Organizing Maps


Presented by Kohonen in 1988
Two-layer network: an input layer and a competitive layer

Develops a topological map of the relationships between the input patterns

An orderly progression of inputs produces an orderly progression of outputs

Teuvo Kohonen
Dr. Eng., Emeritus Professor of the Academy of Finland; Academician. His research areas are the theory of self-organization, associative memories, neural networks, and pattern recognition, in which he has published over 300 research papers and four monographs. His fifth book is on digital computers. His more recent work is expounded in the third, extended edition (2001) of his book Self-Organizing Maps.

Since the 1960s, Professor Kohonen has introduced several new concepts to neural computing: fundamental theories of distributed associative memory and optimal associative mappings, the learning subspace method, the self-organizing feature maps (SOMs), the learning vector quantization (LVQ), novel algorithms for symbol processing such as redundant hash addressing, dynamically expanding context, a special SOM for symbolic data, and a SOM called the Adaptive-Subspace SOM (ASSOM) in which invariant-feature filters emerge. A new SOM architecture, WEBSOM, has been developed in his laboratory for exploratory textual data mining. In the largest WEBSOM implemented so far, about seven million documents have been organized in a one-million-neuron network; for smaller WEBSOMs, see the demo at http://websom.hut.fi/websom/.

http://www.cis.hut.fi/research/som-research/teuvo.html


Self-Organizing Maps


SOM - Training algorithm


Determine the winner of the competition by either of two methods (both find the weight vector that best matches the input vector):

1. Compute the difference between the weight vector and the input vector for each neuron:

Vector difference: Dj = Σi (xi - wji)²

Winner: Dc = min(Dj)

2. Normalize the inputs (as presented to the network) and the weight vectors (as updated), then compute the dot product:

Dot product: Dj = Σi (xi * wji)

Winner: Dc = max(Dj)

(With normalized vectors, maximizing the dot product is equivalent to minimizing the vector difference.)
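A minimal NumPy sketch of both winner-selection rules; the names x (a single input vector) and W (the weight matrix, one row per competitive neuron) are illustrative assumptions, not from the slides:

```python
import numpy as np

def winner_by_distance(x, W):
    # Method 1: Dj = sum_i (xi - wji)^2; the winner minimizes Dj.
    d = np.sum((x - W) ** 2, axis=1)
    return int(np.argmin(d))

def winner_by_dot_product(x, W):
    # Method 2: normalize the input and each weight vector, then
    # Dj = sum_i xi * wji; the winner maximizes the dot product.
    xn = x / np.linalg.norm(x)
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    return int(np.argmax(Wn @ xn))
```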

Vector Normalization


Example - Preserve Direction


Divide each element of the vector by the magnitude of the vector.

If inputs = 6 and 8:
Magnitude = √(6² + 8²) = 10
Normalized inputs = 0.6 and 0.8

If inputs = 3 and 4:
Magnitude = √(3² + 4²) = 5
Normalized inputs = 0.6 and 0.8

Both vectors normalize to the same point: direction is preserved, but the difference in magnitude is lost.
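The same calculation as a one-function sketch (NumPy assumed):

```python
import numpy as np

def normalize(v):
    # Divide each element by the vector's magnitude (Euclidean norm).
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

print(normalize([6, 8]))   # [0.6 0.8]
print(normalize([3, 4]))   # [0.6 0.8] -- same direction, magnitude lost
```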



Example - Preserve relationship between vectors


Add another element that places the vector on the surface of a sphere of radius N, then divide by the magnitude N.

If inputs = 6 and 8, with N = 15:
d = √(15² - 10²) = 11.18
Normalized inputs = 0.4, 0.53, 0.75

If inputs = 3 and 4:
d = √(15² - 5²) = 14.14
Normalized inputs = 0.2, 0.27, 0.94

The two vectors now map to distinct points, so the relationship between them is preserved.
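The same example in code; the helper name sphere_normalize and the default N = 15 follow the slide's numbers:

```python
import numpy as np

def sphere_normalize(v, N=15.0):
    # Append a component d = sqrt(N^2 - |v|^2) so the extended vector lies
    # on a sphere of radius N, then divide by N. N is assumed to be at
    # least as large as the magnitude of any input vector.
    v = np.asarray(v, dtype=float)
    d = np.sqrt(N ** 2 - np.sum(v ** 2))
    return np.append(v, d) / N

print(sphere_normalize([6, 8]))   # [0.4  0.533 0.745]
print(sphere_normalize([3, 4]))   # [0.2  0.267 0.943]
```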



SOM - Training algorithm


Update the weights of the winner (Dc) and of all other neurons in the neighborhood of the winner:

Wji(t+1) = Wji(t) + a(xi - Wji(t)) for all neurons in the neighborhood of the winner

The choice of neighborhood (size and dimensionality) affects network performance and function. Both the neighborhood and the gain should decrease as time progresses.
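A sketch of one update step; grid (each neuron's coordinates in the competitive layer) and the other argument names are assumptions for illustration:

```python
import numpy as np

def update_weights(W, x, winner, radius, a, grid):
    # Wji(t+1) = Wji(t) + a * (xi - Wji(t)) for the winner and every
    # neuron whose grid distance to the winner is within `radius`.
    dist = np.linalg.norm(grid - grid[winner], axis=1)
    hood = dist <= radius              # boolean mask of the neighborhood
    W[hood] += a * (x - W[hood])
    return W
```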

SOM - Neighborhoods


SOM Parameter Adjustments


Gain parameter, a, should decrease as time progresses:
a = A*(1 - t/T)
where A = initial gain and T = final iteration time

Neighborhood radius, d, should also decrease as time progresses:
d = D*(1 - t/T)
where D = initial neighborhood and T = final iteration time
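Both schedules in code (the values of A, D, and T are illustrative, not from the slides):

```python
def gain(t, A=0.8, T=10_000):
    # a = A * (1 - t/T): linear decay from A to 0 over T iterations
    return A * (1 - t / T)

def neighborhood(t, D=5.0, T=10_000):
    # d = D * (1 - t/T): the neighborhood radius shrinks the same way
    return D * (1 - t / T)
```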

SOM Training Summary


Locate the winner in the competitive layer
Adapt the weights of all units in the winner's neighborhood to increase the similarity between the input and weight vectors

Decrease the gain and neighborhood as time progresses; the full loop is sketched below
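Putting the steps together, a minimal training loop might look like this (NumPy assumed; the grid size and the values of A, D, and T are illustrative choices):

```python
import numpy as np

def train_som(data, rows=10, cols=10, A=0.8, D=5.0, T=10_000, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.random((rows * cols, data.shape[1]))          # random initial weights
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)
    for t in range(T):
        x = data[rng.integers(len(data))]                 # present a random input
        winner = np.argmin(np.sum((x - W) ** 2, axis=1))  # locate the winner
        a = A * (1 - t / T)                               # decaying gain
        r = D * (1 - t / T)                               # decaying neighborhood
        hood = np.linalg.norm(grid - grid[winner], axis=1) <= r
        W[hood] += a * (x - W[hood])                      # adapt the neighborhood
    return W
```

Trained on inputs drawn uniformly from the unit square, the returned weights spread out to cover the square, which is the behavior the following examples illustrate.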


SOM Example 1
The weight vector distribution will approximate the probability density distribution of the input space


SOM Weight Maps


[Figures: weight maps at iteration 100 and at iteration 1200]


SOM Example 2
[Figures: input distribution and weight map after 1700 iterations]


SOM 1-D Neighborhood


SOM Example 3
The weight vector distribution will approximate the probability density distribution of the input space


SOM Weight Map


[Figures: weight maps at iteration 100 and at iteration 3400]


SOM Characteristics
Dimension mapping: the dimensions of the input space are mapped to the dimensions of the output space
Variable discrimination among input patterns due to frequency of occurrence

Weight distribution tends to approximate the probability density of the input vectors

SOM Example 4
Input distribution


SOM Example 4
Initial Weight Map


SOM Example 4

[Figures: weight maps at successive stages of training]

SOM Example 5: Traveling Salesman Problem


Initial Position
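A sketch of the elastic-ring approach commonly used when applying a SOM to the TSP: a 1-D ring of units with a wraparound neighborhood is pulled toward randomly chosen cities, and reading the cities off in ring order gives an approximate tour. The setup below, including the ring size and decay schedules, is an assumption, not the slide's code:

```python
import numpy as np

def som_tsp(cities, T=20_000, A=0.8, seed=0):
    # cities: (n, 2) array of city coordinates
    rng = np.random.default_rng(seed)
    n = len(cities)
    m = 2 * n                          # ring of units, larger than city count
    D = m / 4                          # initial neighborhood radius
    ring = rng.random((m, 2))
    idx = np.arange(m)
    for t in range(T):
        x = cities[rng.integers(n)]
        winner = np.argmin(np.sum((x - ring) ** 2, axis=1))
        a, r = A * (1 - t / T), D * (1 - t / T)
        # circular distance along the ring defines the neighborhood
        dist = np.minimum(np.abs(idx - winner), m - np.abs(idx - winner))
        hood = dist <= r
        ring[hood] += a * (x - ring[hood])
    # tour order: sort cities by the index of their nearest ring unit
    nearest = [np.argmin(np.sum((c - ring) ** 2, axis=1)) for c in cities]
    return np.argsort(nearest)
```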


SOM Example 5: Traveling Salesman Problem

[Figures: positions of the ring of units at later stages of training]

SOM Example 6: Color Reduction Map


24-bit RGB


SOM Example 6: Color Reduction Map


256-color indexed
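A sketch of color reduction with a 1-D SOM over RGB space; the function name, parameter values, and pixel format ((N, 3) floats in [0, 1]) are assumptions for illustration:

```python
import numpy as np

def som_palette(pixels, n_colors=256, T=30_000, A=0.5, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.random((n_colors, 3))      # candidate palette colors
    D = n_colors / 8                   # initial neighborhood radius
    idx = np.arange(n_colors)
    for t in range(T):
        x = pixels[rng.integers(len(pixels))]
        winner = np.argmin(np.sum((x - W) ** 2, axis=1))
        a, r = A * (1 - t / T), D * (1 - t / T)
        hood = np.abs(idx - winner) <= r
        W[hood] += a * (x - W[hood])
    # index every pixel to its nearest learned color (memory-heavy for large N)
    index = np.argmin(((pixels[:, None, :] - W[None, :, :]) ** 2).sum(axis=2), axis=1)
    return W, index                    # the reduced palette and per-pixel indices
```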



SOM Example 6: Color Reduction Map

Typical color reduction map



SOM Example 6: Color Reduction Map

Statistical clustering method




SOM Example 6: Color Reduction Map

Kohonen neural network

