
# Unsupervised Learning: Part II - Self-Organizing Maps


## Self-Organizing Maps

Presented by Kohonen in 1988. A two-layer network:
- Input layer
- Competitive layer

## Develops a topological map of relationships between the input patterns

An orderly progression of inputs produces an orderly progression of outputs

Teuvo Kohonen
Dr. Eng., Emeritus Professor of the Academy of Finland; Academician. His research areas are the theory of self-organization, associative memories, neural networks, and pattern recognition, in which he has published over 300 research papers and four monographs. His fifth book is on digital computers. His more recent work is expounded in the third, extended edition (2001) of his book Self-Organizing Maps.

Since the 1960s, Professor Kohonen has introduced several new concepts to neural computing: fundamental theories of distributed associative memory and optimal associative mappings, the learning subspace method, the self-organizing feature maps (SOMs), the learning vector quantization (LVQ), novel algorithms for symbol processing such as redundant hash addressing, dynamically expanding context and a special SOM for symbolic data, and a SOM called the Adaptive-Subspace SOM (ASSOM) in which invariant-feature filters emerge. A new SOM architecture, WEBSOM, has been developed in his laboratory for exploratory textual data mining. In the largest WEBSOM implemented so far, about seven million documents have been organized in a one-million-neuron network; for smaller WEBSOMs, see the demo at http://websom.hut.fi/websom/.

http://www.cis.hut.fi/research/som-research/teuvo.html

## SOM - Training algorithm

Determine the winner of the competition by either of two methods (both find the weight vector best matching the input vector):

1. Calculate the difference between the weight and input vectors for each neuron:

   Vector difference: D_j = Σ_i (x_i - w_ji)²

   Winner: D_c = min_j (D_j)

2. Normalize the inputs (as presented to the network) and the weight vectors (as updated), then calculate the dot product:

   Dot product: D_j = Σ_i (x_i · w_ji)

   Winner: D_c = max_j (D_j)
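The two winner-selection rules above can be sketched in NumPy as follows. This is a minimal illustration, not the author's code; the function names are my own, and it assumes the difference in method 1 is the squared difference summed over components, with one weight row per competitive-layer neuron.

```python
import numpy as np

def winner_by_difference(x, W):
    """Method 1: smallest difference between input and weight vectors.

    x: input vector, shape (n,); W: weight matrix, shape (m, n).
    """
    D = np.sum((x - W) ** 2, axis=1)   # D_j = sum_i (x_i - w_ji)^2
    return np.argmin(D)                # winner c minimizes D_j

def winner_by_dot_product(x, W):
    """Method 2: largest dot product of the normalized vectors."""
    xn = x / np.linalg.norm(x)                          # normalize input
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)   # normalize weights
    D = Wn @ xn                        # D_j = sum_i (x_i * w_ji)
    return np.argmax(D)                # winner c maximizes D_j
```

For normalized vectors the two rules agree: the neuron whose weight vector points closest to the input wins under either criterion.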

## Vector Normalization

## Example - Preserve Direction

Divide each element of the vector by the magnitude of the vector.

If inputs = 6 and 8: magnitude = √(6² + 8²) = 10, giving (0.6, 0.8)

If inputs = 3 and 4: magnitude = √(3² + 4²) = 5, giving (0.6, 0.8)

Both vectors normalize to the same point: only their direction is preserved.

## Example - Preserve relationship between vectors

Add another element which places the vector on the surface of a sphere of radius N, and then divide by the magnitude of the augmented vector (which is N).

If inputs = 6 and 8, N = 15: d = √(15² - 10²) = 11.18, giving (6, 8, 11.18) / 15

If inputs = 3 and 4, N = 15: d = √(15² - 5²) = 14.14, giving (3, 4, 14.14) / 15

The two inputs now normalize to different points, so their relative magnitudes are preserved.
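A short sketch of the two normalization schemes, using the worked numbers above (the function names are illustrative, not from the original):

```python
import numpy as np

def normalize_direction(x):
    """Divide each element by the vector's magnitude (preserves direction only)."""
    return x / np.linalg.norm(x)

def normalize_on_sphere(x, N):
    """Append d = sqrt(N^2 - |x|^2) so the augmented vector lies on a sphere
    of radius N, then divide by its magnitude N (preserves relative size)."""
    d = np.sqrt(N ** 2 - np.sum(x ** 2))
    return np.append(x, d) / N

normalize_direction(np.array([6.0, 8.0]))      # (0.6, 0.8)
normalize_direction(np.array([3.0, 4.0]))      # also (0.6, 0.8) - direction only
normalize_on_sphere(np.array([6.0, 8.0]), 15)  # (6, 8, 11.18) / 15
normalize_on_sphere(np.array([3.0, 4.0]), 15)  # (3, 4, 14.14) / 15
```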

## SOM - Training algorithm

Update weights of winner (Dc) and weights of all others in the neighborhood of winner.
W_ji(t+1) = W_ji(t) + a (x_i - W_ji(t))   for all neurons in the neighborhood of the winner

The choice of neighborhood (size and dimensionality) affects network performance and function. Both the neighborhood and the gain should be decreased as time progresses.

## SOM - Neighborhoods

Gain parameter, a, should decrease as time progresses:

a = A * (1 - t/T), where A = initial gain and T = final iteration time

Neighborhood adjustment, d, should also decrease as time progresses:

d = D * (1 - t/T), where D = initial neighborhood and T = final iteration time

## SOM Training Summary

- Locate the winner of the competitive layer
- Adapt the weights of all units in the winner's neighborhood to increase the similarity between the input and weight vectors
- Decrease the gain and neighborhood as time progresses
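The training steps above can be assembled into a minimal 1-D SOM training loop. This is a sketch under stated assumptions: the parameter names A, D, and T follow the gain and neighborhood formulas in the text; the random initialization, the sampling of inputs, and the function name are my own choices.

```python
import numpy as np

def train_som(X, m, T, A=0.5, D=None, seed=0):
    """Train a 1-D SOM with m neurons on inputs X (shape: samples x features)."""
    rng = np.random.default_rng(seed)
    W = rng.random((m, X.shape[1]))              # random initial weight vectors
    D = D if D is not None else m // 2           # initial neighborhood radius
    for t in range(T):
        a = A * (1 - t / T)                      # gain decreases over time
        d = int(D * (1 - t / T))                 # neighborhood shrinks over time
        x = X[rng.integers(len(X))]              # present an input pattern
        c = np.argmin(np.sum((x - W) ** 2, axis=1))    # locate the winner
        lo, hi = max(0, c - d), min(m, c + d + 1)      # winner's neighborhood
        W[lo:hi] += a * (x - W[lo:hi])           # W(t+1) = W(t) + a (x - W(t))
    return W
```

Because each update is a convex combination of the old weight and the input, the weight vectors drift into the region occupied by the inputs, approximating their distribution.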

SOM Example 1
The weight vector distribution will approximate the probability density distribution of the input space

## SOM Weight Maps

Weight maps at iteration 100 and iteration 1200

SOM Example 2
Input distribution; weight map after 1700 iterations

## SOM 1-D Neighborhood

SOM Example 3
The weight vector distribution will approximate the probability density distribution of the input space

## SOM Weight Map

Weight maps at iteration 100 and iteration 3400

SOM Characteristics
Dimension mapping (the dimensions of the input space are mapped to the dimensions of the output space). Variable discrimination among input patterns due to their frequency of occurrence.

Weight distribution tends to approximate the probability density of the input vectors

SOM Example 4
Input distribution
Initial weight map

Weight maps at successive stages of training, starting from the initial position

## SOM Example 5 - Color Reduction Map

Input: 24-bit RGB image
Output: 256-color indexed image
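Color reduction with a SOM can be sketched as follows: train a chain of 256 neurons on RGB pixel values (scaled to [0, 1]), so the weight vectors become a 256-entry palette, then index each pixel by its winning neuron. This is an illustrative assumption about how the example was built, not the original implementation; all names here are hypothetical.

```python
import numpy as np

def som_palette(pixels, n_colors=256, T=2000, A=0.5, seed=0):
    """Learn an n_colors palette from RGB pixels (shape: pixels x 3, in [0, 1])."""
    rng = np.random.default_rng(seed)
    W = rng.random((n_colors, 3))                 # palette entries as SOM weights
    D0 = n_colors // 2                            # initial neighborhood radius
    for t in range(T):
        a = A * (1 - t / T)                       # decaying gain
        d = int(D0 * (1 - t / T))                 # shrinking neighborhood
        x = pixels[rng.integers(len(pixels))]     # present one pixel
        c = np.argmin(np.sum((x - W) ** 2, axis=1))        # winning color
        lo, hi = max(0, c - d), min(n_colors, c + d + 1)   # neighborhood
        W[lo:hi] += a * (x - W[lo:hi])            # pull palette toward the pixel
    return W

def quantize(pixels, palette):
    """Map each RGB pixel to the index of its nearest palette color."""
    dist = ((pixels[:, None, :] - palette[None, :, :]) ** 2).sum(-1)
    return np.argmin(dist, axis=1)                # indexed-color image data
```

Because frequent colors attract more palette entries, the SOM allocates finer color resolution where the image needs it most.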