Presented by
KRISHNAMOORTHI.BM. RAJKUMAR
- Introduction
- Codebook vectors
- LVQ1 algorithm
- Example for LVQ1
- LVQ2 and OLVQ
- Application of LVQ
- Issues of LVQ
- Summary and Conclusion
By Krishnamoorthi and Rajkumar
An input vector x is picked at random from the input space. If the class labels of the input vector x and a Voronoi vector w agree, the Voronoi vector w is moved in the direction of the input vector x. If on the other hand, the class labels of the input vector x and the Voronoi vector w disagree, the Voronoi vector w is moved away from the input vector x.
Procedure LVQ1

procedure LVQ1
  Let c = argmin_i ||x − m_i||, defining the nearest m_i to x, denoted m_c.
  Let 0 < α(t) < 1, with α(t) constant or decreasing monotonically with time.
  Let m_i(t) denote the sequence of values of m_i in the discrete-time domain.
  while learning cycles < rlen loop
    for each input sample x(t) loop
      m_i(t + 1) = m_i(t) for all i ≠ c
      if x and m_c belong to the same class then
        m_c(t + 1) = m_c(t) + α(t) [x(t) − m_c(t)]
      else
        m_c(t + 1) = m_c(t) − α(t) [x(t) − m_c(t)]
      end if
    end loop
  end loop
end procedure
Stopping criteria
- Codebook vectors have stabilized, or
- Maximum number of epochs has been reached.
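The procedure above can be sketched in NumPy. This is a minimal sketch; the function name, the constant learning rate, and the parameter names are illustrative assumptions, not part of the original procedure:

```python
import numpy as np

def lvq1(X, y, M, labels, alpha=0.1, epochs=100):
    """Sketch of the LVQ1 rule above.

    X, y   -- training samples (rows) and their class labels
    M      -- initial codebook vectors (rows); labels -- their classes
    alpha  -- learning rate, kept constant here (it may also decay with t)
    """
    M = M.astype(float).copy()
    for _ in range(epochs):                    # the 'rlen' learning cycles
        for x, cls in zip(X, y):
            # c = argmin_i ||x - m_i||: index of the nearest codebook vector
            c = int(np.argmin(np.linalg.norm(M - x, axis=1)))
            if labels[c] == cls:
                M[c] += alpha * (x - M[c])     # same class: move toward x
            else:
                M[c] -= alpha * (x - M[c])     # different class: move away
    return M
```

With a well-separated toy set, each winning codebook vector converges onto the samples of its own class.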
LVQ establishes a number of codebook vectors into the input space to approximate various domains of the input vector by quantized values. The labeled samples are fed into a generalization module that finds a set of codebook vectors representing knowledge about the class membership of the training set.
Example
4 input vectors, 2 classes.
Step 1: Assign a target vector to each input. Target vectors should be binary: each contains only zeros except for a single 1.
Step 2: Choose how many subclasses will make up each of the 2 classes, e.g. 2 for each class, giving 4 prototype vectors (competitive neurons). Note: typically the number of prototype vectors is much smaller than the number of input vectors.
[Matrix display: the four 2-D input vectors p1–p4 with their binary target vectors t1–t4.]
Example (contd.)
W2 = [1 1 0 0
      0 0 1 1]

Step 3: W1 is initialized to small random values. The weight vectors belonging to the competitive neurons that define class 1 are marked with circles; those for class 2 with squares.
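The role of W2 can be made concrete with a short forward-pass sketch. The prototype matrix W1 and the input below are hypothetical numbers chosen for illustration:

```python
import numpy as np

# W2 from the example: neurons 1-2 form class 1, neurons 3-4 form class 2.
W2 = np.array([[1, 1, 0, 0],
               [0, 0, 1, 1]])

def lvq_classify(x, W1, W2):
    """Two-layer LVQ forward pass: the competitive layer selects the
    nearest prototype (row of W1); W2 maps that subclass to its class."""
    winner = np.argmin(np.linalg.norm(W1 - x, axis=1))  # competitive layer
    a1 = np.zeros(W1.shape[0])
    a1[winner] = 1.0                                    # one-hot winning neuron
    return W2 @ a1                                      # class indicator vector

# Hypothetical prototypes, one row per competitive neuron.
W1 = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
```

An input near the third prototype wins neuron 3, and W2 reports class 2.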
This is the correct class, so the weight vector is moved toward the input vector (learning rate α = 0.5):

1w1(1) = 1w1(0) + α (p1 − 1w1(0))
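Plugging hypothetical numbers into this update shows the geometry: with α = 0.5, the winning weight vector moves exactly to the midpoint between its old position and the input. The vector values below are for illustration only, not taken from the slides:

```python
import numpy as np

alpha = 0.5                       # learning rate from the slide
p1 = np.array([0.0, 1.0])         # hypothetical input vector
w = np.array([0.25, 0.75])        # hypothetical initial weight 1w1(0)

# correct class, so move toward the input: w(1) = w(0) + alpha * (p1 - w(0))
w_new = w + alpha * (p1 - w)      # midpoint of w and p1
```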
[Figure: the input p1 is presented; the four initial weight vectors 1w1(0)–4w1(0) are plotted among the inputs p1–p4.]
Iteration 1 illustrated
[Figure: after iteration 1, the winning weight vector has moved toward the presented input; the remaining weight vectors are unchanged at their initial positions.]
[Figure: at convergence, the weight vectors 1w1(∞)–4w1(∞) have each settled near one of the inputs p1–p4.]
Stopping rule
Neural-network algorithms often overlearn: with extensive learning, the recognition accuracy first improves but may then very slowly start decreasing again. Various reasons account for this phenomenon; in the present case, especially with a limited number of training samples being recycled during learning, the codebook vectors become very specifically tuned to the training data, and their ability to generalize to other data suffers as a result.
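One common guard against this overlearning is early stopping on a held-out set. The sketch below is generic; the function names and parameter values are illustrative assumptions, not from the original slides:

```python
def train_with_early_stopping(train_step, evaluate, max_epochs=1000, patience=10):
    """Run train_step() once per epoch; evaluate() returns held-out accuracy.

    Stop when accuracy has not improved for `patience` epochs, so learning
    ends before the codebook vectors over-tune to the training data."""
    best_acc, best_epoch = -1.0, 0
    for epoch in range(max_epochs):
        train_step()
        acc = evaluate()
        if acc > best_acc:
            best_acc, best_epoch = acc, epoch
        elif epoch - best_epoch >= patience:
            break                  # accuracy stalled or decreasing: stop
    return best_acc
```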
LVQ2
A second, improved, LVQ algorithm known as LVQ2 is sometimes preferred because it comes closer in effect to Bayesian decision theory.
LVQ2
- LVQ2 is applied only after LVQ1 has been applied, using a small learning rate and a small number of iterations.
- It optimizes the relative distances of the codebook vectors from the class borders.
- The results are typically more robust.
LVQ2 - Continued
The same weight/vector update equations are used as in the standard LVQ, but they only get applied under certain conditions, namely when:
1. The input vector x is incorrectly classified by the associated Voronoi vector wI(x);
2. The next-nearest Voronoi vector wS(x) does give the correct classification; and
3. The input vector x is sufficiently close to the decision boundary (the perpendicular bisector plane) between wI(x) and wS(x).
LVQ2 - Continued
In this case, both vectors wI(x) and wS(x) are updated (using the incorrect- and correct-classification update equations, respectively). Various other variations on this theme exist (LVQ3, etc.), and this remains a fruitful research area for building better classification systems.
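The three conditions can be sketched as a single update step. The distance-ratio test against (1 − w)/(1 + w) below is the standard LVQ2.1-style "window" formulation of "sufficiently close to the decision boundary"; the parameter values are illustrative assumptions:

```python
import numpy as np

def lvq2_step(x, cls, M, labels, alpha=0.05, window=0.3):
    """One LVQ2-style update on codebook vectors M (modified in place)."""
    d = np.linalg.norm(M - x, axis=1)
    i, s = np.argsort(d)[:2]                 # nearest and next-nearest vectors
    # conditions 1 and 2: nearest is wrong, next-nearest is right
    if labels[i] == cls or labels[s] != cls:
        return
    # condition 3: x lies within a window around the midplane between them
    if min(d[i] / d[s], d[s] / d[i]) > (1 - window) / (1 + window):
        M[i] -= alpha * (x - M[i])           # incorrect vector: move away
        M[s] += alpha * (x - M[s])           # correct vector: move toward x
```

Samples far from the border fail the window test and cause no update, which is what keeps LVQ2 focused on tuning the decision boundary.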
mc(t + 1) = [1 − s(t) αc(t)] mc(t) + s(t) αc(t) x(t)

where s(t) = +1 if the classification is correct and s(t) = −1 if it is wrong. We first directly see that mc(t) is statistically independent of x(t). It may also be obvious that the statistical accuracy of the learned codebook vector values is optimal if the effects of the corrections made at different times are of equal weight.
Notice that mc(t + 1) contains a trace from x(t) through the last term in the equation given previously, and traces from the earlier inputs x(t'), t' = 1, 2, ..., t − 1, through mc(t). The (absolute) magnitude of the last trace from x(t) is scaled down by the factor αc(t), while, for instance, the trace from x(t − 1) is scaled down by [1 − s(t) αc(t)] αc(t − 1). Now first we stipulate that these two scalings must be identical:

αc(t) = [1 − s(t) αc(t)] αc(t − 1)
Thus the optimal values of αc(t) are determined by the recursion:

αc(t) = αc(t − 1) / (1 + s(t) αc(t − 1))
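The recursion translates directly into code. The cap at 1.0 below is an added assumption: since s(t) = −1 makes the rate grow, an unbounded rate would eventually destabilize the updates, so implementations keep α bounded (the exact bound used in practice may differ):

```python
def olvq1_alpha(alpha_prev, s, cap=1.0):
    """Update the per-vector learning rate: s = +1 when the winning codebook
    vector classified x correctly, s = -1 when it did not."""
    alpha = alpha_prev / (1.0 + s * alpha_prev)
    return min(alpha, cap)        # keep the rate bounded (illustrative cap)
```

Note that α shrinks after correct classifications and grows after errors, so each codebook vector keeps its own individually optimized rate.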
LVQ - Issues

- Random initialization: dead neurons that start too far away from the training examples never win the competition.
- Number of codebook vectors: it is usually set proportional to the number of input vectors in each class, and also depends on the complexity of the data and on the learning rate.
Applications
- Image analysis and OCR
- Speech analysis and recognition
- Signal processing and radar
- Industrial and real-world measurements and robotics
- Mathematical problems
One Application
LVQ1 was applied for 1000 iterations and OLVQ for 100 iterations; the accuracy rate was 78%.
Summary
- LVQ classifies input vectors using a competitive layer to find subclasses of input vectors, which are then combined into classes.
- It can classify any set of input vectors (linearly separable and non-linearly separable, convex regions and non-convex regions) provided that:
  - there are enough neurons in the competitive layer, and
  - each class is assigned enough subclasses (i.e., enough competitive neurons).
References
- S. Haykin, Neural Networks.
- T. Kohonen, J. Kangas, J. Laaksonen, and K. Torkkola, LVQ-PAK: A program package for the correct application of Learning Vector Quantization algorithms.