www.myreaders.info/ , RC Chakraborty, e-mail rcchak@gmail.com , Aug. 10, 2010


http://www.myreaders.info/html/soft_computing.html
Associative Memory : Soft Computing Course Lecture 21 - 24, notes, slides


Associative Memory
Soft Computing
Associative Memory (AM), topics : Description, content addressability, working, classes of AM : auto and hetero, AM related terms - encoding or memorization, retrieval or recollection, errors and noise, performance measures - memory capacity and content-addressability. Associative memory models : network architectures - linear associator, Hopfield model and bi-directional model (BAM). Auto-associative memory (auto-correlators) : how to store patterns? how to retrieve patterns? recognition of noisy patterns. Bi-directional hetero-associative memory (hetero-correlators) : BAM operations - retrieve the nearest pair, addition and deletion of pattern pairs, energy function for BAM - working of Kosko's BAM, incorrect recall of pattern, multiple training encoding strategy - augmentation matrix, generalized correlation matrix and algorithm.

Topics
(Lectures 21, 22, 23, 24 : 4 hours)

1. Associative Memory (AM) : Description
   Content addressability; Working of AM; AM classes : auto and hetero; AM related terms - encoding or memorization, retrieval or recollection, errors and noise; Performance measures - memory capacity and content-addressability.

2. Associative Memory Models
   AM classes : auto and hetero; AM models; Network architectures - Linear associator, Hopfield model and Bi-directional model (BAM).

3. Auto-associative Memory (auto-correlators)
   How to store patterns? How to retrieve patterns? Recognition of noisy patterns.

4. Bi-directional Hetero-associative Memory (hetero-correlators)
   BAM operations - retrieve the nearest pair; Addition and deletion of pattern pairs; Energy function for BAM - working of Kosko's BAM, incorrect recall of pattern; Multiple training encoding strategy - augmentation matrix, generalized correlation matrix and algorithm.

5. References


What is Associative Memory ?

An associative memory is a content-addressable structure that maps a set of input patterns to a set of output patterns.

A content-addressable structure is a type of memory that allows the recall of data based on the degree of similarity between the input pattern and the patterns stored in memory.

There are two types of associative memory : auto-associative and hetero-associative.

An auto-associative memory retrieves a previously stored pattern that most closely resembles the current pattern.

In a hetero-associative memory, the retrieved pattern is, in general, different from the input pattern not only in content but possibly also in type and format.

Neural networks are used to implement these associative memory models; such networks are called NAM (Neural Associative Memory).

1. Associative Memory

An associative memory is a content-addressable structure that maps a set of input patterns to a set of output patterns. A content-addressable structure refers to a memory organization where the memory is accessed by its content, as opposed to an explicit address as in the traditional computer memory system. Associative memories are of two types : auto-associative and hetero-associative.

An auto-associative memory retrieves a previously stored pattern that most closely resembles the current pattern.

In a hetero-associative memory, the retrieved pattern is, in general, different from the input pattern not only in content but possibly also in type and format.

1.1 Description of Associative Memory

An associative memory is a content-addressable structure that allows the recall of data based on the degree of similarity between the input pattern and the patterns stored in memory.

Example : Associative Memory

The figure below shows a memory containing the names of several people. If the given memory is content-addressable, then using the erroneous string "Crhistpher Columbos" as key is sufficient to retrieve the correct name "Christopher Columbus". In this sense, this type of memory is robust and fault-tolerant because it exhibits some form of error-correction capability.

[Fig. A content-addressable memory, input and output. Stored names : Alex Graham Bell, Thomas Edison, Christopher Columbus, Albert Einstein, Charles Darwin, Blaise Pascal, Marco Polo, Neil Armstrong, Sigmund Freud. Input key : "Crhistpher Columbos"; output : "Christopher Columbus".]

Note : An associative memory is accessed by its content, as opposed to an explicit address in the traditional computer memory system. The memory allows the recall of information based on partial knowledge of its contents.

Associative memory is a system that associates two patterns (X, Y) such that when one is encountered, the other can be recalled. Associative memories are of two types : auto-associative memory and hetero-associative memory.

Auto-associative memory : Consider y[1], y[2], y[3], . . . , y[M] to be the M stored pattern vectors, and let y(m) be the components of these vectors, representing features extracted from the patterns. The auto-associative memory will output a pattern vector y(m) when inputting a noisy or incomplete version of y(m).

Hetero-associative memory : Here the memory function is more general. Consider that we have M key-response pairs {c(1), y(1)}, {c(2), y(2)}, . . . , {c(M), y(M)}. The hetero-associative memory will output a pattern vector y(m) if a noisy or incomplete version of the key c(m) is given.

Neural networks are used to implement associative memory models. The well-known neural associative memory models are :
- the Linear associator, which is the simplest artificial neural associative memory;
- the Hopfield model and the Bidirectional Associative Memory (BAM), which are the other popular ANN models used as associative memories.
These models follow different neural network architectures to memorize information.

1.2 Working of Associative Memory

Example

An associative memory is a storehouse of associated patterns which are encoded in some form. When the storehouse is triggered or excited with a pattern, the associated pattern pair is recalled or appears at the output. The input could be an exact, distorted or partial representation of a stored pattern. The figure below illustrates the working of an associative memory.

[Fig. Working of an associative memory. Associated pattern pairs, e.g. (7, 4), are stored in the memory; the association is represented by an arrow symbol. When the memory is triggered with an input pattern, the pattern associated with it is retrieved automatically at the output.]

1.3 Associative Memory - Classes

As stated before, there are two classes of associative memory : auto-associative and hetero-associative memory.

An auto-associative memory, also known as an auto-associative correlator, is used to retrieve a previously stored pattern that most closely resembles the current pattern.

A hetero-associative memory, also known as a hetero-associative correlator, is used to retrieve a pattern that is, in general, different from the input pattern not only in content but possibly also in type and format.

Examples

[Fig. Hetero- and auto-associative memory correlators. Hetero-associative memory : an input pattern is presented and the associated pattern is recalled. Auto-associative memory : a distorted pattern is presented and the perfect pattern is recalled.]

1.4 Related Terms

Here explained : Encoding or memorization, Retrieval or recollection, Errors and noise, Memory capacity and Content-addressability.

Encoding or memorization

Building an associative memory means constructing a connection weight matrix W such that when an input pattern is presented, the stored pattern associated with the input pattern is retrieved. This process of constructing the connection weight matrix is called encoding.

During encoding, for an associated pattern pair (Xk, Yk), the weight values of the correlation matrix Wk are computed as

    (wij)k = (xi)k (yj)k

where (xi)k represents the i-th component of pattern Xk and (yj)k represents the j-th component of pattern Yk, for i = 1, 2, . . . , m and j = 1, 2, . . . , n.

Construction of the connection weight matrix W is accomplished by summing up the individual correlation matrices Wk, i.e.,

    W = α Σ (k=1 to p) Wk

where α is the proportionality or normalizing constant.
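A minimal sketch of this encoding step in Python with NumPy follows; the function name, the example vectors and the choice α = 1/p are illustrative assumptions, not part of the original notes.

import numpy as np

def encode(pattern_pairs):
    # pattern_pairs : list of (Xk, Yk) bipolar vectors (+1 / -1)
    # each correlation matrix Wk is the outer product (wij)k = (xi)k (yj)k
    p = len(pattern_pairs)
    m = len(pattern_pairs[0][0])
    n = len(pattern_pairs[0][1])
    W = np.zeros((m, n))
    for Xk, Yk in pattern_pairs:
        Wk = np.outer(Xk, Yk)        # individual correlation matrix Wk
        W += Wk                      # summing up the individual matrices
    return W / p                     # alpha = 1/p, the normalizing constant

# Example with two short associated bipolar pattern pairs (illustrative values)
pairs = [(np.array([ 1, -1, 1]), np.array([ 1, -1])),
         (np.array([-1,  1, 1]), np.array([-1,  1]))]
W = encode(pairs)
print(W)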

Retrieval or recollection

After memorization, the process of retrieving a stored pattern, given an input pattern, is called decoding. Given an input pattern X, the decoding or recollection is accomplished by first computing the net input to the output units using

    input_j = Σ (i=1 to m) xi wij

where input_j is the weighted sum of the input or activation values of node j, for j = 1, 2, . . . , n, and then determining the output of the units using a bipolar output function :

    Yj = +1 if input_j ≥ θj
         -1 otherwise

where θj is the threshold value of output neuron j.
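A matching sketch of this decoding step in Python/NumPy; the helper name recall and the default zero threshold vector are assumptions used only for illustration.

import numpy as np

def recall(W, X, theta=None):
    # net input to output node j : input_j = sum_i x_i * w_ij
    net = X @ W
    if theta is None:
        theta = np.zeros(net.shape)          # threshold of each output neuron
    # bipolar output function : +1 if input_j >= theta_j, else -1
    return np.where(net >= theta, 1, -1)

# Example usage with a weight matrix W built as in the encoding sketch above
# Y = recall(W, np.array([1, -1, 1]))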

Errors and noise

The input pattern may contain errors and noise, or may be an incomplete version of some previously encoded pattern. When a corrupted input pattern is presented, the network will retrieve the stored pattern that is closest to the actual input pattern. The presence of noise or errors results only in a mere decrease, rather than total degradation, in the performance of the network. Thus, associative memories are robust and fault tolerant because of the many processing elements performing highly parallel and distributed computations.

Performance Measures

The memory capacity and content-addressability are the two performance measures of an associative memory for correct retrieval. These two performance measures are related to each other.

Memory capacity refers to the maximum number of associated pattern pairs that can be stored and correctly retrieved.

Content-addressability is the ability of the network to retrieve the correct stored pattern. If the input patterns are mutually orthogonal, perfect retrieval is possible. If the stored input patterns are not mutually orthogonal, non-perfect retrieval can happen due to crosstalk among the patterns; a small numeric check of this condition is sketched below.
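The orthogonality condition can be checked numerically. The Python/NumPy sketch below (the vectors are illustrative assumptions) shows that mutually orthogonal bipolar patterns have zero dot products, while non-orthogonal ones produce the crosstalk terms that can degrade recall.

import numpy as np

A1 = np.array([ 1,  1, -1, -1])   # A1 and A2 are mutually orthogonal
A2 = np.array([ 1, -1,  1, -1])
A3 = np.array([ 1,  1,  1, -1])   # A3 is not orthogonal to A1

print(A1 @ A2)   # 0 -> no crosstalk between A1 and A2
print(A1 @ A3)   # 2 -> nonzero crosstalk, recall of A1 or A3 may be imperfect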

2. Associative Memory Models

An associative memory is a system which stores mappings of specific input representations to specific output representations. An associative memory "associates" two patterns such that when one is encountered, the other can be reliably recalled. Most associative memory implementations are realized as connectionist networks.

The simplest associative memory model is the Linear associator, which is a feed-forward type of network. It has very low memory capacity and is therefore not much used. The popular models are the Hopfield model and the Bi-directional Associative Memory (BAM) model.

The network architectures of these models are presented in this section.

2.1 Associative Memory Models

The simplest and among the first studied associative memory models is the Linear associator. It is a feed-forward type of network where the output is produced in a single feed-forward computation. It can be used as an auto-associator as well as a hetero-associator, but it possesses a very low memory capacity and is therefore not much used. The popular associative memory models are the Hopfield model and the Bi-directional Associative Memory (BAM) model.

The Hopfield model is an auto-associative memory proposed by John Hopfield in 1982. It is an ensemble of simple processing units that have fairly complex collective computational abilities and behavior. The Hopfield model computes its output recursively in time until the system becomes stable. Hopfield networks are designed using bipolar units and a learning procedure.

The Bi-directional associative memory (BAM) model is similar to the linear associator, but the connections are bi-directional and therefore allow forward and backward flow of information between the layers. The BAM model can perform both auto-associative and hetero-associative recall of stored information.

The network architectures of these three models are described in the next few slides.

2.2 Network Architectures of AM Models

The neural associative memory models follow different neural network architectures to memorize information. The network architectures are either single layer or two layers.

The Linear associator model is a feed-forward type network; it consists of two layers of processing units, one serving as the input layer while the other as the output layer.

The Hopfield model is a single layer of processing elements where each unit is connected to every other unit in the network other than itself.

The Bi-directional associative memory (BAM) model is similar to that of the linear associator, but the connections are bidirectional.

In this section, the neural network architectures of these models and the construction of the corresponding connection weight matrix W of the associative memory are illustrated.

Linear Associator Model (two layers)

It is a feed-forward type network where the output is produced in a single feed-forward computation. The model consists of two layers of processing units, one serving as the input layer while the other as the output layer. The inputs are directly connected to the outputs via a series of weights; the links carrying weights connect every input to every output. The sum of the products of the weights and the inputs is calculated in each neuron node.

[Fig. Linear associator model. Input units x1, x2, . . . , xn are connected to output units y1, y2, . . . , ym through the weights wij.]

All n input units are connected to all m output units via the connection weight matrix W = [wij]n x m, where wij denotes the strength of the unidirectional connection from the i-th input unit to the j-th output unit.

The connection weight matrix stores the p different associated pattern pairs {(Xk, Yk) | k = 1, 2, . . . , p}. Building an associative memory is constructing the connection weight matrix W such that when an input pattern is presented, the stored pattern associated with the input pattern is retrieved.

Encoding : The process of constructing the connection weight matrix is called encoding. During encoding, the weight values of the correlation matrix Wk for an associated pattern pair (Xk, Yk) are computed as

    (wij)k = (xi)k (yj)k

where (xi)k is the i-th component of pattern Xk and (yj)k is the j-th component of pattern Yk, for i = 1, 2, . . . , m and j = 1, 2, . . . , n.

Weight matrix : Construction of the weight matrix W is accomplished by summing those individual correlation matrices Wk, i.e.,

    W = α Σ (k=1 to p) Wk

where α is the constant of proportionality, for normalizing, usually set to 1/p to store p different associated pattern pairs.

Decoding : After memorization, the network can be used for retrieval. The process of retrieving a stored pattern is called decoding. Given an input pattern X, the decoding or retrieving is accomplished by first computing the net input

    input_j = Σ (i=1 to m) xi wij

where input_j stands for the weighted sum of the input or activation values of node j, for j = 1, 2, . . . , n, and xi is the i-th component of pattern Xk, and then determining the output of the units using the bipolar output function :

    Yj = +1 if input_j ≥ θj
         -1 otherwise

where θj is the threshold value of output neuron j.

Note : The output units behave like linear threshold units that compute a weighted sum of the input and produce -1 or +1 depending on whether the weighted sum is below or above a certain threshold value.

Performance : The input pattern may contain errors and noise, or may be an incomplete version of some previously encoded pattern. When such a corrupt input pattern is presented, the network will retrieve the stored pattern that is closest to the actual input pattern. Therefore, the linear associator is robust and fault tolerant. The presence of noise or error results in a mere decrease rather than total degradation in the performance of the network.

Auto-associative Memory Model - Hopfield Model (single layer)

Auto-associative memory means patterns, rather than associated pattern pairs, are stored in memory. The Hopfield model is a one-layer unidirectional auto-associative memory.

[Fig. Hopfield model with four units. Each unit receives an external input Ij, produces an output Vj, and is connected to every other unit through the connection weights wij (w12, w13, w14, w21, . . . , w43).]

The model consists of a single layer of processing elements where each unit is connected to every other unit in the network but not to itself. The connection weight between or from neuron j to i is given by a number wij. The collection of all such numbers is represented by the weight matrix W, which is square and symmetric, i.e., wij = wji for i, j = 1, 2, . . . , m.

Each unit has an external input Ij which leads to a modification in the computation of the net input to the units as

    input_j = Σ (i=1 to m) xi wij + Ij     for j = 1, 2, . . . , m

where xi is the i-th component of pattern Xk.

Each unit acts as both an input and an output unit. Like the linear associator, a single associated pattern pair is stored by computing the weight matrix as Wk = Xk^T Yk, where Yk = Xk.

Weight Matrix : Construction of the weight matrix W is accomplished by summing those individual correlation matrices, i.e.,

    W = α Σ (k=1 to p) Wk

where α is the constant of proportionality, for normalizing, usually set to 1/p to store p different associated pattern pairs. Since the Hopfield model is an auto-associative memory model, it is the patterns, rather than the associated pattern pairs, that are stored in memory.

Decoding : After memorization, the network can be used for retrieval. The process of retrieving a stored pattern is called decoding. Given an input pattern X, the decoding or retrieving is accomplished by first computing the net input

    input_j = Σ (i=1 to m) xi wij

where input_j stands for the weighted sum of the input or activation values of node j, for j = 1, 2, . . . , n, and xi is the i-th component of pattern Xk, and then determining the output of the units using the bipolar output function :

    Yj = +1 if input_j ≥ θj
         -1 otherwise

where θj is the threshold value of output neuron j.

Note : The output units behave like linear threshold units that compute a weighted sum of the input and produce -1 or +1 depending on whether the weighted sum is below or above a certain threshold value.

Decoding in the Hopfield model is achieved by a collective and recursive relaxation search for a stored pattern, given an initial stimulus pattern. Given an input pattern X, decoding is accomplished by computing the net input to the units and determining the output of those units using the output function to produce the pattern X'. The pattern X' is then fed back to the units as an input pattern to produce the pattern X''. The pattern X'' is again fed back to the units to produce the pattern X'''. The process is repeated until the network stabilizes on a stored pattern where further computations do not change the output of the units.

In the next section, the working of an auto-correlator is explained : how to store patterns, how to recall a pattern from the stored patterns, and how to recognize a noisy pattern.
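A small sketch of the recursive relaxation search described above, in Python/NumPy; the function name, the zero threshold and the keep-previous-state rule at zero net input are assumptions used only for illustration.

import numpy as np

def hopfield_decode(W, X, max_steps=100):
    # W : square symmetric weight matrix with zero diagonal
    # X : initial bipolar stimulus pattern (+1 / -1)
    state = np.array(X, dtype=int)
    for _ in range(max_steps):
        net = state @ W                                   # net input to every unit
        new_state = np.where(net > 0, 1, np.where(net < 0, -1, state))
        if np.array_equal(new_state, state):              # network has stabilized
            return new_state
        state = new_state                                 # feed the pattern back
    return state

# Example : store one bipolar pattern and recall it from a corrupted copy
P = np.array([1, -1, 1, -1])
W = np.outer(P, P).astype(int)
np.fill_diagonal(W, 0)                                    # no self-connections
print(hopfield_decode(W, np.array([1, 1, 1, -1])))        # -> [ 1 -1  1 -1]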

Bidirectional Associative Memory (two-layer)

Kosko (1988) extended the Hopfield model, which is a single-layer recurrent network, by incorporating an additional layer to perform auto-associations as well as hetero-associations on the stored memories. The network structure of the bidirectional associative memory model is similar to that of the linear associator, but the connections are bidirectional, i.e.,

    wij = wji ,   for i = 1, 2, . . . , n and j = 1, 2, . . . , m.

[Fig. Bidirectional Associative Memory model. Input neurons x1, x2, . . . , xn and output neurons y1, y2, . . . , ym are connected through the bidirectional weights wij.]

In the bidirectional associative memory, a single associated pattern pair is stored by computing the weight matrix as Wk = Xk^T Yk. The construction of the connection weight matrix W, to store p different associated pattern pairs simultaneously, is accomplished by summing up the individual correlation matrices Wk, i.e.,

    W = α Σ (k=1 to p) Wk

where α is the proportionality or normalizing constant.

3. Auto-associative Memory (auto-correlators)

In the previous section, the structure of the Hopfield model has been explained. It is an auto-associative memory model, which means patterns, rather than associated pattern pairs, are stored in memory. In this section, the working of an auto-associative memory (auto-correlator) is illustrated using some examples :
- how to store the patterns,
- how to retrieve / recall a pattern from the stored patterns, and
- how to recognize a noisy pattern.

How to Store Patterns : Example

Consider the three bipolar patterns A1, A2, A3 to be stored as an auto-correlator :

    A1 = ( -1,  1, -1,  1 )
    A2 = (  1,  1,  1, -1 )
    A3 = ( -1, -1, -1,  1 )

Note that the outer product of two vectors U = (U1, U2, U3, U4) and V = (V1, V2, V3) is

    U^T V =   U1V1  U1V2  U1V3
              U2V1  U2V2  U2V3
              U3V1  U3V2  U3V3
              U4V1  U4V2  U4V3

Thus, the outer products of each of these three bipolar patterns are

    [A1]^T 4x1 [A1] 1x4 =    1 -1  1 -1
                            -1  1 -1  1
                             1 -1  1 -1
                            -1  1 -1  1

    [A2]^T 4x1 [A2] 1x4 =    1  1  1 -1
                             1  1  1 -1
                             1  1  1 -1
                            -1 -1 -1  1

    [A3]^T 4x1 [A3] 1x4 =    1  1  1 -1
                             1  1  1 -1
                             1  1  1 -1
                            -1 -1 -1  1

Therefore the connection matrix is

    T = [t i j] = Σ (i=1 to 3) [Ai]^T 4x1 [Ai] 1x4 =    3  1  3 -3
                                                         1  3  1 -1
                                                         3  1  3 -3
                                                        -3 -1 -3  3

This is how the patterns are stored.
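The connection matrix T above can be reproduced with a few lines of Python/NumPy (a sketch; the variable names are illustrative):

import numpy as np

A1 = np.array([-1,  1, -1,  1])
A2 = np.array([ 1,  1,  1, -1])
A3 = np.array([-1, -1, -1,  1])

# T = sum of the outer products [Ai]^T_4x1 [Ai]_1x4
T = sum(np.outer(A, A) for A in (A1, A2, A3))
print(T)
# [[ 3  1  3 -3]
#  [ 1  3  1 -1]
#  [ 3  1  3 -3]
#  [-3 -1 -3  3]]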

Retrieve a Pattern from the Stored Patterns (ref. previous slide)

The previous slide shows the connection matrix T of the three bipolar patterns A1, A2, A3 stored as

    T = [t i j] = Σ (i=1 to 3) [Ai]^T 4x1 [Ai] 1x4 =    3  1  3 -3
                                                         1  3  1 -1
                                                         3  1  3 -3
                                                        -3 -1 -3  3

and one of the three stored patterns is A2 = ( 1, 1, 1, -1 ).

Retrieve or recall this pattern A2 from the three stored patterns. The recall equation is

    a_j^new = f ( α_j , a_j^old )   for j = 1, 2, . . . , p,   where α_j = Σ_i a_i t_ij

and f(α, β) is the two-parameter bipolar threshold function : f = +1 if α > 0, f = -1 if α < 0, and f = β if α = 0.

Computation of the recall equation with A2 yields α_j and then a_j^new :

    i =      1       2       3        4        α_j     a_j^new
    j = 1 :  1x3  +  1x1  +  1x3   + -1x(-3) =  10        1
    j = 2 :  1x1  +  1x3  +  1x1   + -1x(-1) =   6        1
    j = 3 :  1x3  +  1x1  +  1x3   + -1x(-3) =  10        1
    j = 4 :  1x(-3)+ 1x(-1)+ 1x(-3)+ -1x3    = -10       -1

The values of ( a1, a2, a3, a4 )^new give the vector pattern ( 1, 1, 1, -1 ), which is A2. This is how to retrieve or recall a pattern from the stored patterns. Similarly, retrieval of the vector pattern A3 yields

    ( a1^new, a2^new, a3^new, a4^new ) = ( -1, -1, -1, 1 ) = A3
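A sketch of this recall equation in Python/NumPy; the function name and the explicit handling of α = 0 (keep the old component) are assumptions used only for illustration.

import numpy as np

def autocorrelator_recall(T, a):
    # a_j_new = f(alpha_j, a_j_old) with alpha_j = sum_i a_i * t_ij
    alpha = a @ T
    return np.where(alpha > 0, 1, np.where(alpha < 0, -1, a))

T = np.array([[ 3,  1,  3, -3],
              [ 1,  3,  1, -1],
              [ 3,  1,  3, -3],
              [-3, -1, -3,  3]])

A2 = np.array([1, 1, 1, -1])
print(autocorrelator_recall(T, A2))   # -> [ 1  1  1 -1], i.e. A2 is recalled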

Recognition of Noisy Patterns (ref. previous slide)

Consider a vector A' = ( 1, 1, 1, 1 ), which is a noisy presentation of one among the stored patterns. We first find the proximity of the noisy vector to the stored patterns using the Hamming distance measure.

Note that the Hamming distance (HD) of a vector X from Y, where X = (x1, x2, . . . , xn) and Y = (y1, y2, . . . , yn), is given by

    HD (X, Y) = Σ (i=1 to n) | xi - yi |

The HDs of A' from each of the stored patterns A1, A2, A3 are

    HD (A', A1) = |1-(-1)| + |1-1| + |1-(-1)| + |1-1| = 4
    HD (A', A2) = 2
    HD (A', A3) = 6

Therefore the vector A' is closest to A2 and so resembles it; in other words, the vector A' is a noisy version of the vector A2.

Computation of the recall equation using the vector A' yields :

    i =      1       2       3        4        α_j     a_j^new
    j = 1 :  1x3  +  1x1  +  1x3   +  1x(-3) =   4        1
    j = 2 :  1x1  +  1x3  +  1x1   +  1x(-1) =   4        1
    j = 3 :  1x3  +  1x1  +  1x3   +  1x(-3) =   4        1
    j = 4 :  1x(-3)+ 1x(-1)+ 1x(-3)+  1x3    =  -4       -1

The values of ( a1, a2, a3, a4 )^new give the vector pattern ( 1, 1, 1, -1 ), which is A2.

Note : In the presence of noise or in case of partial representation of vectors, an auto-correlator results in the refinement of the pattern or removal of noise to retrieve the closest matching stored pattern.
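The Hamming-distance check and the recall of the noisy vector can be verified with the short Python/NumPy sketch below; the variable names are illustrative, and the same threshold rule as the previous sketch is reused.

import numpy as np

def hamming(x, y):
    # HD(x, y) = sum_i |x_i - y_i| for bipolar vectors
    return int(np.sum(np.abs(x - y)))

T = np.array([[ 3,  1,  3, -3],
              [ 1,  3,  1, -1],
              [ 3,  1,  3, -3],
              [-3, -1, -3,  3]])
stored = {"A1": np.array([-1,  1, -1, 1]),
          "A2": np.array([ 1,  1,  1, -1]),
          "A3": np.array([-1, -1, -1, 1])}

A_noisy = np.array([1, 1, 1, 1])
for name, A in stored.items():
    print(name, hamming(A_noisy, A))         # A1: 4, A2: 2, A3: 6

alpha = A_noisy @ T                           # (4, 4, 4, -4)
recalled = np.where(alpha > 0, 1, np.where(alpha < 0, -1, A_noisy))
print(recalled)                               # [ 1  1  1 -1] -> A2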

4. Bidirectional Hetero-associative Memory

The Hopfield one-layer unidirectional auto-associators have been discussed in the previous section. Kosko (1987) extended this network to a two-layer bidirectional structure called Bidirectional Associative Memory (BAM), which can achieve hetero-association. An important performance attribute of the BAM is its ability to recall stored pairs particularly in the presence of noise.

Definition : If the associated pattern pairs (X, Y) are different and if the model recalls a pattern Y given a pattern X, or vice-versa, then it is termed a hetero-associative memory.

This section illustrates the bidirectional associative memory :
- Operations (retrieval, addition and deletion),
- Energy function (Kosko's correlation matrix, incorrect recall of pattern),
- Multiple training encoding strategy (Wang's generalized correlation matrix).

4.1 Bidirectional Associative Memory (BAM) Operations

BAM is a two-layer nonlinear neural network. Denote one layer as field A with elements Ai and the other layer as field B with elements Bi. The basic coding procedure of the discrete BAM is as follows.

Consider N training pairs { (A1, B1), (A2, B2), . . , (Ai, Bi), . . , (AN, BN) } where Ai = (ai1, ai2, . . . , ain) and Bi = (bi1, bi2, . . . , bip), and aij, bij are either in the ON or OFF state; in binary mode, ON = 1 and OFF = 0, and in bipolar mode, ON = 1 and OFF = -1.

The original correlation matrix of the BAM is

    M0 = Σ (i=1 to N) [ Xi^T ] [ Yi ]

where Xi = (xi1, xi2, . . . , xin) and Yi = (yi1, yi2, . . . , yip), and xij (yij) is the bipolar form of aij (bij).

The energy function E for the pair (α, β) and correlation matrix M is

    E = - α M β^T

With this background, the decoding process, which means how to retrieve the nearest pattern pair given any pair, and the addition and deletion of pattern pairs are illustrated in the next few slides.
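A small Python/NumPy sketch of this coding step and of the energy function; the helper names and the example pairs are assumptions used only for illustration.

import numpy as np

def bam_encode(X_list, Y_list):
    # M0 = sum_i Xi^T Yi over the bipolar training pairs
    return sum(np.outer(X, Y) for X, Y in zip(X_list, Y_list))

def bam_energy(alpha, beta, M):
    # E = - alpha M beta^T for a pattern pair (alpha, beta)
    return -float(alpha @ M @ beta)

# Example with two short bipolar pairs (illustrative values)
X = [np.array([ 1, -1, 1]), np.array([-1, 1, 1])]
Y = [np.array([ 1, -1]),    np.array([-1, 1])]
M = bam_encode(X, Y)
print(M)
print(bam_energy(X[0], Y[0], M))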
Retrieve the Nearest of a Pattern Pair, given any pair (ref : previous slide)

Example : Retrieve the nearest (Ai, Bi) pattern pair, given any pair (α, β).

The method and the equations for retrieval are :
- start with an initial condition which is any given pattern pair (α, β),
- determine a finite sequence of pattern pairs (α', β'), (α'', β''), . . . until an equilibrium point (αf, βf) is reached, where

    β'  = Φ (α M)     and   α'  = Φ (β' M^T)
    β'' = Φ (α' M)    and   α'' = Φ (β'' M^T)

    Φ (F) = G = ( g1, g2, . . . , gr ),   F = ( f1, f2, . . . , fr )

    gi = 1 if fi > 0 ;  gi = 0 (binary) or -1 (bipolar) if fi < 0 ;  gi = previous gi if fi = 0

    M is the correlation matrix.

Kosko has proved that this process will converge for any correlation matrix M.
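A sketch of this bidirectional retrieval loop in Python/NumPy for the bipolar case; the function names are assumptions, and the threshold Φ follows the definition above (keep the previous value at zero net input).

import numpy as np

def phi(f, previous):
    # bipolar threshold: +1 if f > 0, -1 if f < 0, previous value if f = 0
    return np.where(f > 0, 1, np.where(f < 0, -1, previous))

def bam_recall(M, alpha, beta, max_cycles=100):
    # iterate beta' = phi(alpha M), alpha' = phi(beta' M^T) until equilibrium
    for _ in range(max_cycles):
        new_beta = phi(alpha @ M, beta)
        new_alpha = phi(new_beta @ M.T, alpha)
        if np.array_equal(new_alpha, alpha) and np.array_equal(new_beta, beta):
            break
        alpha, beta = new_alpha, new_beta
    return alpha, beta     # the equilibrium point (alpha_f, beta_f)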

Addition and Deletion of Pattern Pairs

Given a set of pattern pairs (Xi, Yi), for i = 1, 2, . . . , n, and a correlation matrix M :
- a new pair (X', Y') can be added, or
- an existing pair (Xj, Yj) can be deleted from the memory model.

Addition : add a new pair (X', Y') to the existing correlation matrix M; then the new correlation matrix Mnew is given by

    Mnew = X1^T Y1 + X2^T Y2 + . . . . + Xn^T Yn + X'^T Y'

Deletion : subtract the matrix corresponding to an existing pair (Xj, Yj) from the correlation matrix M; then the new correlation matrix Mnew is given by

    Mnew = M - ( Xj^T Yj )

Note : The addition and deletion of information is similar to the functioning of the human memory system, exhibiting learning and forgetfulness.
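These two updates are one-line matrix operations; a minimal Python/NumPy sketch (function names are illustrative assumptions) is:

import numpy as np

def add_pair(M, X_new, Y_new):
    # M_new = M + X'^T Y'
    return M + np.outer(X_new, Y_new)

def delete_pair(M, X_j, Y_j):
    # M_new = M - (Xj^T Yj)
    return M - np.outer(X_j, Y_j)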

4.2 Energy Function for BAM

Note : A system that changes with time is a dynamic system. There are two types of dynamics in a neural network : during the training phase it iteratively updates weights, and during the production phase it asymptotically converges to the solution patterns. The state is a collection of qualitative and quantitative items that characterize the system, e.g., weights, data flows. The energy function (or Lyapunov function) is a bounded function of the system state that decreases with time; the system solution is the minimum energy.

Let a pair (A, B) define the state of a BAM.
- To store a pattern, the value of the energy function for that pattern has to occupy a minimum point in the energy landscape.
- Also, adding new patterns must not destroy the previously stored patterns.

The stability of a BAM can be proved by identifying the energy function E with each state (A, B).

For an auto-associative memory, the energy function is E(A) = - A M A^T.

For a bidirectional hetero-associative memory, the energy function is E(A, B) = - A M B^T; for the particular case A = B, it corresponds to the Hopfield auto-associative energy function.

We wish to retrieve the nearest (Ai, Bi) pair when any (α, β) pair is presented as the initial condition to the BAM. The neurons change their states until a bidirectional stable state (Af, Bf) is reached. Kosko has shown that such a stable state is reached for any matrix M and that it corresponds to a local minimum of the energy function. Each cycle of decoding lowers the energy E, where the energy for any point (α, β) is given by E = - α M β^T.

If the energy E = - Ai M Bi^T evaluated using the coordinates of the pair (Ai, Bi) does not constitute a local minimum, then that point cannot be recalled, even though one starts with α = Ai. Thus, Kosko's encoding method does not ensure that the stored pairs are at a local minimum.

Example : Kosko's BAM for Retrieval of Associated Pair

The working of Kosko's BAM for retrieval of an associated pair : start with X3, and hope to retrieve the associated pair Y3. Consider N = 3 pattern pairs (A1, B1), (A2, B2), (A3, B3) given by

    A1 = ( 1 0 0 0 0 1 )    B1 = ( 1 1 0 0 0 )
    A2 = ( 0 1 1 0 0 0 )    B2 = ( 1 0 1 0 0 )
    A3 = ( 0 0 1 0 1 1 )    B3 = ( 0 1 1 1 0 )

Convert these three binary patterns to bipolar form by replacing 0s by -1s :

    X1 = (  1 -1 -1 -1 -1  1 )    Y1 = (  1  1 -1 -1 -1 )
    X2 = ( -1  1  1 -1 -1 -1 )    Y2 = (  1 -1  1 -1 -1 )
    X3 = ( -1 -1  1 -1  1  1 )    Y3 = ( -1  1  1  1 -1 )

The correlation matrix M = X1^T Y1 + X2^T Y2 + X3^T Y3 is calculated as the 6 x 5 matrix

    M =    1  1 -3 -1  1
           1 -3  1 -1  1
          -1 -1  3  1 -1
          -1 -1 -1  1  3
          -3  1  1  3  1
          -1  3 -1  1 -1

Suppose we start with α = X3, and we hope to retrieve the associated pair Y3. The calculations for the retrieval of Y3 yield :

    α M          = ( -6  6  6  6 -6 )
    Φ (α M)      = ( -1  1  1  1 -1 ) = β'
    β' M^T       = ( -5 -5  5 -3  7  5 )
    Φ (β' M^T)   = ( -1 -1  1 -1  1  1 ) = α'
    α' M         = ( -6  6  6  6 -6 )
    Φ (α' M)     = ( -1  1  1  1 -1 ) = β'' = β'

This retrieved pattern β' is the same as Y3. Hence, (αf, βf) = (X3, Y3) is correctly recalled, a desired result.
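This example can be checked numerically with a self-contained Python/NumPy sketch (an illustrative verification; the threshold Φ and variable names follow the assumptions made in the earlier sketches):

import numpy as np

def phi(f, prev):
    return np.where(f > 0, 1, np.where(f < 0, -1, prev))

X = [np.array([ 1, -1, -1, -1, -1,  1]),
     np.array([-1,  1,  1, -1, -1, -1]),
     np.array([-1, -1,  1, -1,  1,  1])]
Y = [np.array([ 1,  1, -1, -1, -1]),
     np.array([ 1, -1,  1, -1, -1]),
     np.array([-1,  1,  1,  1, -1])]
M = sum(np.outer(x, y) for x, y in zip(X, Y))    # the 6 x 5 correlation matrix above

alpha = X[2]                                     # start with X3
beta = phi(alpha @ M, np.ones(5, dtype=int))     # beta'  = (-1, 1, 1, 1, -1) = Y3
alpha = phi(beta @ M.T, alpha)                   # alpha' = X3, so the cycle is stable
print(beta)                                      # Y3 is correctly recalled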

Example : Incorrect Recall by Kosko's BAM

The working of incorrect recall by Kosko's BAM : start with X2, and hope to retrieve the associated pair Y2. Consider N = 3 pattern pairs (A1, B1), (A2, B2), (A3, B3) given by

    A1 = ( 1 0 0 1 1 1 0 0 0 )    B1 = ( 1 1 1 0 0 0 0 1 0 )
    A2 = ( 0 1 1 1 0 0 1 1 1 )    B2 = ( 1 0 0 0 0 0 0 0 1 )
    A3 = ( 1 0 1 0 1 1 0 1 1 )    B3 = ( 0 1 0 1 0 0 1 0 1 )

Convert these three binary patterns to bipolar form by replacing 0s by -1s :

    X1 = (  1 -1 -1  1  1  1 -1 -1 -1 )    Y1 = (  1  1  1 -1 -1 -1 -1  1 -1 )
    X2 = ( -1  1  1  1 -1 -1  1  1  1 )    Y2 = (  1 -1 -1 -1 -1 -1 -1 -1  1 )
    X3 = (  1 -1  1 -1  1  1 -1  1  1 )    Y3 = ( -1  1 -1  1 -1 -1  1 -1  1 )

The correlation matrix M = X1^T Y1 + X2^T Y2 + X3^T Y3 is calculated as the 9 x 9 matrix

    M =   -1  3  1  1 -1 -1  1  1 -1
           1 -3 -1 -1  1  1 -1 -1  1
          -1 -1 -3  1 -1 -1  1 -3  3
           3 -1  1 -3 -1 -1 -3  1 -1
          -1  3  1  1 -1 -1  1  1 -1
          -1  3  1  1 -1 -1  1  1 -1
           1 -3 -1 -1  1  1 -1 -1  1
          -1 -1 -3  1 -1 -1  1 -3  3
          -1 -1 -3  1 -1 -1  1 -3  3

Let the pair (X2, Y2) be recalled :

    X2 = ( -1 1 1 1 -1 -1 1 1 1 )    Y2 = ( 1 -1 -1 -1 -1 -1 -1 -1 1 )

Start with α = X2, and hope to retrieve the associated pair Y2. The calculations for the retrieval of Y2 yield :

    α M          = (   5 -19 -13  -5   1   1  -5 -13  13 )
    Φ (α M)      = (   1  -1  -1  -1   1   1  -1  -1   1 ) = β'
    β' M^T       = ( -11  11   5   5 -11 -11  11   5   5 )
    Φ (β' M^T)   = (  -1   1   1   1  -1  -1   1   1   1 ) = α'
    α' M         = (   5 -19 -13  -5   1   1  -5 -13  13 )
    Φ (α' M)     = (   1  -1  -1  -1   1   1  -1  -1   1 ) = β'' = β'

Here β'' = β'. Hence the cycle terminates with

    αF = α' = ( -1  1  1  1 -1 -1  1  1  1 ) = X2
    βF = β' = (  1 -1 -1 -1  1  1 -1 -1  1 ) ≠ Y2

But β' is not Y2. Thus the vector pair (X2, Y2) is not recalled correctly by Kosko's decoding process.

Check with the Energy Function : Compute the energy functions
- for the coordinates of the pair (X2, Y2), the energy E2 = - X2 M Y2^T = -71,
- for the coordinates of the pair (αF, βF), the energy EF = - αF M βF^T = -75.

However, that the pair (X2, Y2) is not at its local minimum can be shown by evaluating the energy E at a point which is "one Hamming distance" away from Y2. To do this, consider a point

    Y2' = ( 1 -1 -1 -1 1 -1 -1 -1 1 )

where the fifth component -1 of Y2 has been changed to 1. Now

    E = - X2 M Y2'^T = -73

which is lower than E2, confirming the hypothesis that (X2, Y2) is not at its local minimum of E.

Note : The correlation matrix M used by Kosko does not guarantee that the energy of a training pair is at its local minimum. Therefore, a pair Pi can be recalled if and only if this pair is at a local minimum of the energy surface.

4.3 Multiple Training Encoding Strategy

Note (ref. example in the previous section) : Kosko extended the unidirectional auto-associative to bidirectional associative processes, using the correlation matrix M = Σ Xi^T Yi computed from the pattern pairs. The system proceeds to retrieve the nearest pair given any pair (α, β), with the help of the recall equations. However, Kosko's encoding method does not ensure that the stored pairs are at a local minimum and hence it can result in incorrect recall.

Wang and others introduced the multiple training encoding strategy, which ensures the correct recall of pattern pairs. This encoding strategy is an enhancement / generalization of Kosko's encoding strategy. Wang's generalized correlation matrix is

    M = Σ qi Xi^T Yi

where qi, viewed as the pair weight for Xi^T Yi, is a positive real number. It denotes the minimum number of times a pattern pair (Xi, Yi) has to be used in training to guarantee recall of that pair.

To recover a pair (Ai, Bi) using multiple training of order q, let us augment or supplement the matrix M with a matrix P defined as

    P = (q - 1) Xi^T Yi

where (Xi, Yi) are the bipolar forms of (Ai, Bi). The augmentation implies adding (q - 1) more pairs located at (Ai, Bi) to the existing correlation matrix. As a result, the energy E' can be reduced to an arbitrarily low value by a suitable choice of q. This also ensures that the energy at (Ai, Bi) does not exceed the energy at points which are one Hamming distance away from this location. The new value of the energy function E evaluated at (Ai, Bi) then becomes

    E' (Ai, Bi) = - Ai M Bi^T - (q - 1) Ai Xi^T Yi Bi^T

The next few slides explain the step-by-step implementation of the multiple training encoding strategy for the recall of the three pattern pairs (X1, Y1), (X2, Y2), (X3, Y3) using one and the same augmentation matrix M. Also, an algorithm to summarize the complete process of multiple training encoding is given.

Example : Multiple Training Encoding Strategy

The working of the multiple training encoding strategy which ensures the correct recall of pattern pairs. Consider N = 3 pattern pairs (A1, B1), (A2, B2), (A3, B3) given by

    A1 = ( 1 0 0 1 1 1 0 0 0 )    B1 = ( 1 1 1 0 0 0 0 1 0 )
    A2 = ( 0 1 1 1 0 0 1 1 1 )    B2 = ( 1 0 0 0 0 0 0 0 1 )
    A3 = ( 1 0 1 0 1 1 0 1 1 )    B3 = ( 0 1 0 1 0 0 1 0 1 )

Convert these three binary patterns to bipolar form by replacing 0s by -1s :

    X1 = (  1 -1 -1  1  1  1 -1 -1 -1 )    Y1 = (  1  1  1 -1 -1 -1 -1  1 -1 )
    X2 = ( -1  1  1  1 -1 -1  1  1  1 )    Y2 = (  1 -1 -1 -1 -1 -1 -1 -1  1 )
    X3 = (  1 -1  1 -1  1  1 -1  1  1 )    Y3 = ( -1  1 -1  1 -1 -1  1 -1  1 )

Let the pair (X2, Y2) be recalled.

Choose q = 2, so that P = X2^T Y2; the augmented correlation matrix M = X1^T Y1 + 2 X2^T Y2 + X3^T Y3 becomes

    M =   -2  4  2  2  0  0  2  2 -2
           2 -4 -2 -2  0  0 -2 -2  2
           0 -2 -4  0 -2 -2  0 -4  4
           4 -2  0 -4 -2 -2 -4  0  0
          -2  4  2  2  0  0  2  2 -2
          -2  4  2  2  0  0  2  2 -2
           2 -4 -2 -2  0  0 -2 -2  2
           0 -2 -4  0 -2 -2  0 -4  4
           0 -2 -4  0 -2 -2  0 -4  4

Now give α = X2, and see that the corresponding pattern pair β = Y2 is correctly recalled, as shown below :

    α M          = (  14 -28 -22 -14  -8  -8 -14 -22  22 )
    Φ (α M)      = (   1  -1  -1  -1  -1  -1  -1  -1   1 ) = β'
    β' M^T       = ( -16  16  18  18 -16 -16  16  18  18 )
    Φ (β' M^T)   = (  -1   1   1   1  -1  -1   1   1   1 ) = α'
    α' M         = (  14 -28 -22 -14  -8  -8 -14 -22  22 )
    Φ (α' M)     = (   1  -1  -1  -1  -1  -1  -1  -1   1 ) = β'' = β'

Here β'' = β'. Hence the cycle terminates with

    αF = α' = ( -1  1  1  1 -1 -1  1  1  1 ) = X2
    βF = β' = (  1 -1 -1 -1 -1 -1 -1 -1  1 ) = Y2

Thus, (X2, Y2) is correctly recalled using the augmented correlation matrix M. But it is not possible to recall (X1, Y1) using the same matrix M, as shown in the next slide.

Note : The previous slide showed that the pattern pair (X2, Y2) is correctly recalled using the augmented correlation matrix

    M = X1^T Y1 + 2 X2^T Y2 + X3^T Y3

but the same matrix cannot correctly recall the other pattern pair (X1, Y1), as shown below.

    X1 = ( 1 -1 -1 1 1 1 -1 -1 -1 )    Y1 = ( 1 1 1 -1 -1 -1 -1 1 -1 )

Let α = X1; to retrieve the associated pair Y1, the calculation shows :

    α M          = (  -6  24  22   6   4   4   6  22 -22 )
    Φ (α M)      = (  -1   1   1   1   1   1   1   1  -1 ) = β'
    β' M^T       = (  16 -16 -18 -18  16  16 -16 -18 -18 )
    Φ (β' M^T)   = (   1  -1  -1  -1   1   1  -1  -1  -1 ) = α'
    α' M         = ( -14  28  22  14   8   8  14  22 -22 )
    Φ (α' M)     = (  -1   1   1   1   1   1   1   1  -1 ) = β'' = β'

Here β'' = β'. Hence the cycle terminates with

    αF = α' = (  1 -1 -1 -1  1  1 -1 -1 -1 ) ≠ X1
    βF = β' = ( -1  1  1  1  1  1  1  1 -1 ) ≠ Y1

Thus, the pattern pair (X1, Y1) is not correctly recalled using the augmented correlation matrix M. To tackle this problem, the augmented correlation matrix needs to be further augmented by multiple training of (X1, Y1), as shown in the next slide.

The previous slide shows that the pattern pair (X1, Y1) cannot be recalled under the same augmentation matrix M that is able to recall (X2, Y2). However, this problem can be solved by multiple training of (X1, Y1), which yields a further change in M, by defining

    M = 2 X1^T Y1 + 2 X2^T Y2 + X3^T Y3 =   -1  5  3  1 -1 -1  1  3 -3
                                             1 -5 -3 -1  1  1 -1 -3  3
                                            -1 -3 -5  1 -1 -1  1 -5  5
                                             5 -1  1 -5 -3 -3 -5  1 -1
                                            -1  5  3  1 -1 -1  1  3 -3
                                            -1  5  3  1 -1 -1  1  3 -3
                                             1 -5 -3 -1  1  1 -1 -3  3
                                            -1 -3 -5  1 -1 -1  1 -5  5
                                            -1 -3 -5  1 -1 -1  1 -5  5

Now observe in the next slide that all three pairs can be correctly recalled.

Recall of pattern pair (X1, Y1) :

    X1 = ( 1 -1 -1 1 1 1 -1 -1 -1 )    Y1 = ( 1 1 1 -1 -1 -1 -1 1 -1 )

Let α = X1; to retrieve the associated pair Y1, the calculation shows :

    α M          = (  3  33  31  -3  -5  -5  -3  31 -31 )
    Φ (α M)      = (  1   1   1  -1  -1  -1  -1   1  -1 ) = β'
    β' M^T       = ( 13 -13 -19  23  13  13 -13 -19 -19 )
    Φ (β' M^T)   = (  1  -1  -1   1   1   1  -1  -1  -1 ) = α'
    α' M         = (  3  33  31  -3  -5  -5  -3  31 -31 )
    Φ (α' M)     = (  1   1   1  -1  -1  -1  -1   1  -1 ) = β'' = β'

Here β'' = β'. Hence the cycle terminates with αF = α' = X1 and βF = β' = Y1. Thus, the pattern pair (X1, Y1) is correctly recalled.

Recall of pattern pair (X2, Y2) :

    X2 = ( -1 1 1 1 -1 -1 1 1 1 )    Y2 = ( 1 -1 -1 -1 -1 -1 -1 -1 1 )

Let α = X2; to retrieve the associated pair Y2, the calculation shows :

    α M          = (   7 -35 -29  -7  -1  -1  -7 -29  29 )
    Φ (α M)      = (   1  -1  -1  -1  -1  -1  -1  -1   1 ) = β'
    β' M^T       = ( -15  15  17  19 -15 -15  15  17  17 )
    Φ (β' M^T)   = (  -1   1   1   1  -1  -1   1   1   1 ) = α'
    α' M         = (   7 -35 -29  -7  -1  -1  -7 -29  29 )
    Φ (α' M)     = (   1  -1  -1  -1  -1  -1  -1  -1   1 ) = β'' = β'

Here β'' = β'. Hence the cycle terminates with αF = α' = X2 and βF = β' = Y2. Thus, the pattern pair (X2, Y2) is correctly recalled.

Recall of pattern pair (X3, Y3) :

    X3 = ( 1 -1 1 -1 1 1 -1 1 1 )    Y3 = ( -1 1 -1 1 -1 -1 1 -1 1 )

Let α = X3; to retrieve the associated pair Y3, the calculation shows :

    α M          = ( -13  17  -1  13  -5  -5  13  -1   1 )
    Φ (α M)      = (  -1   1  -1   1  -1  -1   1  -1   1 ) = β'
    β' M^T       = (   1  -1  17 -13   1   1  -1  17  17 )
    Φ (β' M^T)   = (   1  -1   1  -1   1   1  -1   1   1 ) = α'
    α' M         = ( -13  17  -1  13  -5  -5  13  -1   1 )
    Φ (α' M)     = (  -1   1  -1   1  -1  -1   1  -1   1 ) = β'' = β'

Here β'' = β'. Hence the cycle terminates with αF = α' = X3 and βF = β' = Y3. Thus, the pattern pair (X3, Y3) is correctly recalled.

Thus, the multiple training encoding strategy ensures the correct recall of a pair for a suitable augmentation of M. The generalization of the correlation matrix, for the correct recall of all training pairs, is written as

    M = Σ (i=1 to N) qi Xi^T Yi

where the qi's are positive real numbers. This modified correlation matrix is called the generalized correlation matrix. Using one and the same augmentation matrix M, it is possible to recall all the training pattern pairs.

Algorithm (for the Multiple Training Encoding strategy)

To summarize, the complete process of multiple training encoding is given as an algorithm below.

Algorithm Mul_Tr_Encode ( N, X, Y, q )
where
    N : number of stored pattern pairs
    X, Y : the bipolar pattern pairs
        X = ( X1, X2, . . . , XN ), where Xi = ( x_i1, x_i2, . . . , x_in )
        Y = ( Y1, Y2, . . . , YN ), where Yj = ( y_j1, y_j2, . . . , y_jn )
    q : the weight vector ( q1, q2, . . . , qN )

Step 1  Initialize the correlation matrix M to the null matrix : M ← [0]
Step 2  Compute the correlation matrix M as
        For i ← 1 to N
            M ← M ⊕ [ qi * Transpose(Xi) ⊗ Yi ]
        end
        (symbols : ⊕ matrix addition, ⊗ matrix multiplication, * scalar multiplication)
Step 3  Read the input bipolar pattern Ā
Step 4  Compute A_M, where A_M ← Ā ⊗ M
Step 5  Apply the threshold function Φ to A_M to get B', i.e., B' ← Φ ( A_M ),
        where Φ is defined as Φ(F) = G = ( g1, g2, . . . , gn )
Step 6  Output B', the associated pattern pair
end
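A runnable Python/NumPy sketch of this algorithm; the function and variable names mirror the pseudocode but are otherwise illustrative assumptions. The simple sign threshold used here suffices because no zero net inputs arise in this example.

import numpy as np

def mul_tr_encode(X, Y, q):
    # Step 1-2 : M = sum_i q_i * Transpose(X_i) (x) Y_i
    M = np.zeros((len(X[0]), len(Y[0])))
    for Xi, Yi, qi in zip(X, Y, q):
        M = M + qi * np.outer(Xi, Yi)
    return M

def mul_tr_recall(M, A):
    # Steps 3-6 : A_M = A (x) M, then apply the bipolar threshold to get B'
    A_M = np.asarray(A) @ M
    return np.where(A_M > 0, 1, -1)

# Example : the three pattern pairs of the previous slides with weights q = (2, 2, 1)
X = [np.array([ 1,-1,-1, 1, 1, 1,-1,-1,-1]),
     np.array([-1, 1, 1, 1,-1,-1, 1, 1, 1]),
     np.array([ 1,-1, 1,-1, 1, 1,-1, 1, 1])]
Y = [np.array([ 1, 1, 1,-1,-1,-1,-1, 1,-1]),
     np.array([ 1,-1,-1,-1,-1,-1,-1,-1, 1]),
     np.array([-1, 1,-1, 1,-1,-1, 1,-1, 1])]
M = mul_tr_encode(X, Y, q=(2, 2, 1))
for Xi, Yi in zip(X, Y):
    print(np.array_equal(mul_tr_recall(M, Xi), Yi))   # True, True, True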

5. References : Textbooks

1. "Neural Network, Fuzzy Logic, and Genetic Algorithms - Synthesis and Applications", by S. Rajasekaran and G.A. Vijayalaksmi Pai, (2005), Prentice Hall, Chapter 4, page 87-116.
2. "Elements of Artificial Neural Networks", by Kishan Mehrotra, Chilukuri K. Mohan and Sanjay Ranka, (1996), MIT Press, Chapter 6, page 217-263.
3. "Fundamentals of Neural Networks: Architecture, Algorithms and Applications", by Laurene V. Fausett, (1993), Prentice Hall, Chapter 3, page 101-152.
4. "Neural Network Design", by Martin T. Hagan, Howard B. Demuth and Mark Hudson Beale, (1996), PWS Publ. Company, Chapter 13, page 13-1 to 13-37.
5. "An Introduction to Neural Networks", by James A. Anderson, (1997), MIT Press, Chapter 6-7, page 143-208.
6. Related documents from open source, mainly internet. An exhaustive list is being prepared for inclusion at a later date.