
Associative Learning

Since NNs are modelled after the human brain, we will first explore
associative learning in humans, then move on to NNs themselves.

What is associative learning?
Associative learning is the process by which an association
between two stimuli, or between a behaviour and a stimulus, is learned.
It is the human ability to retrieve information from associated stimuli.
Example 1: you can recognize a relative of yours after many
years of not seeing them; even if they have changed and
grown older, you still recall who they are.
Example 2: Human memory is essentially associative. One thing
may remind us of another, and that of another, and so on. We use
a chain of mental associations to recover a lost memory. If we
forget where we left an umbrella, we try to recall where we last
had it, what we were doing, and who we were talking to. We
attempt to establish a chain of associations, and thereby to
restore a lost memory.

Association, Not Isolation


The brain provides the perfect environment for
associations to prosper.
DO THIS NOW.
Close your eyes and take 15 seconds to try
visualizing only the nose on your mother's face.
Nothing else, just her nose. Now take 15 seconds
and try to visualize only the stove in your kitchen.
Nothing else, just the stove. You can't do it, because
as soon as you try to imagine your mom's nose, up
come her eyes, cheeks, and chin. And, as soon as
you try to think of only your stove, up pop the
countertops and cupboards that surround it.

There are three major reasons why your brain has difficulty
thinking of things in isolation and why it has a powerful
tendency to make associations even when you don't want it to:
1. A memory is not held in a single cell, as the "grandmother cell" theory of
learning once proposed, but instead in widely distributed groups of
interconnected neurons.

2. One neuron in a network that holds a single memory has the capacity
to be part of hundreds, if not thousands, of other memory networks.

3. Neurons that "wire together, fire together." This phrase means that,
when a neuron that is part of one memory network is activated, it will
automatically fire up all the other neurons, in other memory networks, with
which it has been previously linked.

Pavlov's Dog Experiment


In this experiment, at first, the conductivity of the devices was
programmed so that only the sight of food (unconditioned stimulus)
triggers the activation of the output neuron (salivation). Before the
association is made, the hearing of the bell (neutral stimulus) does not
trigger the salivation, as the conductivity of the corresponding synapse is
below the threshold of the output neuron. When both inputs are active
simultaneously, the conductivity of the second synapse increases until it
reaches the threshold (conditioning): the association is made.
From this point, the hearing of the bell alone triggers the activation of
the output neuron (conditioned stimulus). In figure 8 we can verify that,
without feedback from the output neuron, the conductivity change implied
by the pre-synaptic pulses alone is not enough to create the association.
Our associative learning circuit is symmetrical, which means that there is
no difference between the input receiving the unconditioned stimulus (sight
of food) and the input receiving the neutral stimulus (bell sound). The
conditioning input is simply the one that is initially programmed to have a
synaptic weight above the output neuron's threshold.
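A minimal Python sketch of this circuit, assuming a two-input threshold neuron; the threshold, initial weights, and learning step are illustrative values, not those of the original devices:

```python
# Two-input threshold neuron modelling the conditioning circuit.
# THRESHOLD, STEP, and the initial weights are illustrative assumptions.
THRESHOLD = 1.0   # firing threshold of the output neuron (salivation)
STEP = 0.25       # conductivity increase per paired presentation

w = {"food": 1.2,  # unconditioned stimulus: programmed above threshold
     "bell": 0.2}  # neutral stimulus: starts below threshold

def output_fires(food, bell):
    """The output neuron fires when its summed weighted input
    reaches the threshold."""
    return w["food"] * food + w["bell"] * bell >= THRESHOLD

print(output_fires(food=0, bell=1))  # False: the bell alone is neutral

# Conditioning: food and bell are presented together. The bell synapse
# is strengthened only while the output neuron is firing, and only up
# to the threshold, as described above.
while not output_fires(food=0, bell=1):
    if output_fires(food=1, bell=1):
        w["bell"] = min(w["bell"] + STEP, THRESHOLD)

print(output_fires(food=0, bell=1))  # True: the bell now triggers salivation
```

The `if` condition implements the feedback from the output neuron: the bell weight grows only when the post-synaptic neuron is active, mirroring the observation that pre-synaptic pulses alone are not enough to create the association.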

Hebbian
In the 1940s, soon after he recognized that a memory was held not in a
single cell but in groups of "cell assemblies," Donald O. Hebb realized
that associative learning can only be created when large groups of neural
networks are fired simultaneously. In his now famous Essay on Mind, he
wrote, "These self-re-exciting systems (cell assemblies) could not consist
of one circuit of two or three neurons, but must have a number of
circuits... I could assume that when a number of neurons in the cortex are
excited... they tend to become interconnected, some of them at least
forming a multi-circuit closed system... The idea then (1945) was that a
percept consists of assemblies... a concept of assemblies excited centrally
by other assemblies."
Modern research has confirmed Hebb's findings, and today the idea that
associations are made, learning created, and memory cemented when
large groups of neurons fire simultaneously is often called the "Hebbian
synaptic learning rule." When neurons fire in unison, memory is
enhanced because the possibility is increased that a neuron will be
stimulated at more than one location.
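In its simplest mathematical form, this rule strengthens a synapse in proportion to the joint activity of its pre- and post-synaptic neurons, Δw = η · pre · post. A minimal sketch, with an assumed learning rate and toy activity patterns:

```python
import numpy as np

eta = 0.1                        # learning rate (assumed)
pre = np.array([1.0, 0.0, 1.0])  # activities of three pre-synaptic neurons
post = 1.0                       # activity of the post-synaptic neuron

w = np.zeros(3)                  # synaptic weights, initially silent
for _ in range(10):              # repeated simultaneous firing
    w += eta * post * pre        # Hebb: delta_w = eta * pre * post

print(w)  # [1. 0. 1.]: only the co-active synapses were strengthened
```

The inactive middle neuron never fires together with the output, so its weight stays at zero: associations form only between units that are active in unison.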

Associative learning in NN
The goal of learning is to associate known input vectors with given
output vectors. Contrary to continuous mappings, the neighbourhood of a
known input vector x should also be mapped to the image y of x; that is,
if B(x) denotes all vectors whose distance from x (using a suitable
metric) is lower than some positive constant ε, then we expect the
network to map B(x) to y. Noisy input vectors can then be associated
with the correct output.

A learning algorithm derived from biological neurons, called Hebbian
learning, can be used to train associative networks.
There are three overlapping kinds of associative networks (a worked
sketch follows the list):
1. Heteroassociative networks map m input vectors x1, x2, ..., xm in
n-dimensional space to m output vectors y1, y2, ..., ym in
k-dimensional space, so that xi → yi. If ‖x − xi‖₂ < ε, then x → yi. This
should be achieved by the learning algorithm, but becomes very hard
when the number m of vectors to be learned is too high.
2. Autoassociative networks are a special subset of the
heteroassociative networks, in which each vector is associated with
itself, i.e., yi = xi for i = 1, ..., m. The function of such networks is to
correct noisy input vectors.
3. Pattern recognition networks are also a special type of
heteroassociative networks. Each vector xi is associated with the
scalar i. The goal of such a network is to identify the name of the
input pattern.
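A minimal sketch of a heteroassociative network of kind 1, assuming bipolar (±1) patterns and the classical Hebbian outer-product rule W = Σᵢ yᵢ xᵢᵀ; the specific patterns are illustrative:

```python
import numpy as np

# Stored pairs: two inputs in n = 4 dimensions, two outputs in k = 2.
X = np.array([[ 1, -1,  1, -1],   # x1
              [ 1,  1, -1, -1]])  # x2
Y = np.array([[ 1, -1],           # y1
              [-1,  1]])          # y2

W = Y.T @ X                       # Hebbian learning: sum of outer products

def recall(x):
    """Map an input to an output through the learned weights."""
    return np.sign(W @ x)

print(recall(X[0]))                # [ 1 -1] -> y1
noisy = np.array([1, -1, 1, 1])    # x1 with one component flipped
print(recall(noisy))               # still [ 1 -1]: B(x1) maps to y1
```

Setting Y = X in this sketch gives an autoassociative network (kind 2), whose recall corrects the noisy input itself.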

Cont'd
Associative mapping is a process in which the network learns to produce a
particular pattern on the set of output units whenever another particular
pattern is applied on the set of input units. The associative mapping can
generally be broken down into two mechanisms:
auto-association: an input pattern is associated with itself, and the states of
the input and output units coincide. This is used to provide pattern completion,
i.e. to produce a whole pattern whenever a portion of it or a distorted pattern is
presented. In the second case (hetero-association), the network actually stores
pairs of patterns, building an association between two sets of patterns.

hetero-association: related to two recall mechanisms (both sketched below):

nearest-neighbour recall, where the output pattern produced corresponds to the
stored input pattern closest to the pattern presented, and
interpolative recall, where the output pattern is a similarity-dependent interpolation
of the stored patterns corresponding to the pattern presented.

Yet another paradigm, which is a variant of associative mapping, is classification,
i.e. when there is a fixed set of categories into which the input patterns are to be
classified.
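A minimal sketch of both recall mechanisms, assuming Euclidean distance as the "suitable metric" and illustrative stored pairs; the exponential similarity weighting used for interpolative recall is one possible choice, not prescribed above:

```python
import numpy as np

stored_inputs = np.array([[1.0, 0.0, 0.0],
                          [0.0, 1.0, 1.0]])
stored_outputs = np.array([[1.0, 0.0],
                           [0.0, 1.0]])

def nearest_neighbour_recall(x):
    """Return the output paired with the stored input closest to x."""
    d = np.linalg.norm(stored_inputs - x, axis=1)
    return stored_outputs[np.argmin(d)]

def interpolative_recall(x, beta=4.0):
    """Blend the stored outputs, weighting each by its input's
    similarity to x (beta controls the sharpness; an assumption)."""
    d = np.linalg.norm(stored_inputs - x, axis=1)
    sim = np.exp(-beta * d)
    return (sim / sim.sum()) @ stored_outputs

distorted = np.array([0.9, 0.2, 0.0])       # noisy version of the first input
print(nearest_neighbour_recall(distorted))  # [1. 0.]: the stored output
print(interpolative_recall(distorted))      # mostly [1. 0.], slightly blended
```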
