Connectionist/Symbolic Cognitive Architecture
Contents
2 Principles of the Integrated Connectionist/Symbolic Cognitive Architecture
  1 Symbolic structures as patterns of activation
    Box 1. Symbolic structures as activation patterns: RepICS illustrated
  2 Symbolic operations as patterns of connections
    Box 2. A simple grammar in a connectionist weight matrix
  3 Optimization in neural networks
    Box 3. Processing with a simple grammar
  4 Optimization in grammar I: Harmonic Grammar
  5 Optimization in grammar II: Optimality Theory
  6 Universal Grammar, universal grammar, and innate knowledge
  7 Treatment of ICS principles in the book
  8 The big picture: An ICS map
    8.1 Representations: Form
    8.2 Well-formedness: Computational perspective
    8.3 Well-formedness: Empirical perspective
    8.4 Functions
    8.5 Processing
    8.6 Putting it all together
    8.7 Learning
  References
Our task in this chapter is to articulate the fundamental working hypothesis of
the Integrated Connectionist/Symbolic Cognitive Architecture (ICS). This
hypothesis was formulated roughly in Chapter 1 (13) as 'Cognition is massively parallel numerical computation, some dimensions of which are organized to realize crucial facets of symbolic computation.' We now unpack this into four subhypotheses, which
will be developed and explored throughout the remainder of this book. In Section 8,
we illustrate rather concretely how these principles apply to a cognitive function, sen-
tence comprehension.
1 In the form developed in ICS theory, this principle emerged in the late 1980s and early 1990s
through a number of lines of research, the most directly relevant including work by Dolan and Dyer
(1987), Smolensky (1987; 1990, et seq.), Pollack (1988; 1990), Dolan (1989), and Plate (1991; 1994).
Much additional relevant literature is discussed in Chapters 7–8.
A major contribution of the research program presented in this book is the dem-
onstration that a collection of numerical activity vectors can possess a kind of global
structure that is describable as a symbol system. But unfortunately, unlike the case of
the marching band, the global structure of spaces of activation vectors is not a kind of
higher-level organization that our perceptual systems give our awareness direct ac-
cess to. We need the imaging device provided by the appropriate mathematics to see
the higher-level structure of activation patterns.
This mathematics is quite accessible; it is introduced intuitively in Chapter 5.
That chapter presents a step-by-step development of these ideas, presuming only
knowledge of high-school algebra and geometry. Two central notions introduced
there are the vector operations of addition (denoted + or ⊕) and the tensor product (denoted ⊗). The formal statement of the principle RepICS that is developed in Chapter 5 (40) is previewed in (2) and illustrated in Figure 1 and Box 1.
(2) RepICS(HC): Integrated representation in higher cognition
a. In all cognitive domains, when analyzed at a lower level, mental representations are defined by the activation values of connectionist units. When analyzed at a higher level, these representations are distributed patterns of activity: activation vectors. For central aspects of higher cognitive domains, these vectors realize symbolic structures.2
b. Such a symbolic structure s is defined by a collection of structural roles {ri}
each of which may be occupied by a filler fi; s is a set of constituents, each
a filler/role binding fi/ri.
c. The connectionist realization of s is an activity vector

   s = Σi fi ⊗ ri

which is the sum of vectors realizing the filler/role bindings. In these tensor product representations, the pattern realizing the structure as a whole is the superposition of patterns realizing all of its constituents.3 And the pattern realizing a constituent is the tensor product of a pattern realizing the filler and a pattern realizing the structural role it occupies.
2 The intended distinction between representation and realization can be illustrated with the sound sequence for cat and its mental encoding: the symbol structure [kæt] represents the sound stream cat; the activation pattern (1, 0, …) realizes the symbol structure [kæt], and thereby represents the sound stream.
3 For symbol structures realized in activity vectors, the whole is literally the sum of the parts.
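To make the realization in (2c) concrete in code form, here is a minimal NumPy sketch. The filler and role vectors, and the helper names bind and realize, are our own illustrative choices, not the book's.

```python
import numpy as np

# Minimal sketch of principle (2c): a symbol structure is realized as the
# superposition of tensor-product bindings, s = sum_i f_i (x) r_i.
# The filler and role vectors are made up for illustration; any linearly
# independent choices behave the same way.
fillers = {"A": np.array([1.0, -1.0]), "B": np.array([1.0, 1.0])}
roles = {"r0": np.array([1.0, 0.0]),   # left position
         "r1": np.array([0.0, 1.0])}   # right position

def bind(f, r):
    """Realize one constituent f/r as the tensor (outer) product f (x) r."""
    return np.outer(f, r)

def realize(bindings):
    """Realize a structure as the superposition (sum) of its bindings."""
    return sum(bind(fillers[f], roles[r]) for f, r in bindings)

s_AB = realize([("A", "r0"), ("B", "r1")])   # the string AB = {A/r0, B/r1}
print(s_AB.ravel())   # flattened: a 4-unit activation pattern

# Unbinding: with orthonormal role vectors, each filler is recovered exactly.
print(s_AB @ roles["r0"])   # -> fillers["A"]
```

Because the roles here are orthonormal, unbinding is exact; Chapter 8 develops when and how such recovery works in general.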
In Figure 1, different shades of gray are used to indicate activity levels of network units (circles). The top plane shows a pattern of activity realizing the microtree [t a]. The pattern in the top plane is produced by taking the activation pattern realizing t in the left position (middle plane) and adding it to another pattern realizing a in the right position (bottom plane).
Previewing the development of Chapter 5, an example of the global structure re-
ferred to in (1) is illustrated in Figure 2. This figure illustrates the systematic ar-
rangement, in a four-dimensional space, of a collection of activation patterns (vectors)
over a group of four network units. These patterns realize a simple symbol system
containing two-symbol strings such as AB, BA, and so on. The four coordinates x, y, z,
w of the point labeled AB give the activity values of the pattern realizing the symbol
string AB (each cube represents a different value of w). As explained in Section 5:1.2.2,
the four locations of the realizations of AA, AB, AC, and AD have the same interrela-
tions as those of BA, BB, BC, and BD. This is but one example of how the organization
of the space of symbol structures is mirrored in the organization of the space of activ-
ity patterns realizing those symbol structures.
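The "same interrelations" claim can be checked directly: because the realization mapping is linear, changing the second symbol from A to B displaces the realization by one fixed 4-dimensional vector, whatever the first symbol is. A sketch, with illustrative filler vectors (not those underlying Figure 2):

```python
import numpy as np

# Sketch: the two-symbol string XY realized over four units as
# X (x) r0 + Y (x) r1. Filler vectors are illustrative; roles as before.
f = {"A": np.array([1.0, 1.0]), "B": np.array([1.0, -1.0]),
     "C": np.array([-1.0, 1.0]), "D": np.array([-1.0, -1.0])}
r0, r1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def string_vec(xy):
    """Four coordinates (x, y, z, w) of the pattern realizing the string xy."""
    x, y = xy
    return (np.outer(f[x], r0) + np.outer(f[y], r1)).ravel()

# Same interrelations: changing the second symbol from A to B displaces the
# realization by one fixed vector, whatever the first symbol is.
assert np.allclose(string_vec("AB") - string_vec("AA"),
                   string_vec("BB") - string_vec("BA"))
```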
The principle RepICS(HC) (2) is the foundation of all ICS theory; for this reason,
the bulk of Chapter 5 is devoted to a gentle, systematic exposition, assuming no prior
knowledge of vectors or tensors. Understanding RepICS(HC) at this intuitive level will
suffice for the remainder of the book. While a more formal conception is not needed
to follow subsequent chapters, Chapters 7 and 8 permit interested readers to pursue
representational issues further. Chapter 8 provides a more formal development of
tensor product representations, proving many of the crucial properties that are
merely asserted in Chapter 5. Chapter 7 shows how tensor product representations
provide the foundation of a large family of tightly related schemes for the connec-
tionist realization of symbolic structure. This family includes a number of schemes
that were proposed independently of tensor product representations and that seem to
be clearly unrelated to them (such as binding by synchronous neuronal firing; see
Section 7:5 and von der Malsburg and Schneider 1986; Gray et al. 1989; Eckhorn et al.
1990; Hummel and Biederman 1992; Shastri and Ajjanagadde 1993; Hummel and
Holyoak 2003; Wendelken and Shastri 2003).
Figure 2. [Plot: the points labeled AA, AB, …, DD, realizing the two-symbol strings, arranged systematically in a four-dimensional space with coordinates x, y, z, w; each cube shows a different value of w (w = −1, 0, 1, 2).]
Box 1. Symbolic structures as activation patterns: RepICS illustrated

[Figure A: a tensor product representation. Filler vectors: A, B, X, Y. Role vectors: r = (1 0 0), r0 = (0 1 1), r1 = (0 1 −1).]
The 12 units supporting the representation are the dark circles in the rectangle; the other, unfilled, circles are imaginary units that are useful for understanding the actual representation. The binding of a symbol (say, Y) to a role (say, the root position r) is denoted Y/r; the realization of this binding is the tensor product vector Y ⊗ r. Y is an activation pattern realizing Y (a filler vector), which is to be imagined to reside in the stack of four imaginary units to the left of the rectangle. In this example, we take this vector to be (1 1 1 1) and draw it as shown in Figure B, with black circles denoting activity 1, white circles −1, and gray circles 0.

r is the role vector realizing the role of root position; it is to be imagined on the three imaginary units beneath the rectangle. r resides in the portion of this pool of units devoted to depth 0 in the tree, and is the single number 1; the two units for depth-1 roles are both 0. By the definition of the tensor product, the activity of the unit within the rectangle in row i and column j is the activity of the filler unit in row i times the activity of the role unit in column j; this product is indicated for two units with the dashed lines. (The three columns are separated by vertical dotted lines.)
[Figure B: the binding Y/r. The filler vector Y occupies the four imaginary units to the left (values +1, 0, −1 drawn as black, gray, and white circles); the role vector r = (1 0 0) occupies the three imaginary units beneath (depth 0 vs. depth 1); their tensor product occupies the 12 units of the rectangle.]
To realize A at the right daughter node, the relevant role is r1, which in this example is realized as (0 1 −1). The pattern for A is taken to be (1 1 1 1); see Figure C.
[Figure C: the binding A/r1. The filler vector A (laid out as in Figure B) is bound to the role vector r1 = (0 1 −1); their tensor product occupies the 12 units.]
To realize the tree in which Y dominates the pair B A (that is, [Y B A]), we superimpose (add) the previous two patterns (for Y/r and A/r1) and then add a third vector realizing B/r0 (defined analogously to A/r1). The result is the pattern shown on
the top horizontal line of Figure D (the 12 units making up the representation are
drawn as a horizontal line, in their left-to-right sequence within the rectangle). The
area of a circle indicates the magnitude of the activation of the corresponding net-
work unit; white denotes negative values, black positive.
This display also shows the activation pattern realizing the tree [X A B] (second
horizontal line), as well as the patterns for smaller structures, such as X in root
position (bottom line), A in left daughter position (fifth line from the bottom, labeled A), and the pair of daughter nodes [B A] with no label at the root (third line from
the top). Activation patterns realizing more complex structures are built up by
superimposing the patterns realizing their constituents (which are themselves built up
in the same way). As shown in this figure (and Figure 2), the set of vectors realizing
these structures has a high degree of systematicity.
[Figure D: activation patterns over the 12 units, drawn as horizontal lines. From top to bottom: the full trees [Y B A] and [X A B]; the daughter pairs [B A] and [A B]; and the single constituents A, B, X, Y in their respective positions. Circle area shows activation magnitude; white denotes negative values, black positive.]
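The arithmetic of Box 1 can be reproduced in a few lines. In the sketch below, the role vectors follow the box, except that the sign of the last entry of r1 is our assumption (taken as −1 so that the three roles are mutually orthogonal); the ±1 filler patterns are likewise illustrative, not necessarily the book's.

```python
import numpy as np

# The Box 1 arithmetic as a sketch. Role vectors follow the box, with the
# sign of r1's last entry assumed; the filler patterns are illustrative.
r_root = np.array([1.0, 0.0, 0.0])
r0 = np.array([0.0, 1.0, 1.0])      # left daughter
r1 = np.array([0.0, 1.0, -1.0])     # right daughter (sign assumed)

Y = np.array([1.0, -1.0, 1.0, -1.0])   # assumed 4-unit filler patterns
A = np.array([1.0, 1.0, -1.0, -1.0])
B = np.array([-1.0, 1.0, 1.0, -1.0])

# The tree [Y B A]: Y at the root, B as left daughter, A as right daughter.
# Each binding is a 4x3 tensor product; the tree is their superposition.
tree = np.outer(Y, r_root) + np.outer(B, r0) + np.outer(A, r1)
print(tree.ravel())   # the pattern over the 12 units of the rectangle

# The roles are mutually orthogonal, so each filler unbinds exactly.
print(tree @ r_root / (r_root @ r_root))   # -> Y
print(tree @ r0 / (r0 @ r0))               # -> B
```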
[Figure 3: a weight matrix W mapping an input activation pattern to an output activation pattern. The input realizes the parse tree of a passive sentence (constituents including Aux and by); the output realizes its interpretation, with Agent and Patient constituents; Wa and Wp label the weight subpatterns that extract the Agent and the Patient.]
The global structure defined by principle (4) entails, we will show, that a connection-
ist network defined by this type of weight matrix can be described at a higher level as
computing a particular recursive function (which depends on W). The input-output
mapping of such a function could be specified by a sequential symbolic program, but
such a program would not describe how the network actually computes the function: it
Figure 3 Box 1
2 Principles of the Integrated Connectionist/Symbolic Architecture 73
employs massively parallel, numerical computation, with all parts of the input and
output symbol structures being processed at once. This is illustrated in Figure 3, in
which the global structure of the weight matrix (the large gray squares) is even
somewhat discernible to the naked eye. (See also Box 2.)
The 4×8 matrix of weights is displayed below, with the area of each circle showing the magnitude of the corresponding weight (white denotes negative weight; black, positive; 0 weights have area 0 and are not seen).

[Figure: the Box 2 weight matrix.]
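The sense in which "all parts of the input and output symbol structures are processed at once" can be illustrated with a toy weight matrix. In the sketch below, a symbol-level operation (swapping A and B) is realized as the single matrix W = M ⊗ I, so one parallel matrix-vector product transforms every constituent, whatever role it occupies. All vectors and the operation are our own illustration, not the Box 2 grammar matrix.

```python
import numpy as np

# Sketch of one-step parallel processing over tensor product
# representations: a symbol-level map M applied to every filler at once
# via the single weight matrix W = M (x) I.
fA, fB = np.array([1.0, -1.0]), np.array([1.0, 1.0])   # orthogonal fillers
r0, r1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])    # role vectors

M = (np.outer(fB, fA) + np.outer(fA, fB)) / 2.0   # maps fA -> fB, fB -> fA
W = np.kron(M, np.eye(2))                         # act on fillers, spare roles

s_AB = (np.outer(fA, r0) + np.outer(fB, r1)).ravel()   # realizes AB
s_BA = (np.outer(fB, r0) + np.outer(fA, r1)).ravel()   # realizes BA

assert np.allclose(W @ s_AB, s_BA)   # both constituents swapped in one step
```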
4 This principle is due to a number of neural network researchers in the early 1980s (Hopfield 1982,
1984; Cohen and Grossberg 1983; Hinton and Sejnowski 1983, 1986; Smolensky 1983, 1986; Golden
1986, 1988). The application to language (5b) was developed in the late 1980s (Lakoff 1988; Goldsmith
1990; Legendre, Miyata, and Smolensky 1990a, 1990b, 1990c).
… the parse tree that maximally satisfies a set of constraints defining a Harmony function. These constraints are the grammar: a harmonic grammar. (See Box 3.)
[Box 3 figure: graphs of the Harmony H and of activation magnitudes |a| as functions of time during parsing, alongside the activation patterns building up on the depth-0 and depth-1 units.]

Processing starts with all activation values at zero. The display under "A B ?" shows how the activation pattern on the
four depth-0 units builds up over time; the increase in activation value |a| is plotted
on the corresponding graph. The final activation pattern is (1 1 1 1)/2, the pattern for
X; the network has correctly parsed the string. As shown in the graph, during parsing
the Harmony H steadily increases to its final value (2.0).
Importantly, Harmony H can be viewed here in two complementary ways. On the
one hand, the correct output is defined as the representation that maximizes Har-
mony, given the input: this is a nondynamic, competence grammar view of H as a
static object that defines grammatical outputs. On the other hand, the dynamics of
the network that uses this grammar is a computational process that incrementally op-
timizes Harmony, steadily increasing H during processing until the final, H-maximiz-
ing state is reached. The dynamics is hill-climbing in H (Box 1:3): H specifies the
processing algorithm: it provides a performance theory. Thus, H unifies these
competence and performance perspectives, the static and dynamic views of knowl-
edge of language. (In more complex situations, the dynamic process computes only a
local Harmony maximum, while the grammar designates the global Harmony maxi-
mum as the well-formed output: this competence/performance discrepancy is dis-
cussed in Section 6:2.4.)
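The two views of H can be made concrete with a toy harmonic network. In the sketch below, Harmony is the quadratic form H(a) = ½ aᵀWa + b·a; the same H that defines the optimal (grammatical) state also drives the hill-climbing dynamics. The weights and input are made up for illustration.

```python
import numpy as np

# Toy harmonic network: Harmony H(a) = 1/2 a.W.a + b.a for symmetric W and
# external input b (both made up here). Gradient ascent climbs the
# Harmony landscape to the H-maximizing state.
A = np.random.default_rng(0).normal(size=(4, 4))
W = -(A @ A.T) / 4 - 0.1 * np.eye(4)     # negative definite: one H maximum
b = np.array([1.0, -0.5, 0.25, 0.0])

def H(a):
    return 0.5 * a @ W @ a + b @ a

a = np.zeros(4)
for step in range(101):
    a = a + 0.1 * (W @ a + b)            # hill-climbing in H
    if step % 25 == 0:
        print(step, round(H(a), 4))      # H rises toward its maximum
```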
5 From a wide range of perspectives, the relation between the computational power of neural-
network-like systems and that of Turing machines has been the topic of much literature (see, e.g.,
Smolensky, Mozer, and Rumelhart 1996; Siegelmann 1999; Hadley 2000; Cleland 2002; Copeland
2002, 2003).
… ∃e,x. raced(e;x) & horse(x) & …; this can be glossed 'there is an entity x participating in an event e such that e is a racing event and x is a horse, and [during the event the horse moves from one side of a barn to the other]'. Obviously oversimplified, but sufficient here.
Then another word, fell, is received. This word cannot be accommodated within
the current parse: the new word is a main verb and the existing parse already has a
main verb, race. Reparsing is required, with raced reanalyzed as the passive participle
of a relative clause that has been reduced: the horse [that was] raced past the barn. This entire phrase is then parsed as the subject of the main verb fell: [the horse [that was] raced past the barn] fell.
This new tree structure has a new interpretation, in which the event referred to by the
sentence is an event of falling rather than of racing.
We will now walk through a fairly detailed ICS picture of this process, breaking
the discussion down into six parts. Each part consists of a diagram and a discussion
of the diagram in the context of the general questions and results that underlie the il-
lustration. The diagrams build up to the final one, Figure 9.
In addition to results achieved to date, this discussion mentions a few of the
more immediate open problems for the ICS research program. The intention is to
guide the reader by indicating not only the key results to look for in this book, but
also potential results that are not to be found here.
6In this respect, ICS research is principle-centered rather than model-centered; see Section 3:2.3
and Chapter 22.
[Figure: knowledge of well-formedness at two levels of description. Symbolic level: a grammar of rewrite rules G: {S → NP VP, …}; a set of OT constraints [PARSE, OB-HD, …]; a harmonic grammar HG: {If NP is left child of S, add 2 to H; …; If PARSE is violated, add −wPARSE to H; …}. Connectionist level: the syntax network's weight matrix WG, defining a Harmony function over activation patterns, with parses (a) and (b) as Harmony peaks.]

(Optimality Theory has been applied to an extremely broad set of problems. See Chapter 12 for sources, especially the online Rutgers Optimality Archive, http://roa.rutgers.edu.)
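The harmonic grammar rules shown in the figure above amount to a simple evaluation scheme: each soft constraint contributes its weight, times its number of violations, to H, and the grammatical parse is the Harmony maximizer. A sketch with made-up constraints, weights, and violation counts:

```python
# Sketch of a harmonic grammar as soft, weighted constraints: each rule
# contributes (weight x violations) to the Harmony H of a candidate parse,
# and the grammatical parse is the Harmony maximizer. All values are
# made up for illustration.
rules = [
    ("NP-left-child-of-S", +2.0, lambda p: p["np_left_of_s"]),
    ("PARSE",              -3.0, lambda p: p["unparsed_words"]),
    ("OB-HD",              -1.0, lambda p: p["headless_phrases"]),
]

def harmony(parse):
    return sum(w * count(parse) for _, w, count in rules)

candidates = {
    "(a) main-verb parse":        {"np_left_of_s": 1, "unparsed_words": 1,
                                   "headless_phrases": 0},
    "(b) reduced-relative parse": {"np_left_of_s": 1, "unparsed_words": 0,
                                   "headless_phrases": 1},
}
for name, parse in candidates.items():
    print(name, harmony(parse))
print("winner:", max(candidates, key=lambda n: harmony(candidates[n])))
```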
8.4 Functions
Utilizing the concept of well-formedness, grammars specify an important type of
cognitive function: the outputs of this function are the grammatical or well-formed
representations. The syntactic component of the mental representation of a sentence
has been our working example. Now we shift attention to the cognitive function
mapping the syntactic parse tree to its semantic interpretation. For concreteness, we
are assuming here that a semantic representation can be described at a sufficiently
abstract level as an expression of symbolic logic; equivalently, many other symbolic
semantic representations could be substituted. Of main concern for development of
ICS is the realization of such functions in connectionist networks. Figure 7 portrays
the ICS picture, with the various elements labeled in (21). A version of the general research problem addressed here is formulated in (22); a few technical results and open questions are listed in (23) and (24).
Figure 7. Computing cognitive functions

[Figure: the syntax network (activation pattern asyn realizing the parse tree Ssyn) feeds through a feed-forward weight matrix W into the semantics network (activation pattern asem realizing Ssem = ∃e,x. raced(e;x) & horse(x) & …).]

(21) Elements of Figure 7
a. Symbolic syntactic structure Ssyn (a parse tree)
b. Symbolic semantic structure Ssem (a predicate calculus expression)
c. Symbolic interpretation function f mapping Ssyn to Ssem
d. Activation pattern asyn realizing Ssyn
e. Activation pattern asem realizing Ssem
f. Tensor product realization mapping Ssyn → asyn; Ssem → asem
g. Weight matrix W for a feed-forward network, mapping asyn to asem
h. Systematic mapping f → W between symbolic functions like f and weight matrices like W
(22) General Problem. f → W, f a symbolic function
Find a class F of symbolic (recursive) functions that approximates as closely as possible the set of cognitive functions, and a systematic mapping that takes any function f in F and produces a weight matrix W for a feed-forward network that computes f (i.e., if f maps s to t, then W maps s to t, where s and t are the activation patterns realizing symbol structures s and t according to a given realization mapping).
(23) Results
a. Symbolic PC functions f → weight matrices W, over linear representations [Section 8:4.1.8]
For linear (tensor product) realizations of binary trees, a mapping to weight matrices W for any f in a set of recursive functions generated by the fundamental tree-manipulating operations: the PC functions (Realization Theorem for PC Functions).
b. Finite specification properties of a function f in F and the matrices W realizing f [Sections 8:4.1.7–4.1.8]
Every symbolic PC function consists of a rearrangement around the tree root limited to a finite depth, composed with recursive propagation throughout the unbounded tree; every weight matrix W is a finite matrix propagated throughout the unbounded network by a fixed, extremely simple unbounded feed-forward recursion matrix I (Finite Specification Theorem for PC Functions; Realization Theorem for PC Functions).
c. Local conditional branching [Section 8:4.3]
It is straightforward to implement a kind of conditional branching (if X do Y else do Z) using a connectionist unit; a minimal sketch follows this list.
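A minimal version of the construction behind (23c). The gating scheme here is one standard construction, assumed for illustration: a threshold unit computes the condition, and its activation selects which of two weight pathways drives the output.

```python
import numpy as np

# Sketch of (23c): conditional branching via a gating unit. A threshold
# unit computes the condition; its activation selects which of two weight
# pathways drives the output. The gating scheme is assumed, not the book's.
def step(x):
    return 1.0 if x > 0 else 0.0      # threshold activation

def branch(inp, w_test, W_then, W_else):
    g = step(w_test @ inp)            # g = 1 if condition X holds, else 0
    return g * (W_then @ inp) + (1.0 - g) * (W_else @ inp)

inp = np.array([0.5, -1.0])
w_test = np.array([1.0, 0.0])         # condition X: first input is positive
print(branch(inp, w_test, np.eye(2), -np.eye(2)))   # "if X do Y else do Z"
```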
(24) Open problems
a. Extend F (and its connectionist realization mapping) to a still wider class of recursive functions.
b. Extend the realization mapping of f to W to nonlinear connectionist representations.
c. Find a realization of conditional branching that is distributed, not local.
8.5 Processing
Characterizations, at two levels of description, of several aspects of the syntactic
and semantic components of the mental representation resulting from processing the
horse raced past the barn have been obtained: the specification of the representation itself
in both symbolic and connectionist forms; the characterization of its well-formedness
in terms of a purely symbolic grammar, a harmonic grammar, and the weight matrix
of a recurrent connectionist network; and the specification of the function from syn-
tax to semantics as a recursive symbolic function and as the weight matrix of a feed-
forward net. We now consider the mental processes resulting from the arrival of the
final word of the sentence, fell. A step-by-step account is given in (25) and sketched in
Figure 8.
(25) Illustration (Figure 8)
Parsing; the dynamics computes the optimal representation, given the words so far.
1. A new word arrives: fell.
2. The existing parse (a) is inconsistent with the new word, so must change; the syntactic network must shift to a substantially different state to accommodate this word.
3. The recurrent syntactic network circulates activation for some time after the arrival of fell; the activation values of its units oscillate until a new equilibrium is established.
4. During this time the network is maximizing Harmony, the Harmony landscape having shifted with the arrival of the new word.
5. The time required to process this word, and the probability of failing to do so successfully, are determined by the Harmony maximization process: how quickly and accurately it optimizes.
6. Assuming a successful optimization, when activation settles down after processing fell, the network state will realize a different tree (b), one that incorporates the new word.
7. The new tree is grammatical, since the weights WG in the network are such that optimizing Harmony means satisfying the grammar G (see Figure 5).
8. The overall transition can be described at the symbolic level as moving from one tree to another, but there is no algorithm operating on symbolic constituents that describes the actual state transitions of the system that mediate between the initial and final state. In other words, there is no symbolic algorithm whose internal structure can predict the time and accuracy of processing; this can only be done with connectionist algorithms.
Figure 8. [The syntax network (weights WG) reparsing upon the arrival of fell: the state shifts from the activation pattern realizing parse (a) to the pattern realizing parse (b) (which contains a trace tk), while the connectionist algorithm climbs in Harmony over time.]
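As a toy illustration of (25), far short of the detailed model that, as noted below, is not yet possible, reparsing can be modeled as re-optimization after the Harmony landscape shifts: the new word changes the external input, and the settling time provides a crude analogue of processing time. The weights and inputs are made up.

```python
import numpy as np

# Toy illustration of (25): reparsing as re-optimization. A new word is
# modeled as a change in the external input b, which reshapes the Harmony
# landscape; the network then settles to a new maximum, and the number of
# update steps is a crude analogue of processing time.
A = np.random.default_rng(1).normal(size=(4, 4))
W = -(A @ A.T) / 4 - 0.1 * np.eye(4)          # stable (concave H) weights

def settle(a, b, lr=0.1, tol=1e-8):
    steps = 0
    while True:
        da = lr * (W @ a + b)                 # hill-climbing in Harmony
        a, steps = a + da, steps + 1
        if np.abs(da).max() < tol:
            return a, steps

b1 = np.array([1.0, 0.0, 0.0, 0.0])           # input so far: "...the barn"
b2 = np.array([1.0, 0.0, 0.0, -2.0])          # landscape shift: "fell" arrives

a, t1 = settle(np.zeros(4), b1)               # settle to parse (a)
a, t2 = settle(a, b2)                         # re-settle to parse (b)
print("steps to parse:", t1, "steps to reparse:", t2)
```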
It is not yet possible to formulate a detailed model instantiating this account, but
several results on this general problem (26) are reported in this book; they are sum-
marized in (27). (28) mentions some key outstanding problems.
(26) General problem
Find a characterization of the outcome of connectionist activation processing,
the resource requirements for the connectionist computation (time, number of
units and connections, precision) required to produce an output, and the prob-
abilities of different types of errors; relate all these characteristics to symbolic
descriptions of the computation.
(27) Results
a. In harmonic networks, activation processing maximizes Harmony […]
8.7 Learning
How does the language-processing system depicted in Figure 9 arise? A few results
concerning the nativist hypothesis are presented in the book, although no commitment to (or endorsement of) this hypothesis is manifest in the research program.
Three case studies are listed in (29): (29a) is experimental, (29c) is theoretical, and
(29b) is an application of the theory to spontaneous child production data.
(29) Illustrations
a. Infants' knowledge of universal phonological constraints [Chapter 17]
b. Role of universal syntactic constraints in the earliest acquisition of clause
structure [Chapter 18]
c. Learning an OT syllabification grammar with genomically encoded con-
straints [Chapter 21]
Two central general learning problems of the theory are given in (30). Some results and open problems are listed in (30)–(32).
(30) General problems
a. Given the universal elements of any OT grammar, find a learning proce-
dure that determines the language-specific element: the ranking.
b. Provide a mechanism for learning the universal elements of an OT grammar (representations, constraints, including the most abstract ones) from experience, or provide an account of how these elements may be ge-
netically encoded and how such a genome evolved.
(31) Results
a. Language-particular ranking ← experience with positive data
Constraint demotion algorithms can be defined; these adjust rankings so that optimal forms match observed forms [Chapter 12]. (A bare-bones sketch follows (32) below.)
b. Learnability → initial state
General learnability arguments entail properties of the initial grammar, if knowledge of universal constraints is innate [Chapter 17].
c. Symbolic OT universal grammatical system → HG → abstract neural network architecture → abstract genomic encoding
A demonstration system addressing learning of basic syllabification, formally specified at four levels of description: (i) OT grammar, (ii) HG, (iii) abstract (localist) neural network, and (iv) abstract genome [Chapter 21]
(32) Open questions
a. What types of OT constraints can be learned from experience, and how?
b. How, in general, do symbolic OT learning algorithms relate to neural net-
work learning?
c. Can the demonstration system be adapted to distributed representations?
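As promised in (31a), here is a bare-bones sketch of constraint demotion. The constraints and violation profiles are toy values, each constraint is treated as its own stratum, and Chapter 12 develops the real algorithms.

```python
# Bare-bones sketch of constraint demotion (cf. (31a)): when the current
# ranking prefers a loser over an observed winner, each loser-preferring
# constraint ranked above the highest winner-preferring constraint is
# demoted just below it.
def optimal(candidates, ranking):
    """OT evaluation: lexicographic comparison of violations, top rank first."""
    return min(candidates, key=lambda c: [candidates[c][k] for k in ranking])

def demote(ranking, winner, loser):
    """One constraint-demotion step on a winner/loser pair."""
    top = min((k for k in ranking if winner[k] < loser[k]), key=ranking.index)
    movers = [k for k in ranking
              if loser[k] < winner[k] and ranking.index(k) < ranking.index(top)]
    for k in movers:
        ranking.remove(k)
        ranking.insert(ranking.index(top) + 1, k)
    return ranking

winner = {"PARSE": 0, "OB-HD": 1, "FAITH": 0}   # observed form
loser  = {"PARSE": 1, "OB-HD": 0, "FAITH": 0}   # currently preferred rival
ranking = ["OB-HD", "PARSE", "FAITH"]           # initial (wrong) ranking

while optimal({"winner": winner, "loser": loser}, ranking) != "winner":
    ranking = demote(ranking, winner, loser)
print(ranking)   # -> ['PARSE', 'OB-HD', 'FAITH']: PARSE now dominates OB-HD
```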
In closing, we hasten to reiterate that the above outline provides only a few of the
most important open problems, ones that may fall near the limits of current results,
rather than far beyond. No implication whatever is intended that other problems,
large and small, have been solved. We would also like to repeat that the above out-
line is not expected to be understandable from start to finish on the basis of only
Chapters 1 and 2: it is intended as a reference to be consulted as the necessary con-
cepts are developed in the remaining chapters, hopefully making it easier to fill in the
big picture that has been outlined here.
Figure 9. [The ICS map assembled from the preceding figures. Symbolic level: the OT grammar G: PARSE ≫ OB-HD, learned from DATA, with tableaux before and after fell arrives:

  …past the barn        | PARSE | OB-HD
  ☞ (a)                 |       |
    (b)                 |       |  *!

  …past the barn fell   | PARSE | OB-HD
    (a)                 |  *!   |
  ☞ (b)                 |       |  *

The symbolic algorithm treats empirical domains including phonology (syllables, vowel harmony) and syntax (voice; wh-questions; split intransitivity; reading times). Connectionist level: the syntax network WG settles over time from parse (a) to parse (b), Harmony rising, and the semantic representation shifts from ∃e,x. raced(e;x) & horse(x) & … to ∃e,x. fell(e;x) & horse(x) & ….]
References
ROA = Rutgers Optimality Archive, http://roa.rutgers.edu
Abbott, L., and T. J. Sejnowski, eds. 1999. Neural codes and distributed representations. MIT Press.
Bever, T. G. 1970. The cognitive basis for linguistic structures. In Cognition and the development of
language, ed. J. R. Hayes. Wiley.
Chomsky, N. 1965. Aspects of the theory of syntax. MIT Press.
Churchland, P. S., and T. J. Sejnowski. 1992. The computational brain. MIT Press.
Cleland, C. E., ed. 2002. Special issue: Effective procedures. Minds and Machines 12, 157–326.
Cohen, M. A., and S. Grossberg. 1983. Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Transactions on Systems, Man, and Cybernetics 13, 815–25.
Copeland, B. J., ed. 2002. Special issue: Hypercomputation. Minds and Machines 12, 461–579.
Copeland, B. J., ed. 2003. Special issue: Hypercomputation (continued). Minds and Machines 13, 31–86.
Dolan, C. P. 1989. Tensor manipulation networks: Connectionist and symbolic approaches to
comprehension, learning, and planning. Ph.D. diss., UCLA.
Dolan, C. P., and M. G. Dyer. 1987. Symbolic schemata, role binding, and the evolution of structure in connectionist memories. IEEE International Conference on Neural Networks 1, 287–98.
Eckhorn, R., H. J. Reitboeck, M. Arndt, and P. Dicke. 1990. Feature linking via synchronization among distributed assemblies: Simulations of results from cat visual cortex. Neural Computation 2, 293–307.
Geman, S., and D. Geman. 1984. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence 6, 721–41.
Golden, R. M. 1986. The brain-state-in-a-box neural model is a gradient descent algorithm. Mathematical Psychology 30–31, 73–80.
Golden, R. M. 1988. A unified framework for connectionist systems. Biological Cybernetics 59, 109–20.
Goldsmith, J. A. 1990. Autosegmental and metrical phonology. Blackwell.
Gray, C. M., P. König, A. K. Engel, and W. Singer. 1989. Oscillatory responses in cat visual cortex exhibit intercolumnar synchronization which reflects global stimulus properties. Nature 338, 334–7.
Hadley, R. F. 2000. Cognition and the computational power of connectionist networks. Connection Science 12, 95–110.
Hinton, G. E., J. L. McClelland, and D. E. Rumelhart. 1986. Distributed representation. In Paral-
lel distributed processing: Explorations in the microstructure of cognition. Vol. 1, Foundations, D.
E. Rumelhart, J. L. McClelland, and the PDP Research Group. MIT Press.
Hinton, G. E., and T. J. Sejnowski. 1983. Optimal perceptual inference. In Proceedings of the IEEE
Computer Society Conference on Computer Vision and Pattern Recognition.
Hinton, G. E., and T. J. Sejnowski. 1986. Learning and relearning in Boltzmann machines. In
Parallel distributed processing: Explorations in the microstructure of cognition. Vol. 1, Founda-
tions, D. E. Rumelhart, J. L. McClelland, and the PDP Research Group. MIT Press.
Hopfield, J. J. 1982. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences USA 79, 2554–8.
Hopfield, J. J. 1984. Neurons with graded response have collective computational properties like those of two-state neurons. Proceedings of the National Academy of Sciences USA 81, 3088–92.
Hummel, J. E., and I. Biederman. 1992. Dynamic binding in a neural network for shape recognition. Psychological Review 99, 480–517.