
Reply to Steven Pinker: So How Does The Mind Work?

JERRY FODOR

'Computers can get you speed, but of course they can't think.'
A stockbroker explaining to National Public Radio why he doubts he'll be replaced by a machine.

If you really must have a defense mechanism, I recommend denial. Its special
charm is that it applies to itself, so if it doesn't work, you can deny that too.
Pinker's view is that everything is fine in cog. sci.; or, anyhow, that everything will
be fine in cog. sci. sooner or later; or, anyhow, that it's possible in principle that
everything will be fine in cog. sci. sooner or later; or, anyhow, that I haven't
proved that it isn't possible in principle that everything will be fine in cog. sci.
sooner or later. It seems to me that the first disjunct is just obviously not true; as for
the rest of them, I said nothing in my book to the contrary. That being so, this
reply to Pinker will be brief.

1. Computation
The Mind Doesn't Work That Way (TMD hereafter) didn't try to convince its reader
that computational models of cognition don't/won't work. To the contrary, it
self-consciously preached to the converted, viz. to that portion of the cog. sci.
community (increasingly substantial, I think) that is already worried that something
has gone badly wrong with computational cognitive psychology and wonders what
it might be and how it might be fixed. Pace Pinker, TMD doesn't claim to have
discovered a principled reason why computational approaches will not succeed at
modeling cognition. Rather, TMD offers a diagnosis of a pattern of failures whose
self-evidence it takes to be glaring. In particular, I wanted to suggest some ways in
which intrinsic features of the Classical computational model of mind might fail to
capture crucial aspects of cognition even though Turing's account of computation
underlies it, and there's a sense in which Turing machines can do anything. If it's
your view that everything is OK with Classical computational psychology, then

Address for correspondence: Department of Philosophy, Rutgers, The State University of
New Jersey, PO Box 270, New Brunswick, NJ 08903-0470, USA.
Email: jerry.fodor@verizon.net
Mind & Language, Vol. 20 No. 1 February 2005, pp. 25–32.
© Blackwell Publishing Ltd. 2005, 9600 Garsington Road, Oxford, OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA.


TMD offers nothing that is intended to persuade you otherwise. But I do think you
might have your glasses checked.

In what sense does Turing's account of computation underlie the Classical
notion of mental processing? Or how, to put it in TMD's terms, does the Classical
picture presuppose that the mind is interestingly like a Turing machine? Pinker
thinks that TMD thinks that the cognitive mind has, literally, a Turing machine's
architecture; for example, that the human memory is like a tape divided into
squares. He rightly finds that view hard to credit. In fact, it's so hard to credit that
it hadn't occurred to me to warn against that way of reading TMD. I'll know better
next time. There are, however, a number of passages (some of which Pinker
quotes) in which TMD says what it does think that Turing machines have
interestingly in common with the kinds of computational models that Classical cog. sci.
has adopted: First, all mental processes are supposed to be causally sensitive to, and
only to, the syntax of the mental representations that they are defined over; in
particular, mental processes aren't sensitive to what mental representations mean.
This is, I think, at the very heart of the Classical account of cognition, so TMD
was, and I continue to be, surprised by how resolutely Pinker scants it. And
second, that mental processes are sensitive only to the local syntactic properties of
mental representations; in particular, to the identity and arrangement of their
constituents. The constituent structure of a mental representation (together with
whatever can be defined in terms of its constituent structure) is all that a Classical
machine can 'see' when it looks at an object in the domain of its computations.

This is important because not all the syntactic properties of a symbol are, in that
sense, local; all sorts of relational syntactic properties aren't, including, in particular,
global properties like being the simplest of the available solutions to a computational
problem.¹ So, if you insist that cognition has a Classical architecture, you will
require local surrogates for whatever global properties of mental representations
you take to be computationally salient. That is the course that Classical cog. sci.
pursues when it proposes local heuristic processes as solutions for globality
problems. For example, at one point Pinker offers a list of the heuristics that
real people use for making investment decisions. They include: asking one's
brother-in-law, doing what some slick brochure suggests . . . and so forth. At no
point does Pinker mention, as an investment strategy, doing some thinking about
the stock market (or, if you're like me, paying somebody else to do some thinking
for you). This omission is striking but not surprising. Prima facie, thinking is the
aspect of decision making that is most responsive to global properties of belief
systems; so it's the aspect of decision making that Classical cog. sci. hasn't got a clue
about.
There are those who believe that the heuristic strategy approach to cognition has
been successful, or promises to be successful in the foreseeable future. But, actually,

¹ I assume, concessively, that global properties of mental representations like relative simplicity
can be syntactically characterized at all. If not, then so much the worse for Classical cog. sci.


Pinker isn't among them. In fact, when he gets around to considering the problems
about globality, abduction, frames and the like (the ones that TMD offered as
prima facie objections to the Classical architecture) his response is to give up on
Turing-type, syntactically driven machines in favor of 'constraint satisfaction'
models. Or, rather, in favor of some (currently unavailable) hybrid of the two.
'Unless such hybrids are impossible in principle, which Fodor has not shown, his
arguments about the limitations of Turing machines . . . are irrelevant.' Let's see.
I argued that, because Turing-style computations are intrinsically local, they have
globality problems; and that, because of their globality problems, we may have to
abandon the Classical kind of cognitive architecture in favor of some alternative,
thus-far-unspecified, model of cognition. Pinker replies that, because of their
globality problems, we may have to abandon the Classical cognitive architecture
in favor of some thus-far-unspecified hybrid about which he is prepared to claim
only that it is possible in principle. Flying pigs are also possible in principle;
'possible in principle' bakes no bread. Aside from that, Pinker's description of the
current situation in cog. sci. actually doesn't differ from mine: we both think
Classical architectures very likely can't model cognitive processing in general; and
we both think that nobody knows what to replace them with. The difference is that
Pinker is relentlessly cheerful in the face of all this bad news. Denial works!

By the way, TMD did consider constraint satisfaction architectures as ways of
coping with globality problems. The consideration was brief because I thought
(and still do) that the prima facie objections to such proposals are so obvious as not
to require much discussion. The notorious problem with constraint satisfaction is
that, insofar as it is able to achieve globality, it does so at the price of holism.
Holistic models of thought lack all sorts of properties that are strikingly
characteristic of human cognition (see, for example, the discussion of 'transportability'
in TMD). The puzzle par excellence about our cognition is that it manages,
somehow, to be global but not holistic. I wish I knew how it does that, but
I don't. Nor does Pinker. Nor does anybody else.

This line of thought presupposes that there are, indeed, global processes in
cognition. Well, are there? And, if there are, in what kinds of cognition are they
found? I took scientific abduction as my paradigm for globality, but Pinker doesn't
like that. He objects that science is an inherently collective (social) process, hence
quite different from what goes on in individual minds. I don't know whether, or in
what sense, this may be so. I would have thought that scientific thinking takes
place in individual scientists' heads; it's mostly the funding that's a social
construction. In any case, Pinker offers no account of how global abduction at the social
level might emerge from local cognition at the individual level. Perhaps he intends
to eventually, but I am not holding my breath. Suppose, however, I'm wrong
about this; if so, so be it. I can make do with all the cases in which AI has failed to
provide a serious account of a routine cognitive capacity; like, for example, getting
across a busy street without being run over; or figuring out where to have lunch;
or following an argument (to say nothing of inventing one), or playing
chess; or . . . etc. Simulations of cognition are, in effect, experimental tests of the
architectural assumptions of the theories they embody. So far, when the
architectural assumptions of Classical cog. sci. have been applied to the problems of
commonsense cognition, the experiments haven't come out well; adjectives like
'appalling' spring to mind. Having said that, I should add that if Pinker really
doubts that human beings have a power of reliable abductive inference that is
better than heuristic, then he should, by all means, continue his search for the
heuristics that they aren't better than. Better him than me.

2. Modularity
Pinker thinks that the mind consists of several bundles of more or less specialized
problem solving systems. In fact, he thinks he might get by with some two dozen
emotions and reasoning faculties. He gives no grounds for believing this (it strikes
me as preposterous on the face of it) except that some psychologists have shown
that people automatically interpret certain patterns of moving dots as agents that
seek to help and hurt each other. This is a lovely example of how a commitment
to local and heuristic models of cognitive processing leads to endorsing superficial
solutions for deep problems. How much of your routine behavior vis-à-vis your
conspecifics (or vis-à-vis your pets, come to think of it) do you actually suppose
could be recognized as social interactions by a machine that knows about nothing
except dots in motion? (Think about phoning your optometrist to make an
appointment for some time early next week. Which way do the dots move
when you do that?) Of course, 'spatiotemporal trajectories are not the only or
even the primary way that people recognize cognitive domains such as social
exchange'. Right. What are the modular proposals for the other ones?

It's ever so easy, in the excitement of heuristic programming, to lose track of
how very, very abstract (how distant from raw psychophysics) categories like
'social interaction' really are. And it's all the easier if one assumes, in the spirit of
closet empiricism, that cognitive categories must eventually reduce to perceptions,
and perceptions to sensations. If you find yourself coming down with this
propensity, I strongly recommend remedial reading in Henry James.
Modularity is quite a good thing; I have myself endorsed it from time to time.
But there are two traditional, prima facie objections to an architecture of massive
modularity; i.e. to a mind that's made of modules alone: (1) How could such an
architecture be sensitive to global properties of belief systems (see above)? And (2)
Who puts all the modules' outputs together to produce integrated beliefs? Pinker's
suggestion is, as far as I can tell, entirely without content: 'the mind is a network of
subsystems that feed each other in criss-crossing but intelligible ways'. That is to say
no more than: it must happen somehow or other. The breeze you feel is as of
hands being waved.

The trouble, I think, is that Pinker hasn't grasped the difference between the
highly tendentious empirical thesis that cognitive mechanisms are typically modular,

and the merely metaphysical thesis that they are typically functionally individuated.
For better or worse, the latter claim is generally not at issue in cog. sci. discussion;
the functional (as opposed to, say, the neurological) individuation of the mental is
the standard, common-ground ontology of the discipline (except on the West
Coast). Functionalism could be wrong, of course; but it's orthogonal to the issues
about modularity, which largely concern encapsulation and domain specificity,
and which remain wide open whether or not it's assumed that mental individuation is
functional. Correspondingly, massive modularity is the claim that cognitive
processes are effected, more or less exhaustively, by encapsulated, domain specific
mechanisms. Pinker says that, for him, modularity is about functional individuation
(and not encapsulation, domain specificity and the like). But I doubt that he means
what he says. If he does, then his usage is very eccentric. For example, according to
that way of talking, a Turing machine would be a massively modular processor so
long as tape, scanner and the like are functionally defined; indeed, merely to
draw a flow chart is to endorse a modular architecture as Pinker understands the
notion, since the boxes in a flow chart are labeled by specifications of their functions.
Confusions of modularity with functional individuation have embarrassed the cog.
sci. literature for several decades now; it really is time to stop. All modules are
boxes, but not vice versa.

Because the modularity thesis (massive or otherwise) is an empirical claim about
how cognition actually works, its plausibility depends on how well it accounts for
the empirical properties that cognition actually exhibits; in particular, on how well
it accounts for the globality and integration of belief systems. So far as I know,
there's only one relevant proposal in the massive modularity literature: global
integration is achieved by heuristic approximations (see above). As a philosophy
friend of mine likes to say at this sort of juncture: believe it if you can. I can't.

3. Truth
It's usual for massive modularity theorists to claim that there is no domain-neutral
characterization of the teleology of (cognitive) processes as such; rather, what
cognition aims at is just the union of whatever the different cognitive modules
aim at. The traditional alternative is that cognition aims at truth; truth is, as it were,
cognition's proprietary virtue. Pinker doubts that this latter view can be right. After
all, he reminds us, nobody is interested in truth per se; the truths that creatures care
about are ones that successful actions turn on; and, by assumption, those are the
truths that modules are able to detect.

That sort of thing gets said again and again in the massive modularity literature,
but it relies on a false dilemma. Right: creatures want to act successfully. But that's
quite compatible with their achieving what they want by a division of mental labor
according to which cognitive processes are specialized to deliver truths and decision
processes are specialized for figuring out what to do in the sort of world that
cognition reports. It is simply not a reply to this proposal to remark that there are

many truths we don't care about, or that we have many false beliefs; or even that
we have many false beliefs in which we stubbornly persist. The claim at issue is
about what cognition aims at; it's about what cognition contributes to the overall
process of choosing and integrating actions. But there's a lot more to deciding
how to act than what cognition contributes; it has to be sensitive to considerations
of prudence, relevance, computational cost, probable pay off and so forth. It is, to
be sure, possible to imagine a kind of architecture in which the mechanisms that
find out how things are aren't distinct from the ones that decide what to do; that's
precisely how a reflex works. Quite possibly, distinguishing thought from action is
the cost of a mind's doing better than a reflex (see Hamlet for discussion). Truth
alone doesn't determine the course of action; how could it? But that doesn't deny
that cognition per se is interested in truth per se. That man does not live by bread
alone isn't a reason to doubt that there are bakers.

This line of argument has been offered about a zillion times against the sort of
pragmatism about cognition that Pinker takes for granted. I know of no response
on behalf of the pragmatists. I, for one, am tired of making it and I now propose to
cease to do so. Enough is enough.

4. Consilience
TMD agrees that cognitive psychology might have interesting implications for
brain science (some day), and even that brain science might have interesting
implications for cognitive psychology (some day). What it denies is that consilience
(mutual relevance) is an a priori condition for the adequacy of scientific theories. (If
Pinker doesn't think that there are people who think that it is, he should read the
eponymous book.) Pinker cites the increasing proliferation of hyphenated scientific
disciplines as evidence that (what used to be called) the unity of science program
is still alive and well. Maybe. Or maybe it just shows that Deans have noticed that
hyphens get grants.

5. Selection
Pinker thinks that TMD thinks that the logical independence of the notion of
biological function from the notion of Darwinian selection is somehow in and of
itself an argument for accepting a selectionist account of the former. But it isn't.
What it argues is, just as Pinker says, that natural selection is a falsifiable scientific
explanation of how biological functionality arises. That being so, it's pertinent to
ask for evidence that natural selection is (ever/sometimes/always) the
right explanation of biological functionality. In particular, TMD argued, it's
pertinent to ask for evidence specific to selectionist accounts of the functionality of cognitive
mechanisms, since there are plausible reasons why their phylogeny might be different
from that of other sorts of heritable phenotypic traits. Pinker declines to provide

such arguments. Rather, he announces, just by fiat, that the only alternative to
a selectionist account of adaptive fitness, cognitive or otherwise, is 'deliberate
engineering by a deity or extraterrestrial; some kind of mysterious teleological
force . . .'. Hence, 'from a scientist's perspective' functionality without natural
selection is 'unacceptably incomplete'. Which scientist, one wonders? And whence
Pinker's license to legislate, in respect of these hard questions, what is to count as
the scientifically acceptable?

This is a long argument, and nobody knows how it will finally turn out. Suffice
it to get clear what's at issue. Pinker claims, perhaps without quite understanding
that he does so, that the phylogeny of adaptive complexity must involve the
operation of feedback. That's because natural selection can 'see' those effects, and
thereby can shape, over generations, just those developmental variations that
enhance them. Irony abounds here. Fifty years or so ago, exactly that was claimed
about the adaptive complexity of learned behaviors: Consider, so the story went,
the complexity of language; either it's a miracle or it must be the consequence of
meticulous shaping by socially mediated reinforcement. In the event, however,
the options learning theory or theology proved not to be exhaustive. It seems that
there are cases where behavioral adaptivity is achieved not by shaping but by
throwing a switch ('setting a parameter', as one says these days). There are, to be
sure, constraints that decide what behavioral phenotype a creature eventually
exhibits. But they aren't primarily the external constraints that the environment
imposes on the success of its behavior. Rather, they're internal constraints
that biology imposes on the range of behavioral phenotypes that a kind of creature
can exhibit. So, anyhow, the new story goes. What's clear, in retrospect, is that
such matters can't be settled a priori; they're empirical, not methodological. In
particular, they are not to be settled by rhetorical claims about what constitutes
respectable science or which explanations are to count as complete. NOBODY
GETS TO PREEMPT THE SCIENTIFIC METHOD; NOBODY GETS TO
PREEMPT THE SCIENTIFIC WORLD VIEW. Psychology isn't an a priori
inquiry; what it wants is not respectability but truth.
6. So, how does the mind work?
I don't know. You don't know. Pinker doesn't know. And, I rather suspect, such
is the current state of the art, that if God were to tell us, we wouldn't understand
him.
Afterthought
I almost forgot: TMD said that 'frame problem' doesn't appear in Pinker's index. That
was untrue, and I hereby blush for having said it. In fact, there are two entries for
'frame problem' in Pinker's index (Star Trek, by comparison, gets seven). Here is
the complete text: 'Only when artificial intelligence researchers tried to duplicate
common sense in computers, the ultimate blank slate, did the conundrum, now
called the frame problem come to light.' (p. 15, Ch. 1) 'Any true statement can
spawn an infinite number of true but useless new ones. (This is an example of the
frame problem introduced in Chapter 1)' (p. 335, Ch. 5).
Rutgers University
New Brunswick, NJ
