Philosophical Explorations, 2018
Vol. 21, No. 2, 245–263, https://doi.org/10.1080/13869795.2018.1477982
This paper is part of the special issue “Enactivism, Representationalism,
and Predictive Processing” guest edited by Krzysztof Dołęga,
Luke Roelofs, and Tobias Schlicht.

Predictive minds and small-scale models: Kenneth Craik’s contribution to cognitive science

Daniel Williams*

Faculty of Philosophy, University of Cambridge, Trinity Hall, UK


(Received 14 May 2018; final version received 14 May 2018)

I identify three lessons from Kenneth Craik’s landmark book “The Nature of
Explanation” for contemporary debates surrounding the existence, extent, and nature
of mental representation: first, an account of mental representations as neural
structures that function analogously to public models; second, an appreciation of
prediction as the central component of intelligence in demand of such models; and
third, a metaphor for understanding the brain as an engineer, not a scientist. I then
relate these insights to discussions surrounding the representational status of
predictive processing – which, I argue, provides a contemporary vindication of
Craik’s extremely prescient “hypothesis on the nature of thought.”
Keywords: predictive processing; predictive coding; enactivism; mental representation;
Craik; representation wars

1. Introduction

… the fundamental feature of neural machinery – its power to parallel or model external
events. (Craik 1943, 52)

Are there such things as mental representations? If so, how pervasive are they? What form
do they take?
These questions define what might reasonably be called the “representation wars” in
cognitive science and philosophy, a divisive and seemingly intractable set of debates con-
cerning the existence, extent, and nature of mental representation (Clark 2015; Williams
2017). In this paper, I articulate and defend a theoretical framework for approaching
such questions first advanced in Kenneth Craik’s (1943) extremely prescient but little-
known “hypothesis on the nature of thought.” In a slogan, the hypothesis was this: the
mind is a predictive modelling engine. Specifically, Craik (1943) argued that “one of the
most fundamental properties of thought is its power of predicting events,” which gives it
its “immense adaptive and constructive significance” (50), and which is made possible
because an organism “carries a ‘small-scale model’ of external reality and of its own poss-
ible actions within its head” (61) – a small-scale model “which has a similar relation-struc-
ture to that of the process it imitates” (51).1

*Email: dw473@cam.ac.uk

© 2018 Informa UK Limited, trading as Taylor & Francis Group



In an oft-quoted passage, he speculated:

If the organism carries a “small-scale model” of external reality and of its own possible actions
within its head, it is able to try out various alternatives, conclude which is the best of them, react
to future situations before they arise, utilise the knowledge of past events in dealing with the
present and future, and in every way to react in a much fuller, safer, and more competent
manner to the emergencies which face it. (Craik 1943, 61)

There are at least two things that make Craik a fascinating figure to explore in the context of
the contemporary representation wars. First, he wrote in a period of maximal scepticism
about the concept of mental representation both in psychology (Skinner 1938; Watson
1913) and philosophy (Ryle 1949; Wittgenstein 1953). Drawing on insights from pragma-
tism and control theory, Craik’s hypothesis was thus deeply heretical in the intellectual
climate in which he advanced it (Gregory 1983). It is interesting, then, to consider the
motivation for this heresy in a context in which scepticism about internal representation
in psychology is on the rise once again (Anderson 2014; Chemero 2009; Hutto and
Myin 2013; Ramsey 2007). Second, recent years have seen an explosion of work in both
philosophy and the cognitive sciences that rediscovers many of Craik’s insights: the central-
ity of prediction to intelligence (Barr 2011; Bubic, von Cramon, and Schubotz 2010; Llinás
2001); an understanding of mental representations as structural models (Cummins 1989;
Gładziejewski 2015; Grush 2004; Ryder 2004; Shea 2014; Williams and Colling 2017);
and an increased focus on regulation and predictive control as principles of brain function-
ing (Barrett 2017; Sterling and Laughlin 2015). Nowhere is this more evident than in pre-
dictive processing (Clark 2016; Friston et al. 2017; Hohwy 2013), a recently emerging
framework in cognitive neuroscience that presents a radical contemporary embodiment
of Craik’s understanding of intelligence (see Section 5). It is interesting, therefore, to
explore how Craik himself understood the implications of the “predictive mind” (Hohwy
2013).
Nevertheless, Craik’s work is not merely of historical interest. Cognitive science and
philosophy are belatedly discovering insights from that work, but they would benefit
from discovering more – or so I argue here. Specifically, I do two things. First, I
outline three insights from Craik’s work as they apply to contemporary debates sur-
rounding mental representation: first, an understanding of mental representations as
neural structures that function analogously to public models (Section 2); second, an
appreciation of prediction as the central component of intelligence in demand of such
models (Section 3); and third, a metaphor for understanding the brain as an engineer,
not a scientist (Section 4). Second, I relate these insights to contemporary discussions
surrounding the representational status of predictive processing (Section 5). I argue
that Craik offers an attractive middle-path between the view that the sub-personal mech-
anisms in predictive brains implicate no internal representations (e.g. Anderson and
Chemero 2013; Bruineberg, Kiverstein, and Rietveld 2016; Hutto 2017) and the view
that they implicate truth-evaluable hypotheses about states of the world (e.g. see
Clark 2013; Hohwy 2013; Kiefer and Hohwy 2017).

2. “Small-scale models”
The first insight that I will attribute to Craik is this:

(Insight #1) Mental representations are neural structures that function analogously to public
models.
As noted in Section 1, this model-based view of mental representation is experiencing
something of a renaissance in contemporary philosophy (Cummins 1989; Gładziejewski
2015; Grush 2004; Ryder 2004). Nevertheless, there are features of Craik’s account of
mental representation that distinguish it from most of these recent incarnations. To
unpack these differences and extract their theoretical significance, it will be useful to
situate Craik’s work in the context of maximal scepticism about mental representation
that prevailed in both psychology and philosophy when he wrote “The Nature of
Explanation.”

2.1. Anti-representationalism in behaviourism


In psychology, the dominant (although by no means exclusive) research programme –
especially in North America – was scientific behaviourism as practiced by figures like
John Watson and B.F. Skinner. The defining feature of this research programme was its
commitment to explaining all animal behaviour in terms of classical and operant condition-
ing and the associative mechanisms inside animals’ nervous systems that mediate such con-
ditioning (Skinner 1938; Watson 1913). It is often said that what most sharply differentiates
this approach to intelligent behaviour from that which superseded it was its wholesale rejec-
tion of internal representations in psychological explanations (e.g. Bechtel 2008, 159).
What motivated this rejection?
It would be impossible to do justice to the many reasons underlying the behaviourist
antipathy to representations here. Nevertheless, I think that one can identify the following
three general sources of scepticism as the most significant driving forces underlying that
antipathy.
The first was a simple consequence of its commitments concerning the mechanisms
underlying behaviour: if all behaviour is a function of conditioning, mental representations
are unnecessary (see Section 3). This account of behaviour was itself motivated by broader
theoretical commitments, however. The most important of these – the second source of
scepticism – was the conviction that content-bearing mental states are somehow scientifi-
cally disreputable. The arguments for this claim were heterogeneous and not always coher-
ent (Dennett 1978, 53–70). The core idea, however, was that the broadly positivist
methodology underlying behaviourism required the elimination of unobservable entities
and sources of justification such as folk psychology and introspection as guides to the
mechanisms underlying intelligence (Watson 1913). Given that mental representations
seem to be unobservable entities par excellence, and seemed to receive most of their
support throughout the history of philosophy and psychology from a mixture of publicly
inaccessible introspective data and folk-psychological reflection, they were thus the most
conspicuous casualty of behaviourism’s methodological strictures.
These two considerations interacted with a third source of scepticism, which was broadly
conceptual. On this view, populating the mind (or brain) with mental representations invites
either regress or incoherence. Specifically, because representations require agents to interpret
them, postulating mental representations requires postulating an in-the-head agent to go along
with them – mental homunculi that would have to be as intelligent as the very behaviours that
psychologists were trying to explain. As Skinner (1971, 200) put it, “Science does not dehu-
manize man; it de-homunculizes him … Only by dispossessing him can we turn to the real
causes of behavior” (my emphasis). Far more sophisticated forms of this general worry
were advanced by both Ryle (1949) and Wittgenstein (1953), both of whom sought to under-
mine the idea that content-bearing mental states should be thought of as causally efficacious
internal states underlying intelligent behaviour.

Craik must have been aware of these critiques of mental representation. He was a
trained psychologist and philosopher at the University of Cambridge where Wittgenstein’s
influence dominated. Nevertheless, he evidently thought that non-representational psychol-
ogy was bereft of the theoretical tools to explain core characteristics of intelligence. As
such, he must have thought that such criticisms missed their mark – that postulating
mental representations can be conceptually coherent, scientifically reputable, and necessary
for the causal explanation of intelligence.

2.2. Mental models as functional analogues of public models


To understand why Craik thought these things, it will be useful to decompose the insight
outlined above into two component claims. The first answers a deep question: what does
it mean to attribute mental representations? Specifically, under what conditions is this attri-
bution legitimate? Craik’s work provides at least an implicit answer to this question:

(Insight #1.1) There are mental representations if the structures implicated in the production of
our psychological capacities function comparably to public representations.

Craik takes it as given that the mechanisms underlying intelligent behaviour and our mental
capacities are neural (60). Like the behaviourists, he does not take seriously the possibility
that scientific psychology could be founded on any form of dualism. If there are mental rep-
resentations, then, they must be components of such mechanisms. What kind of com-
ponents? Craik’s answer is disarmingly simple: they will be neural structures that
perform similar functional roles to the public representations that we are familiar with
from engineering and physical science – specifically, the public representations that help
us to predict and control our environments.2 As Craik (61) puts it:

Most of the great advances of modern technology have been instruments which extended the
scope of our sense-organs, our brains or our limbs. Such are telescopes and microscopes, wire-
less, calculating machines, typewriters, motor cars, ships and aeroplanes. Is it not possible,
therefore, that our brains themselves utilise comparable mechanisms to achieve the same
ends and that these mechanisms can parallel phenomena in the external world as a calculating
machine can parallel the development of strains in a bridge? (my emphasis)

In other words, Craik reasons by analogy. First, he argues that we can only predict and
control our environments effectively through the construction and manipulation of
public representations – that this is the “one way that ‘works’ … with which we are familiar
in the physical sciences” (53). Given how central prediction and predictive control are to
adaptive behaviour more generally, then (see Section 3), Craik infers that neural mechan-
isms must function analogously to such public representations – that “it does not seem over-
bold to consider whether the brain does not work in this way – that it imitates or models
external processes” (53, my emphasis). As Gregory (1983, 236) puts it, “Craik supposes
that hundreds of millions of years ago brains incorporated technologies we have invented
over the last few centuries.”
In one sense, there is nothing original about this part of Craik’s approach. The idea of under-
standing mental goings-on by analogy to public technologies of the time is plausibly as old as
philosophy itself. Indeed, Aristotle’s (De Anima [1984]) suggestion that we understand the
mind-world relation by analogy with the imprint that a signet ring leaves on wax to mark
the provenance of a letter is a perfect – if extremely primitive – example of this approach.
Further, at first pass, this suggestion looks hopeless as a response to the conceptual worries
about mental representation raised above: don’t public representations essentially require
interpretation? As such, how can positing in-the-head analogues of public representations help?
To understand Craik’s answer to this question, one must turn to the kind of public rep-
resentation that he draws on as the appropriate analogy for understanding mental represen-
tation – namely, models.
Historically, the two canonical accounts of mental representation appealed to the fol-
lowing analogies: pictures or images (e.g. Aristotle; Locke; Berkeley) on the one hand,
and arbitrary symbols of the sort that feature in mathematics and formal logic on the
other (e.g. Hobbes; Leibniz; Boole) (see Waskan 2006). Whilst Craik does pay some atten-
tion to symbols, the main representational medium that he focuses on is the physical
model. His favourite example is Lord Kelvin’s tide-predictor, designed in
the late nineteenth century (and then improved upon into the twentieth century) to
predict the behaviour of sea tides (50). Unlike pictures or images, such models need not
look anything like the domains or processes that they represent. On the other hand, they
are clearly not sets of sentences or propositions either. Instead, models are forms of rep-
resentation that capitalise on structural similarity to their targets (Giere 2004; Godfrey-
Smith 2006b; Ryder 2004).3 As Craik (51) puts it, a model is “any physical or chemical
system which has a similar relation-structure to that of the process it imitates.”

By “relation-structure” I do not mean some obscure non-physical entity which attends the
model, but the fact that it is a physical working model which works in the same way as the
process it parallels … Thus, the model need not resemble the real object pictorially; Kelvin’s
tide-predictor, which consists of a number of pulleys on levers, does not resemble a tide in
appearance, but it works in the same way in certain essential respects … (51).

It is this emphasis on the function and characteristics of models that leads directly to the
second component of Craik’s first insight and that enables Craik to answer the foregoing
worries about interpretation. Specifically, Craik’s view is that it is possible to isolate the
core functional profile of “physical working models” in abstraction from considerations
concerning content and interpretation:

(Insight #1.2) The defining functional characteristic of public models is the exploitation of
structural similarity to guide prediction and control.

As such, exploiting the similarity – the shared “relation structure” – between a physical
structure and a domain to effectively predict and control the behaviour of that domain
just is to model that domain. Nothing more is required.
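To make this functional profile concrete, the following is a deliberately toy sketch (in Python, with invented harmonic constants; Kelvin’s actual device was, of course, mechanical) of a structure that predicts by sharing the tides’ harmonic relation-structure without resembling a tide in any other respect:

```python
import math

# Toy tide-predictor: like Kelvin's machine, it shares the tide's
# harmonic "relation-structure" (a sum of sinusoids) without looking
# anything like a tide. The amplitudes (m), angular speeds (rad/h),
# and phases below are invented purely for illustration.
CONSTITUENTS = [
    (1.2, 0.506, 0.0),
    (0.4, 0.524, 1.1),
    (0.2, 1.012, 2.3),
]

def tide_height(t_hours):
    """Predict tide height at time t by running the model forward."""
    return sum(a * math.cos(w * t_hours + p) for a, w, p in CONSTITUENTS)

# The shared structure answers questions about times never observed:
print(round(tide_height(6.0), 3), round(tide_height(1000.0), 3))
```

Nothing in this structure carries satisfaction conditions; it simply re-presents the relevant relations in a form that is cheaper and quicker to manipulate than the sea itself.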
As noted above, this emphasis on the exploitation of structural similarity makes Craik
an important precursor of a structuralist approach to mental representation that has recently
received a large body of support in philosophy (Cummins 1996; Gładziejewski 2015; Shea
2014; Williams and Colling 2017). In line with this recent work, Craik thought that mental
representation occurs when the brain exploits the structural similarity between its internal
models and the world to guide prediction and intervention. However, if I am right, there
is potentially an important difference as well. Specifically, Craik did not appeal to structural
resemblance to answer what Von Eckardt (2012) calls the “content determination problem”:
namely, the problem of explaining what determines the contents (i.e. satisfaction con-
ditions) of mental representations. Rather, Craik evidently thought that the exploitation
of structural similarity in the service of prediction and intervention just is mental represen-
tation, irrespective of questions concerning content.4

There are three considerations that weigh heavily in favour of this interpretation.
First, in Craik’s treatment of mental models, there is no discussion of content as contem-
porary philosophers would understand it, and he explicitly states – as noted above – that a
model is just “any physical or chemical system which has a similar relation-structure to that
of the process it imitates” (51). Craik is concerned with how a specific kind of mechanism
facilitates a specific practical goal – namely, prediction (see below) – and it is clear that he
thinks the mechanism just is structural similarity exploited in the service of prediction.
Given that appeals to content are supposed to codify the widespread intuition that represen-
tations must be answerable to the world – that they must be capable of misrepresentation in
virtue of possessing satisfaction conditions that might not be satisfied (Fodor 1987;
Rescorla 2016) – Craik’s account of mental representation effectively disavows this intui-
tion. Interestingly, a similar view has recently been defended by Isaac (2013, 701), who
writes that “I do not take the possibility of misrepresentation as constitutive of represen-
tation. Rather, the necessary feature for a structure to constitute a representation is that it
present again information about its target.” A structural model is thus a representation
“in the very intuitive sense of the re-presentation of information” (Isaac 2013, 692; see
also Cummins 1996; Williams 2018a).
Second, Craik is wholly unconcerned with representational success (see Section 2.3) or
what might be thought of as representational normativity – that is, evaluating a represen-
tation as a representation. His focus is entirely on how models facilitate practical functions
– for example, predicting and controlling the tides, or shooting down hostile aircraft. As Horst
(2016, 165) puts it,

Craik’s interest [in mental models] … seems to have been not with semantics … but with adap-
tive cognition: if structural similarities between a model and the system modelled are important, it
is because this allows the model to function as a mental surrogate for the target system …

Third, and most importantly, identifying the core functional role of public models in
abstraction from content would make it clear for Craik’s contemporary audience how his
approach could answer the kinds of anti-representationalist arguments outlined above.
Put simply, Craik’s idea is that the essence of mental representation is re-presentation,
not judgement: one structure representing the “relation structure” of a domain or process
in a form that is “cheaper, or quicker, or more convenient” (51), and can thus be used
for effective prediction. If this is right, one need not explain how such mental models
come to make claims about how the world is, and thus one need not rely on a homunculus
to interpret such models as claims about how the world is. If all there is to mental represen-
tation is the reliance on structural similarity to guide prediction and control, worries about
homunculi simply evaporate.5
If this is right, the upshot of Craik’s approach to mental representation is impressive. It
shows how the attribution of mental representations is coherent, consistent with material-
ism, and scientifically reputable, and it offers a straightforward way of operationalising
debates about the existence and extent of mental representation in a form that science
can be brought to bear on: are there things inside the head that enable organisms to effec-
tively anticipate the behaviour of different domains by recapitulating the structure of those
domains?
One might object to the foregoing proposal by simply putting one’s foot down and
declaring that any talk of models or representation without content is conceptually incoher-
ent. As noted above, it is something of a dogma in contemporary philosophy that represen-
tation requires misrepresentation and my interpretation of Craik’s insight would contradict
this view. I doubt that this dogma captures anything like an analytic truth, however. Rather,
I think that it focuses on just one form of representation – namely, something like judge-
ments (states that make a claim that certain conditions obtain) – and then mistakenly gen-
eralise the properties of such representations to all representations (Godfrey-Smith
forthcoming; Isaac 2013; Williams 2018a). Nevertheless, nothing much hinges on the ter-
minology here. If one wants, one can simply stipulate that models require content. If so, one
could replace the foregoing talk of models with talk of “schmodels” and Craik’s hypothesis
would be just as interesting: an organism’s ability to predict and control its environment is
dependent on neural structures – schmodels – that recapitulate the structure of environ-
mental domains and processes. Nevertheless, this does not strike me as a useful stipulation.
Specifically, it obscures the important functional similarities between what goes on within
the head and the public models with which we are familiar.6
Of course, we do evaluate models. We ascribe contents to them and use them to
make judgements. Terms like “truth” or “falsity” are generally inapt for models, but
we do nevertheless evaluate their accuracy – a continuous variable that tracks something
like the degree of correspondence between the model and a domain (Giere 2004;
Godfrey-Smith 2006b). Nevertheless, the fact that we evaluate models in this way
does not mean that what it is to be a model is to have accuracy conditions. We should
be open to the possibility that evaluating representations as representations – that is,
evaluating “representation for representation’s sake” (Wilson 2002) – is a relatively
recent cultural practice among language-using hominids, and that the main function of
models throughout most of evolutionary history is purely pragmatic – to enable organ-
isms to cope with their environments by re-presenting their structure in a form that can
be exploited to guide prediction and control (see Williams 2018a).

2.3. Why model the world?


This understanding of representation is heretical in the context of contemporary discussions
of mental representation. Nevertheless, for the reasons just enumerated, I think that it has
several virtues. Indeed, in Section 5, I argue that it provides a neat way of accommodating
what is important on both sides of contemporary debates surrounding the representational
status of cognition.
Before that, however, I turn to a different question. Suppose that one grants that there
could be such things inside the head that represent the relational structure of worldly
domains. What is the value of such models? Why should we think that they play the exten-
sive role in intelligence that Craik alleges? Certainly, the reigning behaviourist approach of
Craik’s day had no need for them (see below). Craik is exquisitely sensitive to this worry:

Some may object that this reduces thought to a mere “copy” of reality and that we ought not to
want such an internal “copy”; are not electrons, causally interacting, good enough? (81–82)

As should be clear from the foregoing remarks, Craik’s answer to this question is simple: we
need an internal “copy” of reality because we need to predict:

The answer, I think, is … that only this internal model of reality – this working model – enables
us to predict events which have not yet occurred in the physical world, a process which saves
time, expense, and even life. (82)

This brings us to the second of Craik’s major insights.



3. The predictive mind


Craik’s claim that the brain contains small-scale models of external reality is motivated by a
commitment to the importance of prediction to intelligence. Specifically, Craik argues that
“one of the most fundamental properties of thought is its power of predicting events,” and it
is this property which gives it its “immense adaptive and constructive significance” – an
insight he credits to “Dewey and other pragmatists” (50, my emphasis). This predictive
capacity, he notes, is “not unique to minds, though no doubt it is hard to imitate the flexi-
bility and versatility of mental prediction” (51). Given that such predictive capacities are
always subserved by models – or, as Craik puts it, “this is one way that ‘works,’ in fact
the only way with which we are familiar in the physical sciences” (53) – he reasons that
our predictive capacities are themselves subserved by internal models.
What is Craik’s evidence that prediction is so central to adaptive success? Here I think
his reasoning is effectively abductive and takes two forms. The first consists of reflecting on
why predictive capacities would be adaptively advantageous, which leads him to conclude
that thought exhibits such capacities on the grounds that it is so adaptive (61). The second
consists of analogies with modelling in engineering and the physical sciences. Craik evi-
dently thinks that our extended capacity to control our environments through an ability
to predict their behaviour with modern technology must just be an extension of how organ-
isms in general cope with and control their environments (see Section 4).
This emphasis on prediction was at odds with the behaviourism of his day (see below).
Nevertheless, as Craik himself points out (referencing Dewey, 50), it was not unique to him.
What plausibly was original to him was the use of the analogy with prediction in physical
science and engineering to explain why such predictive capacities must be underpinned by
mental models.7 The second insight that I think can be extracted from Craik, then, is this:

(Insight #2) Cognition is representational in proportion to its reliance on prediction.

3.1. From prediction to representation


How reasonable is the inference from prediction to representation? Superficially, at least,
there is something compelling about it. Non-representational frameworks in psychology
have often been associated with highly reactive accounts of intelligence. In behaviourism,
for example, an organism’s behaviour is a function of previous associations between stimuli
and responses, such that the locus of behavioural control is effectively the environment
itself (Skinner 1938). Further, non-representational movements in the late twentieth
century of the sort associated with Brooks (1999) have often advanced what Anderson
(2003, 97) calls “highly reactive models of perception, bypassing the representational
level altogether” (my emphasis).
In addition, one might bolster Craik’s argument by appeal to more recent philosophi-
cal considerations. A common idea in the literature, for example, is that “representation-
hungry” (Clark and Toribio 1994) tasks are tasks in which an intelligent system must
coordinate its behaviour with states of affairs to which its access is restricted – either
because they are spatiotemporally distant, or because the relevant phenomena do not
exist. Clearly, future states of affairs fall into this category: an intelligent system
cannot interact “directly” with such events. If a system must coordinate its behaviour
with predicted outcomes, then, it is natural to think that it must exploit some internal
proxy for the domain generating such outcomes. As Craik puts it, an internal model is
“a kind of distance-receptor in time, which enables organisms to adapt themselves to situ-
ations which are about to arise” (7).

Despite such attractions, I think that any simple inference from the role of prediction in
mental life to internal modelling is untenable.
First, it is crucial to distinguish predictions as the sorts of things that people paradigma-
tically engage in – that is, truth-evaluable judgements about future events – from merely
predictive or anticipatory behaviour (cf. Anderson and Chemero 2013). In the first case,
the idea that prediction relies on internal representation is trivial; after all, the prediction
itself just is a representation. In the current context, this point is important. I argued
above that Craik’s view is that we can approach fundamental questions concerning
mental representation independently of issues concerning content. Further, he clearly is sen-
sitive to the personal/sub-personal distinction. His aim is to generalise from the personal
case to the sub-personal mechanisms inside our heads. If this is right, it must be the
latter kind of prediction – predictive behaviour – that he has in mind as the relevant motiv-
ation for positing internal models. But – at least if such predictive behaviour is simply a
matter of tracking correlations – it is not obvious that this kind of behaviour does require
representation (Anderson and Chemero 2013).
To see this, consider the second problem for any simple inference from prediction to
representation: there is a perfectly straightforward sense in which the non-representational-
ist behaviourist approach of Craik’s day could explain predictive capacities or “knowl-
edge,” even if behaviourists would not have described it this way. In classical
conditioning, for example, a dog learns to associate the sound of a bell with the presence
of food. After a while, we might say that she has acquired predictive knowledge: upon
hearing the bell, she predicts food. Likewise with operant conditioning: your dog gives
you a cuddle when you get home and you give her a treat; she barks at the neighbour in
the morning and you slap her on the nose. After a while, she learns what outcomes –
rewards and punishments – her behaviour predicts. Given this, does the dog also acquire
a small-scale model of external reality and of her possible actions? One can imagine
trying to capture this “knowledge” in terms of a “directed graphical model” (Pearl 2009)
in which a child variable (i.e. <food>) is causally dependent on a parent variable (i.e.
<bell>). We might say that this model “recapitulates” the causal structure of the environ-
ment in a manner that can guide action: when the dog hears the bell, it prepares itself for
the intake of food.
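For concreteness, here is a minimal sketch of the two-variable graph just described, with hypothetical probabilities:

```python
# Hypothetical two-node directed graphical model: <bell> -> <food>.
# The numbers are invented; the point is how little the "model" adds.
p_bell = 0.3                                   # P(bell)
p_food_given_bell = {True: 0.9, False: 0.05}   # P(food | bell)

def p_food(bell):
    """'Prediction' of food given the bell: a bare conditional table."""
    return p_food_given_bell[bell]

print(p_food(True))  # 0.9: the dog 'expects' food upon hearing the bell
```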
Clearly, however, this way of talking is grossly misleading. It conveys no additional
explanatory advantage at all. Behaviourists were right that such forms of conditioning do
not require mental representations.8 The dog’s behaviour has become sensitive to corre-
lations between environmental stimuli and responses. End of story. Given this, perhaps
Craik’s putative insight is not an insight at all.

3.2. Prediction as counterfactual competence


This objection helps to bring out something important about Craik’s understanding of pre-
diction, which is this: he does not infer internal models from the mere ability to track cor-
relations, but from a kind of predictive capacity that I will call counterfactual competence.
An intelligent system that exhibits counterfactual competence with respect to the behaviour
of a domain does not merely learn associative relations between elements of that domain as
a function of previous experience. Rather, it acquires a capacity to predict how it would
change under a large set of possible conditions and interventions.9 That is, it models the
generative structure of that domain.10
Evidence for this interpretation can be seen both in the analogies that Craik draws upon,
and in his discussion of the kinds of predictive capacities that intelligent agents exhibit.
With respect to the former, Craik’s understanding of prediction in the public domain bears
no resemblance to the kind of “predictive knowledge” engendered through conditioning just
outlined. Specifically, Craik argues that scientific and engineering models that help us build
bridges, shoot down aircraft, or predict the sea tides do not involve learning through a reactive
series of rewards and punishments. As he puts it, building models enables “bridge design … to
proceed on a cheaper and smaller scale than if each bridge in turn were built and tried by
sending a train over it, to see whether it was sufficiently strong” (52). Specifically, successful
model-building in science and engineering does not merely track associative relationships
between previously encountered stimuli and outcomes but affords us a grip on the relevant
structure of a domain (Lake et al. 2016). A tide-predictor, for example, is no good if it
merely functions as a look-up table associating previous inputs with outputs; it must represent
the relevant structure of the world sufficiently well that it can generate outputs for a large set
of (previously unencountered) possible inputs.
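The contrast can be made vivid with a few lines of illustrative Python (the data are hypothetical): a look-up table merely replays previously encountered input-output pairs, whereas a structure that captures the generating relation also answers for inputs never encountered:

```python
# Previously encountered (input, output) pairs, e.g. from conditioning.
observed = {0.0: 0.0, 1.0: 2.0, 2.0: 4.0}

def lookup_predict(x):
    """Associative 'prediction': replays the past, fails on novelty."""
    return observed.get(x)  # None for inputs never encountered

def model_predict(x, slope=2.0):
    """Structural prediction: exploits the generating relation y = 2x."""
    return slope * x

print(lookup_predict(3.0))  # None: no counterfactual competence
print(model_predict(3.0))   # 6.0: generalises to the unencountered case
```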
Further, it is this counterfactual competence that Craik thinks is central to thought. As he
puts it, “in the particular case of our own nervous systems, the reason why I regard them as
modelling the real process is that they permit trial of alternatives” (52, my emphasis).
Specifically, our nervous systems confer upon us an ability not just to track relationships
among stimuli and responses, but to anticipate the likely effects of our interventions
upon the world (see Lake et al. 2016 for this distinction between associative pattern-rec-
ognition and modelling the world). This is clear, for example, in Craik’s overview of the
benefits that a small-scale model of external reality would confer: namely, capacities to
“try out various alternatives, conclude which is the best of them, react to future situations
before they arise, utilise the knowledge of past events in dealing with the present and future,
and … react in a much fuller, safer, and more competent manner to the emergencies which
face it” (61).
The relevant inference, then, is not from prediction as such to mental representation, but
to the kind of counterfactual competence exhibited in both the predictive capacities of
science and engineering and – Craik conjectures – our own nervous systems. Specifically,
human and animal intelligence is not exclusively reactive in character. Although in some
sense conditioned systems exploit predictive capacities, these capacities are a function of
past associations actually encountered. As Clark (2016, 254) puts it, such systems are “con-
demned to repeat the past, releasing previously reinforced actions when circumstances
dictate.” By contrast, Craik contends that thought is “constructive” (50). We have foresight
not just with respect to the actual world – how it will unfold – but across possible worlds –
how the world would unfold under different conditions and potential interventions.11 And it is
this predictive capacity – to the extent that we and other animals possess it – that mandates
internal models.
Combining the first two of Craik’s major insights, then, one gets an understanding of
how the attribution of mental representations can be conceptually coherent, consistent
with the brain-basis of cognition, and positively demanded by the kinds of capacities
that we exhibit. Before turning to why I think these insights are relevant today, however,
I turn to what I think is an important third insight – a specific approach to what metaphor
we should use to understand brain function.

4. The brain as an engineer, not a scientist


The analogies that Craik draws between science and the brain might suggest a commitment
to an extremely influential metaphor both in psychology and philosophy for understanding
brain function: the brain as a scientist (see Gregory 1980; Hohwy 2013).

The origins of this metaphor can probably be traced back at least to Helmholtz (e.g.
1867). For Helmholtz, the brain is in roughly the same position as a scientist seeking to
infer the objective hidden state of the world from observations. Specifically, the obser-
vations are the patterns of proximal stimulation the brain receives, and it must draw on
such evidence and its prior knowledge to reliably infer their most probable causes – the
hidden structure of the world responsible for such data. As such, the brain must effectively
function like a hypothesis-testing machine, formulating hypotheses about states of the
world and testing them against incoming data. This metaphor for understanding brain func-
tion has been extended throughout the twentieth and twenty-first century. It is found, for
example, both in a vision of the learning “child as a scientist” (Gopnik 1996) and in the
extremely influential emergence of work on the “Bayesian brain” (Knill and Pouget
2004; see below). As this brief overview suggests, it is really a metaphor of the brain as
an idealised impartial scientist – that is, a scientist concerned only with attaining the
truth and evaluating evidence in a maximally impartial way, and thus a scientist with no
counterpart in the actual world.
The brain as scientist metaphor has been extremely influential and fruitful in guiding
research. Despite this, it confronts several powerful critiques. The first is evidential:
people (or animals) do not seem to act like impartial scientists (Pinker 2005). The
second is more theoretical: insofar as biological agents evolve because of their felicity
for reproducing their genetic material, we should expect their behaviour to be guided pri-
marily by practical, not epistemic, functions (Tooby and Cosmides 2015). The brain is a
metabolically expensive organ. This gives us good reason to expect “narcissistic nervous
systems” – that is, nervous systems whose responsiveness to the environment is mediated
through the relevant organism’s idiosyncratic practical interests (Akins 1996; Williams
2018a).
Some have taken such considerations to weigh heavily against any vision of the model-
building brain (e.g. Bruineberg, Kiverstein, and Rietveld 2016; Rorty 1989). I think it is
possible to see Craik as making a significant contribution to this debate, however. He
shares with the brain as scientist metaphor the notion that the brain must genuinely
model the structure of the ambient environment if the organism of which it is a part is to
function effectively within it. Nevertheless, it is also clear that he does not view represen-
tation as the purpose of brains or systems within them. He recommends that we consider
“the growth of symbolising power from … [considerations] of survival-value” (60) and
argues that the brain’s models should be answerable to norms of “cheapness, speed, and
convenience” (52). Given this, for Craik, the appropriate analogy for approaching brain
function is something much closer to a metaphor of the brain as an engineer, not a scientist
– that is, a system that must model the world not to understand it, but to predict and control
it in the service of independently given practical ends.12

(Insight #3) An appropriate metaphor for understanding brain function is the engineer, not a
scientist.

This perspective reflects Craik’s background in control theory (see Gregory 1983), and it
comes across in the analogies that he draws on to support his hypothesis. For example,
the chief things he appeals to in order to illustrate the value of models are the need to
build a bridge, to shoot down enemy aircraft, and to effectively anticipate the tides. Like-
wise, he appeals to the “advances of modern technology” and the “instruments which [have]
extended the scope of our sense-organs, our brains or our limbs” as the relevant analogies
for understanding brain function (61, my emphasis). Given such pragmatic functions, we
should not expect wholly “observer-independent” or “neutrally specified” (Anderson 2014)
models. Rather, Craik emphasises that the brain’s model use is “bound somewhere to break
down … ” (53) – to fail to capture every aspect of reality. In this respect, of course, the
brain’s models are no different from any of the other models used in engineering (Giere
2004). They are good enough for the job, not perfect recapitulations of the structure of
reality.

5. Craik and the predictive mind


If I am right, Craik’s hypothesis on the nature of thought provides a framework for under-
standing mental representation founded on a commitment to structural models that facilitate
prediction and intervention. Given the intellectual climate in which he wrote, this constella-
tion of ideas was extremely prescient. Nevertheless, such ideas were also speculative and
highly schematic. Although Craik presents a compelling case for construing the brain as
a predictive modelling engine, he did not explain in any detail how the brain might config-
ure itself to function in this way, and some of the analog technologies he appealed to as
suggestions for how to understand mental representation can look extremely dated today
(Gregory 1983).
As I noted in the introduction, however, recent years have seen a remarkable emergence
of work that puts flesh on many of Craik’s schematic proposals: an efflorescence of work on
prediction as one of – if not the – fundamental functions of the neocortex (Hawkins and
Blakeslee 2004; Trappenberg 2010); an understanding of mental representations as neurally
realised generative models that recapitulate the causal structure of the environment (Kiefer
and Hohwy 2017; Lake et al. 2016; Tenenbaum et al. 2011); and broadly control-theoretic
perspectives on brain function (Seth 2015; Sterling and Laughlin 2015). Nowhere is this
contemporary embodiment of Craik’s views more visible than in predictive processing, a
recently emerging theory in cognitive and computational neuroscience that models the
brain as a “prediction machine” (Clark 2013), continually striving to align top-down pre-
dictions of its sensory inputs with the incoming sensory inputs themselves – that is, conti-
nually striving to minimise prediction error (see Clark 2016; Friston et al. 2017; Hohwy
2013; Seth 2015).
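In deliberately minimal sketch form (illustrative only; actual predictive-processing architectures are hierarchical and precision-weighted), the core loop looks something like this:

```python
# Toy prediction-error minimisation: an internal estimate mu issues
# top-down predictions of the input; the residual error revises mu.
def minimise_prediction_error(inputs, mu=0.0, learning_rate=0.1):
    for x in inputs:
        prediction = mu              # top-down prediction of the input
        error = x - prediction       # prediction error (bottom-up signal)
        mu += learning_rate * error  # revise the estimate to reduce error
    return mu

# The estimate converges toward the regularity behind the inputs (~2.0).
print(round(minimise_prediction_error([1.9, 2.1, 2.0, 1.8, 2.2] * 20), 2))
```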
It would be impossible to provide an overview of predictive processing here (for intro-
ductions, see Clark 2013; Hohwy 2013; Williams 2018b). Instead, I want to illustrate how
Craik’s vision of mental representation provides an interesting perspective on a recent
debate concerning to what extent predictive processing is a representational theory of cog-
nition – to what extent predictive brains are representational brains (Allen and Friston 2016;
Anderson 2017; Bruineberg, Kiverstein, and Rietveld 2016; Clark 2016; Dołęga 2017; Gal-
lagher and Allen 2016; Gładziejewski 2015; Hutto 2017; Kiefer and Hohwy 2017; Kirchh-
off 2017; Williams 2017).

5.1. Are predictive brains representational brains?


On one side of this debate, several theorists situate predictive processing as the cognitive
neuroscientific heir of Helmholtz’s vision of the brain as a hypothesis-testing mechanism
(Clark 2013; Hohwy 2013; Kiefer and Hohwy 2017). On this view, prediction error mini-
misation amounts to testing truth-evaluable hypotheses about states of the world against
incoming evidence. The prediction errors generated by incorrect hypotheses are then
exploited as a feedback signal to either recruit better hypotheses concerning the structure
of the distal environment or to learn a better model from which those hypotheses are
generated. Sub-personal anticipatory neuronal dynamics of a kind thought to be ubiquitous
in the animal world are thus understood as full-blown truth-conditional hypotheses.
Advocates of this perspective on predictive processing contend that it provides the only
way of capturing both the explanatory role of the generative model – the neural structure
from which top-down predictions of sensory inputs are issued – and the fact that prediction
error minimisation can be cast as (approximate) Bayesian inference (Clark 2016; Kiefer and
Hohwy 2017).
On the other side of this debate, several authors have argued that this representational
interpretation of predictive processing is at best an unnecessary overlay foisted on antici-
patory neuronal dynamics that need not be understood in terms of representational content
at all (Anderson and Chemero 2013; Bruineberg, Kiverstein, and Rietveld 2016; Hutto
2017). Advocates of this non-representationalist perspective contend that representational-
ists have no viable theory of content determination to explain how top-down predictions
acquire truth-conditional content, point out that the term “model” is used in an extremely
liberal way, and argue that the Helmholtzian perspective positively mischaracterises the
real function of prediction error minimisation. Specifically, its function is not to enable
brains to learn the objective structure of reality, but homeostatic self-organisation. As Brui-
neberg, Kiverstein, and Rietveld (2016, 14) put it, “if my brain is a scientist, it is a crooked
and fraudulent scientist.”

5.2. Craik’s contribution to the representation wars


In this debate, I think it is possible to see Craik as offering an attractive middle-path
between these two extremes.
Craik’s understanding of the mind clearly shares with representationalists a vision of
the brain as a modelling engine. Specifically, insofar as predictive brains exploit internal
structures that recapitulate the causal-statistical structure of the ambient environment to
facilitate predictive control (Gładziejewski 2015), Craik will concede that they are gen-
uinely representational – that such structures just are models. Further, the relevant kind
of prediction involved in such processes is best understood in terms of counterfactual
competence. Specifically, the generative model at the theoretical core of predictive pro-
cessing is precisely a structure capable of generating the most likely sensory inputs for
a large range of possible environmental causes. Only by capturing the generative struc-
ture of the world in this way can it effectively anticipate sensory inputs (Kiefer and
Hohwy 2017).
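In sketch form (a hypothetical linear example, not any published model): a generative model maps possible environmental causes onto expected sensory inputs, and a crude inversion of that mapping selects the cause that best predicts what is actually observed:

```python
# Toy generative model (invented parameters): expected sensory input
# for a possible environmental cause z.
def generate_input(z, gain=1.5, offset=0.2):
    return gain * z + offset

def infer_cause(observed, candidate_causes):
    """Crudely invert the model: pick the cause whose predicted input
    leaves the smallest prediction error."""
    return min(candidate_causes, key=lambda z: abs(observed - generate_input(z)))

# Predictions are defined for causes never yet encountered ...
print(generate_input(5.0))                     # 7.7
# ... and inversion selects the best-fitting cause for a given input.
print(infer_cause(3.2, [0.0, 1.0, 2.0, 3.0]))  # 2.0
```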
On the other hand, Craik’s approach to mental representation can accommodate many
of the grievances advanced by non-representationalists in this debate. First, I noted above
that Craik’s understanding of mental models does not require that they implicate truth-eva-
luable hypotheses about states of the world – that they trade in satisfaction conditions of any
kind. As such, Craik’s vision of the mind as a predictive modelling engine is consistent with
a philosophical framework that treats representational content with suspicion for the pur-
poses of cognitive theorising (e.g. Hutto and Myin 2013). Second, Craik’s approach
equally accommodates the anti-representationalist hostility to a Helmholtzian conception
of brain function. The function of predictive brains is not to discover the objective structure
of the world, but to facilitate a fundamentally pragmatic function.13
The upshot is a framework capable of capturing insights from both sides of the represen-
tation wars – one that recognises a vision of the brain as a predictive modelling engine,
whilst honouring metaphysical qualms about content and the pragmatic character of predic-
tion error minimisation. Is anything lost in this attempted resolution?

A non-representationalist might argue that it concedes too much. For example, she
might argue that once we acknowledge the fundamentally pragmatic function of predictive
brains, we should not expect the kinds of “mirroring” relations implied by structural simi-
larity. To this, I think that an advocate of Craik’s approach has two available responses (see
Williams 2017). First, the relevant “relational structure” captured by the brain’s model need
not just concern the environment, but can – in fact, must – concern the relationships that
exist between the organism and its environment. Second, we should expect at least some
degree of structural similarity between the brain’s internal models and the world insofar
as the brain’s pragmatic function relies on prediction, understood as outlined in Section
3.2. Of course, the structural similarity will be imperfect and idealised – but this does
not distinguish predictive models in the brain from models used in science and engineering
more generally (Giere 2004).
In addition, a non-representationalist might argue that the concept of internal (genera-
tive) modelling without truth-conditional hypotheses (or accuracy conditions of some kind)
is simply incoherent. As noted above, however, I strongly doubt that this is true if regarded
as a conceptual claim, and it is not a useful stipulation either: structures that guide effective
predictive control by recapitulating – that is, re-presenting – the structure of a domain share
enough similarity to the core functional profile of public models to warrant terms like
“model” or “representation.”
From the other direction, representationalists might also argue that Craik’s framework
concedes too much – albeit in the opposite direction. For example, one might argue that the
problem of content determination – the “hard problem of content” – is solvable in the case
of predictive processing (see, e.g. Kiefer and Hohwy 2017). From the position that I am
defending here, however, this is not an objection. If we can explain how generative
models acquire truth-conditional hypotheses, that would be fascinating. What Craik’s theor-
etical approach to mental representation shows us, however, is that we can still view pre-
dictive processing in representational terms without solving the “hard problem of
content” (Hutto and Myin 2013) – that there is an important sense in which the generative
model functions as a representation independently of such questions.
Perhaps a more difficult objection would be that the interpretation I am offering here is
in tension with the Bayesian description of predictive processing. Bayesian inference, after
all, is inference, and thus presumably must be contentful.
This objection is too quick, however. First, there is no reason why the generative
model cannot capture statistical regularities in addition to causal structure, and subject
these internally re-presented causal-statistical regularities to different kinds of use.
Second, like all mathematical frameworks or computational procedures, Bayes’
theorem can be viewed in purely formal terms in a way that abstracts away from
content (Egan 2013). Thus content is strictly speaking not a necessary component of
a Bayesian interpretation. Bayes’ theorem can equally well specify both the nature of
belief-updating in human beings and the way in which parameters are updated within
a control system (Anderson 2017). Finally, one must distinguish between us modelling
a system’s performance in terms of Bayesian inference and its actual neuronal dynamics
implementing Bayesian inference. For example, Friston’s (2010) free-energy principle
allows us to model a thermostat’s regulative behaviour in terms of (approximate) Baye-
sian inference. But clearly, this framework does not help us understand either how to
build a thermostat or how thermostats work. In other words, it need not imply that mechanisms
within the thermostat are Bayesian.
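For reference, the purely formal schema at issue is just Bayes’ theorem, which by itself does not dictate whether h ranges over contentful hypotheses or over the parameter settings of a contentless control system:

\[ P(h \mid e) = \frac{P(e \mid h)\, P(h)}{P(e)} \]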
Of course, these remarks are far from conclusive. Much more work is required to show
that Craik’s approach to the nature of mental representation can genuinely accommodate
what is attractive in both representationalist and non-representationalist treatments of pre-
dictive processing. Nevertheless, I hope that I have done enough to show that this is work
worth undertaking – that there is conceptual space for a vision of the mind as a contentless
predictive modelling engine, and that this vision deserves a seat in current debates sur-
rounding the representational status of the predictive mind.

6. Conclusion
Craik’s hypothesis on the nature of thought contains invaluable but little-known insights for
how to understand the nature of mental representation. These insights give rise to a vision of
the mind as a predictive modelling engine – a vision in which mental representations func-
tion as structural models that facilitate prediction in the service of fundamentally pragmatic
ends. I hope that I have done enough in this paper to illustrate both the prescience and con-
temporary relevance of this vision.

Acknowledgments
The author would like to thank Richard Holton and members of the Cognition & Philos-
ophy Laboratory at Monash University for helpful discussion, especially Jakob Hohwy,
Andrew Corcoran, Stephen Gadsby, Julian Matthews, and Kelsey Palghat. He would
also like to thank two anonymous reviewers for helpful comments and suggestions on an
earlier version of this manuscript.

Disclosure statement
No potential conflict of interest was reported by the author.

Funding
This work was supported by the Arts and Humanities Research Council.

Notes
1. All references that only contain page numbers are to Craik (1943).
2. See Godfrey-Smith (2006a) and Ramsey (2007) for recent expressions of the view that our
understanding of mental representation is – or should be – modelled on our understanding of
the functional role of public representations.
3. Importantly, the models that I focus on here are representational models (sometimes called
models of phenomena) – namely, models that represent the structure of target domains (Giere
2004). These include things like maps, scale models, and the mathematical models used in
science (Giere 2004; Godfrey-Smith 2006b). These contrast both with models of data, and
with models understood as set-theoretical structures that provide an interpretation of a set of
uninterpreted axioms, which is the sense used in the “semantic” view of scientific theories
(see Roman and Hartmann 2017). The assumption that representational models capitalize on
structural similarity is widespread in the literature (Giere 2004; Godfrey-Smith 2006b;
Ramsey 2007; Ryder 2004). I thank an anonymous reviewer for suggesting that I clarify
these points.
4. Ramsey (2007; 2016) has defended the view that there is an important question concerning what
it means to function as a representation that is conceptually independent of questions concerning
representational content. Further, he also defends the view that cognitive representations can be
understood as functional analogues of public models. What distinguishes Ramsey from Craik on
my reading, however, is that Ramsey still thinks that representational content is a necessary
feature of cognitive representation (Ramsey 2016; although see 6 for some wavering on this
issue). For Craik (on my reading), by contrast, mental modelling occurs when an organism
exploits the structural similarity between its neural states and the world to guide prediction
and intervention. It is not necessary that such models also be answerable to how things are.
5. In this respect, Craik’s understanding of models resembles the account of models advanced by
control theorists and cyberneticists writing at the time of, or shortly after, his work. For example, Conant and
Ashby (1970, 89) famously claim that “the living brain, so far as it is to be successful and effi-
cient as a regulator for survival, must proceed … by the formation of a model (or models) of its
environment,” and they understand models as physical structures whose isomorphism to a
domain is exploited in controlling that domain (see Ashby 1952; 1956). There are two important
differences, however: first, whereas Conant and Ashby seem to require isomorphism, Craik
requires a weaker and more graded notion of structural similarity (see Giere 2004 for the dis-
tinction; and Section 4 below); second, Craik’s focus is squarely on prediction and foresight,
not regulation (see Section 3 below, as well as footnote 7).
6. This is one way to interpret Clark’s (2016, 293) view that predictive brains should be understood
as representational brains, although his focus is more squarely on content.
Although Craik’s focus on control (and background in control theory) makes him an important
precursor to the later emphasis among cyberneticists on the link between regulation and model-
building (see Ashby 1952; 1956; Gregory 1983), Craik’s emphasis on prediction as the chief
function of thought does distinguish his account from the focus on regulation common to the
cybernetics tradition. The difference is one of degree, as cyberneticists did focus on predictive
control, but it is real nonetheless. Specifically, the cybernetic emphasis on regulation applies to all biological systems
(and certain non-biological systems like thermostats), whereas Craik’s focus on prediction
and foresight applies much more narrowly to the competences of complex animals. Unfortu-
nately, space constraints prohibit me from addressing this topic at greater length.
8. A similar point is made in the context of discussions of “model-free learning” (Dayan 2012).
9. Lake et al. (2016), Sloman (2005), and Tenenbaum et al. (2011) all make similar points, albeit
using different vocabulary.
10. Kiefer and Hohwy (2017) appeal to the counterfactual structure of generative models within
predictive processing to provide a possible world semantics for them. As I note below
(Section 5), the account I outline here is consistent with such interesting proposals; the point
is just that we can understand the core functional profile of mental models without entering
the domain of naturalistic psychosemantics at all.
11. An anonymous reviewer helpfully points out the difference between what they call “counterfac-
tual competence” and “decoupled counterfactual reasoning.” Anti-aircraft guns able to shoot at
aircraft across a wide range of circumstances provide an example of the former, whereas
systems that can act on the basis of the hypothetical consequences of their actions provide an
instance of the latter. Nevertheless, I subsume both strategies as instances of counterfactual com-
petence because both ultimately rely on the predictive capacities outlined here. What dis-
tinguishes them is that the latter acts on predictions about the outcomes of an open-ended set
of its own possible actions, whereas the former acts directly on predictions about world
states themselves (e.g. moving aircraft). However, the distinction does usefully reveal that a
system’s counterfactual competence is a matter of degree along two independent dimensions:
first, the breadth of potential inputs over which its predictive capacities range; second, the com-
plexity of operations it can implement over its predictive outputs.
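The contrast can itself be sketched in code. In the following Python toy (every function name, forward model, and utility is hypothetical, introduced purely for illustration), the first agent acts directly on a predicted world state, whereas the second selects among its own possible actions by simulating their hypothetical outcomes:

def predict_state(position, velocity, lead_time=1.0):
    # Linear extrapolation of a world state (e.g. a moving aircraft).
    return position + velocity * lead_time

def tracking_agent(position, velocity):
    # Counterfactual competence: act directly on a predicted world state.
    return predict_state(position, velocity)  # the aim point

def planning_agent(state, actions, simulate, utility):
    # Decoupled counterfactual reasoning: evaluate the hypothetical
    # consequence of each available action, then act on the best one.
    return max(actions, key=lambda action: utility(simulate(state, action)))

# Toy usage: a trivial forward model and a utility preferring states near 1.0.
best_action = planning_agent(
    state=0.0,
    actions=[-1.0, 0.0, 1.0],
    simulate=lambda s, a: s + a,
    utility=lambda s: -abs(s - 1.0),
)

Both agents trade on prediction; they differ only in whether the predictions range over world states directly or over the simulated consequences of the agent’s own possible actions.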
12. An anonymous reviewer complains that this metaphor is unhelpful: “I see how it pragmatizes epis-
temics, but do not see how it integrates purposes.” Importantly, however, the point of the metaphor
of engineering is that the utilisation of models and other tools occurs in the service of indepen-
dently given practical ends (see Blockley 2012). In the case of organisms, these ends ultimately
derive from evolutionarily endowed goals. Craik’s insight is that the goals of some organisms –
but not all – require the kinds of predictive capacities underpinned by internal models. Goals are
thus exogenous to the brain’s “engineer,” just as they are for actual engineers.
13. An anonymous reviewer writes that this pragmatic character of prediction error minimization
“does not mean that the system does not approximate truth” (see Dołęga 2017). Insofar as
this means that successful prediction error minimization requires an internal model that genu-
inely recapitulates the structure of the ambient environment, Craik would agree. Structural simi-
larity is not isomorphism (i.e. perfect similarity), however, and the pragmatic perspective here
suggests that how much similarity obtains will be determined by the organism’s idiosyncratic
practical ends.
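One way to make the graded notion precise (the following formalization is my own illustration, not Craik’s or Giere’s) is to treat the target domain as a structure W = (D_W, R_W) of entities and relations, and to score a mapping f from W into a model M by the proportion of target relations it preserves:

\[
\mathrm{sim}_f(M, W) \;=\; \frac{\lvert \{\, r \in R_W : f \text{ preserves } r \,\} \rvert}{\lvert R_W \rvert}
\]

Isomorphism is then the limiting case in which f is a bijection and \mathrm{sim}_f(M, W) = 1; a pragmatically adequate model may score far lower, provided the relations it does preserve are the ones the organism’s practical ends require.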

Notes on contributor
Daniel Williams is a doctoral student in the Philosophy Faculty at the University of Cambridge.

ORCID
Daniel Williams http://orcid.org/0000-0002-9774-2910

References
Akins, K. 1996. “Of Sensory Systems and the ‘Aboutness’ of Mental States.” Journal of Philosophy
93 (7): 337–372. doi:10.2307/2941125.
Allen, M., and K. Friston. 2016. “From Cognitivism to Autopoiesis: Towards a Computational
Framework for the Embodied Mind.” Synthese. doi:10.1007/s11229-016-1288-5.
Anderson, M. 2003. “Embodied Cognition: A Field Guide.” Artificial Intelligence 149 (1): 91–130.
doi:10.1016/s0004-3702(03)00054-7.
Anderson, M. 2014. After Phrenology. Cambridge, MA: MIT Press.
Anderson, M. 2017. “Of Bayes and Bullets: An Embodied, Situated, Targeting-based Account of
Predictive Processing.” In Philosophy and Predictive Processing: 4, edited by T. Metzinger
and W. Wiese. Frankfurt am Main: MIND Group. doi:10.15502/9783958573055.
Anderson, M., and T. Chemero. 2013. “The Problem with Brain GUTs: Conflation of Different Senses
of ‘Prediction’ Threatens Metaphysical Disaster.” Behavioral and Brain Sciences 36 (03): 204–
205. doi:10.1017/s0140525x1200221x.
Aristotle. 1984. De Anima and De Sensu. Vol. 1 of The Complete Works of Aristotle: The Revised Oxford
Translation (2 Vols), edited by J. Barnes. Princeton, NJ: Princeton University Press.
Ashby, W. R. 1952. Design for a Brain. London: Chapman and Hall.
Ashby, W. R. 1956. An Introduction to Cybernetics. London: Chapman and Hall.
Barr, M. 2011. Predictions in the Brain. Oxford: Oxford University Press.
Barrett, L. F. 2017. “The Theory of Constructed Emotion: An Active Inference Account of
Interoception and Categorization.” Social Cognitive and Affective Neuroscience. doi:10.1093/
scan/nsw154.
Bechtel, W. 2008. Mental Mechanisms. Hoboken, NJ: Taylor and Francis.
Blockley, D. 2012. Engineering: A Very Short Introduction. Oxford: Oxford University Press.
Brooks, R. 1999. Cambrian Intelligence. Cambridge, MA: MIT Press.
Bruineberg, J., J. Kiverstein, and E. Rietveld. 2016. “The Anticipating Brain is not a Scientist: The
Free-energy Principle from an Ecological-enactive Perspective.” Synthese. doi:10.1007/s11229-
016-1239-1.
Bubic, A., D. Y. von Cramon, and R. I. Schubotz. 2010. “Prediction, Cognition and the Brain.”
Frontiers in Human Neuroscience. doi:10.3389/fnhum.2010.00025.
Chemero, A. 2009. Radical Embodied Cognitive Science. Cambridge, MA: MIT Press.
Clark, A. 2013. “Whatever Next? Predictive Brains, Situated Agents, and the Future of Cognitive
Science.” Behavioral and Brain Sciences 36 (03): 181–204. doi:10.1017/s0140525x12000477.
Clark, A. 2015. “Predicting Peace: The End of the Representation Wars – A Reply to Michael
Madary.” In Open MIND: 7(R), edited by T. Metzinger and J. M. Windt. Frankfurt am Main:
MIND Group. doi:10.15502/9783958570979.
Clark, A. 2016. Surfing Uncertainty. Oxford: Oxford University Press.
Clark, A., and J. Toribio. 1994. “Doing Without Representing?” Synthese 101 (3): 401–431. doi:10.
1007/bf01063896.
Conant, R., and W. R. Ashby. 1970. “Every Good Regulator of a System Must be a Model of that
System.” International Journal of Systems Science 1 (2): 89–97. doi:10.1080/
00207727008920220.
Craik, K. 1943. The Nature of Explanation. Cambridge: Cambridge University Press.
Cummins, R. 1989. Meaning and Mental Representation. Cambridge, MA: MIT Press.
Cummins, R. 1996. Representations, Targets, and Attitudes. Cambridge, MA: MIT Press.
Dayan, P. 2012. “How to Set the Switches on This Thing.” Current Opinion in Neurobiology 22:
1068–1074.
Dennett, D. 1978. Brainstorms. Cambridge, MA: MIT Press.
Dołęga, K. 2017. “Moderate Predictive Processing.” In Philosophy and Predictive Processing: 10,
edited by T. Metzinger and W. Wiese. Frankfurt am Main: MIND Group. doi:10.15502/
9783958573116.
Egan, F. 2013. “How to Think About Mental Content.” Philosophical Studies 170 (1): 115–135.
doi:10.1007/s11098-013-0172-0.
Fodor, J. 1987. Psychosemantics: The Problem of Meaning in the Philosophy of Mind. Cambridge,
MA: MIT Press.
Friston, K. 2010. “The Free-energy Principle: A Unified Brain Theory?” Nature Reviews
Neuroscience 11 (2): 127–138. doi:10.1038/nrn2787.
Friston, K., T. FitzGerald, F. Rigoli, P. Schwartenbeck, and G. Pezzulo. 2017. “Active Inference: A
Process Theory.” Neural Computation 29 (1): 1–49. doi:10.1162/neco_a_00912.
Gallagher, S., and M. Allen. 2016. “Active Inference, Enactivism and the Hermeneutics of Social
Cognition.” Synthese. doi:10.1007/s11229-016-1269-8.
Giere, R. 2004. “How Models Are Used to Represent Reality.” Philosophy of Science 71 (5): 742–
752. doi:10.1086/425063.
Gładziejewski, P. 2015. “Predictive Coding and Representationalism.” Synthese 193 (2): 559–582.
doi:10.1007/s11229-015-0762-9.
Godfrey-Smith, P. 2006a. “Mental Representation, Naturalism, and Teleosemantics.” In
Teleosemantics, edited by G. MacDonald and D. Papineau, 42–68. Oxford: Oxford University
Press.
Godfrey-Smith, P. 2006b. “The Strategy of Model-based Science.” Biology & Philosophy 21 (5):
725–740. doi:10.1007/s10539-006-9054-6.
Godfrey-Smith, P. forthcoming. “Dewey and Anti-representationalism.” In Oxford Handbook of
Dewey, edited by S. Fesmire. Oxford: Oxford University Press.
Gopnik, A. 1996. “The Scientist as Child.” Philosophy of Science 63 (4): 485–514. doi:10.1086/
289970.
Gregory, R. 1980. “Perceptions as Hypotheses.” Philosophical Transactions of the Royal Society B:
Biological Sciences 290 (1038): 181–197. doi:10.1098/rstb.1980.0090.
Gregory, R. 1983. “Forty Years On: Kenneth Craik’s The Nature of Explanation (1943).” Perception
12 (3): 233–237. doi:10.1068/p120233.
Grush, R. 2004. “The Emulation Theory of Representation: Motor Control, Imagery, and Perception.”
Behavioral and Brain Sciences 27 (03). doi:10.1017/s0140525x04000093.
Hawkins, J., and S. Blakeslee. 2004. On Intelligence. New York: Henry Holt and Company.
Helmholtz, H. von. 1867. Handbuch der Physiologischen Optik. Leipzig: Voss.
Hohwy, J. 2013. The Predictive Mind. Oxford: Oxford University Press.
Horst, S. 2016. Cognitive Pluralism. Cambridge, MA: MIT Press.
Hutto, D. 2017. “Getting Into Predictive Processing’s Great Guessing Game: Bootstrap Heaven or
Hell?” Synthese. doi:10.1007/s11229-017-1385-0.
Hutto, D., and E. Myin. 2013. Radicalizing Enactivism: Basic Minds Without Content. Cambridge, MA:
MIT Press.
Isaac, A. 2013. “Objective Similarity and Mental Representation.” Australasian Journal of
Philosophy 91 (4): 683–704. doi:10.1080/00048402.2012.728233.
Kiefer, A., and J. Hohwy. 2017. “Content and Misrepresentation in Hierarchical Generative Models.”
Synthese, doi:10.1007/s11229-017-1435-7.
Kirchhoff, M. 2017. “Predictive Processing, Perceiving and Imagining: Is to Perceive to Imagine, or
Something Close to it?” Philosophical Studies. doi:10.1007/s11098-017-0891-8.
Knill, D., and A. Pouget. 2004. “The Bayesian Brain: the Role of Uncertainty in Neural
Coding and Computation.” Trends in Neurosciences 27 (12): 712–719. doi:10.1016/j.tins.
2004.10.007.
Lake, B., T. Ullman, J. Tenenbaum, and S. Gershman. 2016. “Building Machines That Learn and
Think Like People.” Behavioral and Brain Sciences 40. doi:10.1017/s0140525x16001837.
Llinás, R. 2001. I of the Vortex. Cambridge, MA: MIT Press.
Pearl, J. 2009. Causality. Cambridge: Cambridge University Press.
Pinker, S. 2005. “So How Does the Mind Work?” Mind and Language 20 (1): 1–24. doi:10.1111/j.
0268-1064.2005.00274.x.
Ramsey, W. 2007. Representation Reconsidered. Cambridge: Cambridge University Press.
Ramsey, W. 2016. “Untangling Two Questions About Mental Representation.” New Ideas in
Psychology 40: 3–12. doi:10.1016/j.newideapsych.2015.01.004.
Rescorla, M. 2016. “Bayesian Sensorimotor Psychology.” Mind & Language 31 (1): 3–36. doi:10.
1111/mila.12093.
Frigg, R., and S. Hartmann. 2017. “Models in Science.” In The Stanford Encyclopedia of Philosophy
(Spring 2017 Ed.), edited by Edward N. Zalta. https://plato.stanford.edu/archives/spr2017/
entries/models-science/.
Rorty, R. 1989. Contingency, Irony, and Solidarity. Cambridge: Cambridge University Press.
Ryder, D. 2004. “SINBAD Neurosemantics: A Theory of Mental Representation.” Mind and
Language 19 (2): 211–240. doi:10.1111/j.1468-0017.2004.00255.x.
Ryle, G. 1949. The Concept of Mind. Chicago: University of Chicago Press.
Seth, A. K. 2015. “The Cybernetic Bayesian Brain – From Interoceptive Inference to Sensorimotor
Contingencies.” In Open MIND: 35(T), edited by T. Metzinger and J. M. Windt. Frankfurt am
Main: MIND Group. doi:10.15502/9783958570108.
Shea, N. 2014. “VI – Exploitable Isomorphism and Structural Representations.” Proceedings of the
Aristotelian Society 114 (2pt2): 123–144. doi:10.1111/j.1467-9264.2014.00367.x.
Skinner, B. 1938. The Behavior of Organisms. Acton, MA: Copley Publishing Group.
Skinner, B. 1971. Beyond Freedom and Dignity. Del Mar, CA: Communications Research Machines.
Sloman, S. 2005. Causal Models. New York: Oxford University Press.
Sterling, P., and S. Laughlin. 2015. Principles of Neural Design. Cambridge, MA: MIT Press.
Tenenbaum, J., C. Kemp, T. Griffiths, and N. Goodman. 2011. “How to Grow a Mind: Statistics,
Structure, and Abstraction.” Science 331 (6022): 1279–1285. doi:10.1126/science.1192788.
Tooby, J., and L. Cosmides. 2015. “The Theoretical Foundations of Evolutionary Psychology.” In The
Handbook of Evolutionary Psychology, Second Edition. Volume 1: Foundations, edited by D. M.
Buss, 3–87. Hoboken, NJ: John Wiley & Sons.
Trappenberg, T. 2010. Fundamentals of Computational Neuroscience. Oxford: Oxford University
Press.
Von Eckardt, B. 2012. “The Representational Theory of Mind.” In The Cambridge Handbook of
Cognitive Science, edited by K. Frankish and W. Ramsey, 1st ed., 29–50. Cambridge:
Cambridge University Press.
Waskan, J. 2006. Models and Cognition. Cambridge, MA: MIT Press.
Watson, J. 1913. “Psychology as the Behaviorist Views it.” Psychological Review 20 (2): 158–177.
doi:10.1037/h0074428.
Williams, D. 2017. “Predictive Processing and the Representation Wars.” Minds and Machines 28 (1):
141–172. doi:10.1007/s11023-017-9441-6.
Williams, D. 2018a. “Pragmatism and the Predictive Mind.” Phenomenology and the Cognitive
Sciences, doi:10.1007/s11097-017-9556-5.
Williams, D. 2018b. “Predictive Coding and Thought.” Synthese. doi:10.1007/s11229-018-1768-x.
Williams, D., and L. Colling. 2017. “From Symbols to Icons: the Return of Resemblance in the
Cognitive Neuroscience Revolution.” Synthese. doi:10.1007/s11229-017-1578-6.
Wilson, M. 2002. “Six Views of Embodied Cognition.” Psychonomic Bulletin and Review 9 (4): 625–
636. doi:10.3758/bf03196322.
Wittgenstein, L. 1953. Philosophical Investigations. Oxford: Basil Blackwell.
